
A Text Book for S.Y.B.Sc./S.Y.B.A. Mathematics (2013 Pattern),
Savitribai Phule Pune University, Pune.
PAPER-I: MT-221
LINEAR ALGEBRA
Panel of Authors
Dr. K. C. Takale (Convenor)
RNC Arts, JDB Commerce and
NSC Science College, Nashik-Road.
B. B. Divate
H.P.T. Arts and R.Y.K. Science
College, Nashik.
K. S. Borase
RNC Arts, JDB Commerce and
NSC Science College, Nashik-Road.
Editors
Dr. P. M. Avhad
Dr. S. A. Katre
Conceptualized by the Board of Studies (BOS) in Mathematics, Savitribai Phule Pune
University, Pune.
Preface
This textbook is an initiative of the BOS in Mathematics, Savitribai Phule Pune University, Pune. It is based on a course in linear algebra and is written as per the revised syllabus of S.Y.B.Sc. Mathematics, framed by Savitribai Phule Pune University and its Board of Studies in Mathematics and implemented from June 2014.
Linear algebra is one of the most useful subjects in all branches of mathematics, and it is
used extensively in applied mathematics and engineering.
In Chapter 1, we present the definition of a vector space, examples, linear dependence, basis and dimension, vector subspaces, the necessary and sufficient condition for
a subspace, vector space as a direct sum of subspaces, null space, range space, rank,
nullity, Sylvester's Inequality and examples.
The second chapter deals with the concepts of inner product, norm as the length of a
vector, distance between two vectors, orthonormal basis, orthonormal projection,
the Gram-Schmidt process of orthogonalization, and examples.
In the third chapter, we study linear transformations: examples, properties of linear
transformations, equality of linear transformations, kernel and rank of linear transformations, composite transformations, inverse of a linear transformation, matrix
of a linear transformation, change of basis, and similar matrices.
We welcome any opinions and suggestions that will improve future editions
and help readers.
In case of queries/suggestions, send an email to: kalyanraotakale@rediffmail.com
-Authors
Acknowledgment
We sincerely thank the following University authorities (Savitribai Phule Pune University, Pune) for their constant motivation and valuable guidance in the preparation
of this book.
• Dr. W. N. Gade, Honorable Vice Chancellor, Savitribai Phule Pune University,
Pune.
• Dr. V. B. Gaikwad, Director BCUD, Savitribai Phule Pune University, Pune.
• Dr. K. C. Mohite, Dean, Faculty of Science, Savitribai Phule Pune University,
Pune.
• Dr. B. N. Waphare, Professor, Department of Mathematics, Savitribai Phule
Pune University, Pune.
• Dr. M. M. Shikare, Professor, Department of Mathematics, Savitribai Phule
Pune University, Pune.
• Dr. V. S. Kharat, Professor, Department of Mathematics, Savitribai Phule
Pune University, Pune.
• Dr. V. V. Joshi, Professor, Department of Mathematics, Savitribai Phule Pune
University, Pune.
• Mr. Dattatraya Kute, Senate Member, Savitribai Phule Pune University; Manager, Savitribai Phule Pune University, Pune Press.
• All the staff of Savitribai Phule Pune University, Pune Press.
Syllabus: PAPER-I: MT-221: LINEAR ALGEBRA
1. Vector Spaces: [16]
1.1 Definition, examples,
1.2 Vector subspace, Necessary and Sufficient condition for subspace.
1.3 Linear dependence and independence.
1.4 Basis and Dimension.
1.5 Vector space as a direct sum of subspaces.
1.6 Null space, Range space, Rank, Nullity, Sylvester’s Inequality.
2. Inner Product Spaces: [16]
2.1 Definitions, examples and properties.
2.2 Norm as length of a vector, Distance between two vectors.
2.3 Orthonormal basis.
2.4 Orthonormal projection, Gram-Schmidt process of orthogonalization.
3. Linear Transformations: [16]
3.1 Definitions and examples.
3.2 Properties of linear transformations.
3.3 Equality of linear transformations.
3.4 Kernel and Rank of linear transformations.
3.5 Composite transformations.
3.6 Inverse of a linear transformation.
3.7 Matrix of a linear transformation, Change of basis, Similar matrices.
Text book: Prepared by the BOS Mathematics, Savitribai Phule Pune University,
Pune.
Recommended Book: Matrix and Linear Algebra aided with MATLAB, Kanti
Bhushan Datta, PHI Learning Pvt. Ltd., New Delhi (2009).
Sections: 5.1, 5.2, 5.3, 5.4, 5.5, 5.7, 6.1, 6.2, 6.3, 6.4.
Reference Books:
1. Howard Anton and Chris Rorres, Elementary Linear Algebra, John Wiley and
Sons, Inc.
2. K. Hoffmann and R. Kunze, Linear Algebra, Second Ed., Prentice Hall of India,
New Delhi (1998).
3. S. Lang, Introduction to Linear Algebra, Second Ed., Springer-Verlag, New
York.
4. A. Ramchandra Rao and P. Bhimasankaran, Linear Algebra, Tata McGraw
Hill, New Delhi (1994).
5. G. Strang, Linear Algebra and its Applications, Third Ed., Harcourt Brace
Jovanovich, Orlando (1988).
Contents

1 VECTOR SPACES
1.1 Introduction
1.2 Definitions and Examples of a Vector Space
1.3 Subspace
1.4 Linear Dependence and Independence
1.5 Basis and Dimension
1.6 Vector Space as a Direct Sum of Subspaces
1.7 Null Space and Range Space

2 INNER PRODUCT SPACES
2.1 Introduction
2.2 Definitions, Properties and Examples
2.3 Length (Norm), Distance in Inner Product Space
2.4 Angle and Orthogonality in Inner Product Space
2.5 Orthonormal Basis and Orthonormal Projection
2.6 Gram-Schmidt Process of Orthogonalization

3 LINEAR TRANSFORMATION
3.1 Introduction
3.2 Definition and Examples of Linear Transformation
3.3 Properties of Linear Transformation
3.4 Equality of Two Linear Transformations
3.5 Kernel and Rank of Linear Transformation
3.6 Composite Linear Transformation
3.7 Inverse of Linear Transformation
3.8 Matrix of Linear Transformation
3.8.1 Matrix of the sum of two linear transformations and a scalar multiple of a linear transformation
3.8.2 Matrix of composite linear transformation
3.8.3 Matrix of inverse transformation
3.9 Change of Basis and Similar Matrices
Chapter 1
VECTOR SPACES
1.1 Introduction
Vectors were first used about 1636 in 2D and 3D to describe geometrical operations
by Rene Descartes and Pierre de Fermat. In 1857 the notation of vectors and
matrices was unified by Arthur Cayley. Giuseppe Peano was the first to give
the modern definition of vector space in 1888, and Henri Lebesgue (about 1900)
applied this theory to describe functional spaces as vector spaces.
Linear algebra is the study of a certain algebraic structure called a vector space. A
good argument could be made that linear algebra is the most useful subject in all of
mathematics and that it exceeds even courses like calculus in its significance. It is
used extensively in applied mathematics and engineering. It is also fundamental in
pure mathematics areas like number theory, functional analysis, geometric measure
theory, and differential geometry. Even calculus cannot be correctly understood
without it. For example, the derivative of a function of many variables is an example of a linear transformation, and this is the way it must be understood as soon as
you consider functions of more than one variable. It is difficult to think of a mathematical tool with more applications than vector spaces. Thanks to the contributions
of linear algebra, we may sum forces, control devices, model complex systems,
denoise images, and much more. Vector spaces underlie all these processes, and it is thanks to them that
we can operate with vectors in a clean, systematic way. They are the mathematical structures that
generalize many other useful structures.
1.2 Definitions and Examples of a Vector Space
Definition 1.1. Field: A nonempty set F with two binary operations + (addition)
and . (multiplication) is called a field if it satisfies the following axioms for any a,
b, c ∈ F:
1. a + b ∈ F (F is closed under +)
2. a + b = b + a ( + is commutative operation on F )
3. a + (b + c) = (a + b) + c ( + is associative operation on F )
4. Existence of zero element (additive identity): There exists 0 ∈ F such that
a + 0 = a.
5. Existence of negative element: For given a ∈ F , there exists −a ∈ F such that
a + (−a) = 0.
6. a.b ∈ F (F is closed under .)
7. a.b = b.a ( . is commutative operation on F )
8. (a.b).c = a . (b . c) ( . is associative operation on F )
9. Existence of Unity: There exists 1 ∈ F such that a.1 = a, ∀ a ∈ F .
10. Existence of inverse element: For any nonzero a ∈ F, there exists a⁻¹ = 1/a ∈ F such that a.(1/a) = 1.
11. Distributive Law: a.(b + c) = (a.b) + (a.c).
Example 1.1. (i) Set of rational numbers Q, set of real numbers R, set of complex
numbers C are all fields with respect to usual addition and usual multiplication.
(ii) For a prime number p, (Zp, +p, ×p) is a field with respect to addition modulo p
and multiplication modulo p.
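The arithmetic of Zp is easy to experiment with numerically. The following is a minimal MATLAB sketch (the sample values p = 5 and a = 3 are assumptions chosen for illustration) showing that a nonzero element of Z5 has additive and multiplicative inverses:
MATLAB:
p = 5; a = 3;
inv_a = find(mod(a*(1:p-1), p) == 1)   % multiplicative inverse: 2, since 3*2 = 6 = 1 (mod 5)
mod(a + (p - a), p)                    % additive inverse is p - a: result 0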
Definition 1.2. Vector Space: A nonempty set of vectors V with vector addition
(+) and scalar multiplication (.) is called a vector space over a field F if it
satisfies the following axioms for any u, v, w ∈ V and α, β ∈ F:
1. C1: u + v ∈ V (V is closed under +)
2. C2: α.u ∈ V (V is closed under .)
3. A1: u + v = v + u ( + is commutative operation on V )
4. A2: (u + v) + w = u + (v + w) ( + is associative operation on V )
5. A3: Existence of zero vector: There exists a zero vector 0̄ ∈ V such that
u + 0̄ = u.
6. A4: Existence of negative vector: For given vector u ∈ V , there exists the vector
−u ∈ V such that u + (−u) = 0̄.
(This vector −u is called as negative vector of u in V .)
7. M1: α(u + v) = α.u + α.v
8. M2: (α + β).v = α.v + β.v
9. M3: (αβ).v = α.(β.v)
10. M4: 1.v = v
Remark 1.1. (i) The field of scalars is usually R or C and the vector space will
be called real or complex depending on whether the field is R or C. However,
other fields are also possible. For example, one could use the field of rational
numbers or even the field of the integers mod p for a prime p.
A vector space is also called a linear space.
(ii) A vector is not necessarily of the form [v1, . . . , vn]^T, i.e., a column of n numbers.
(iii) A vector (v1 , v2 , · · · , vn ) (in its most general form) is an element of a vector
space.
Example 1.2. Let V = R2 = {u = (u1 , u2 )/u1 , u2 ∈ R}.
For u = (u1 , u2 ) , v = (v1 , v2 ) ∈ V and α ∈ R,
u + v = (u1 + v1 , u2 + v2 ) and α.u = (αu1 , αu2 ).
Show that V is a real vector space with respect to defined operations.
Solution: Let u = (u1 , u2 ) , v = (v1 , v2 ) and w = (w1 , w2 ) ∈ V and α, β ∈ R, then
C1 : u + v = (u1 + v1 , u2 + v2 ) ∈ V
(∵ ui + vi ∈ R, ∀ ui , vi ∈ R)
C2 : α.u = (αu1 , αu2 ) ∈ V
(∵ αui ∈ R, ∀ ui , α ∈ R)
A1 : Commutativity:
u + v = (u1 + v1, u2 + v2)        (by definition of + on V)
      = (v1 + u1, v2 + u2)        (+ is a commutative operation on R)
      = v + u                     (by definition of + on V).
Thus, A1 holds.
A2 : Associativity:
(u + v) + w = (u1 + v1, u2 + v2) + (w1, w2)        (by definition of + on V)
            = ((u1 + v1) + w1, (u2 + v2) + w2)     (by definition of + on V)
            = (u1 + (v1 + w1), u2 + (v2 + w2))     (+ is associative on R)
            = (u1, u2) + (v1 + w1, v2 + w2)        (by definition of + on V)
            = u + (v + w)                          (by definition of + on V).
Thus, A2 holds.
A3 : Existence of zero vector: For 0 ∈ R there exists 0̄ = (0, 0) ∈ R2 = V such
that
u + 0̄ = (u1 , u2 ) + (0, 0) = (u1 + 0, u2 + 0) = (u1 , u2 ) = u.
Therefore, 0̄ = (0, 0) ∈ R2 is zero vector.
A4 : Existence of negative vector: For u = (u1 , u2 ) ∈ R2 there exists
−u = (−u1 , −u2 ) ∈ R2 such that u + (−u) = 0̄.
Therefore, each vector in V has negative vector and hence A4 holds.
M1 :
α.(u + v) = α.(u1 + v1, u2 + v2)       (by definition of + on V)
          = (α(u1 + v1), α(u2 + v2))   (by definition of . on V)
          = (αu1 + αv1, αu2 + αv2)     (by distributive law in F = R)
          = (αu1, αu2) + (αv1, αv2)    (by definition of + on V)
          = α(u1, u2) + α(v1, v2)      (by definition of . on V)
          = α.u + α.v
Thus, M1 holds.
M2 :
(α + β).u = (α + β).(u1, u2)           (by definition of . on V)
          = ((α + β)u1, (α + β)u2)
          = (αu1 + βu1, αu2 + βu2)     (by distributive law in F = R)
          = (αu1, αu2) + (βu1, βu2)    (by definition of + on V)
          = α.(u1, u2) + β.(u1, u2)    (by definition of . on V)
          = α.u + β.u.
Thus, M2 holds.
M3 :
(αβ).u = (αβ).(u1, u2)                 (by definition of . on V)
       = ((αβ)u1, (αβ)u2)
       = (α(βu1), α(βu2))              (by associative law in F = R)
       = α.(βu1, βu2)                  (by definition of . on V)
       = α.(β.u)
Thus, M3 holds.
M4 : For 1 ∈ R,
1.u = (1.u1 , 1.u2 ) = (u1 , u2 ) = u.
Therefore, M4 holds.
Therefore, all vector space axioms are satisfied.
∴ V = R2 is a real vector space with respect to defined operations.
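Since the recommended book pairs the theory with MATLAB, it is instructive to verify these axioms numerically as well. The following is a minimal sketch (the random vectors u, v, w and scalars a, b are assumed sample data); each printed value should be 0 up to round-off:
MATLAB:
u = rand(1,2); v = rand(1,2); w = rand(1,2);
a = rand; b = rand;                       % scalars alpha and beta
norm((u + v) - (v + u))                   % A1: commutativity
norm(((u + v) + w) - (u + (v + w)))       % A2: associativity
norm(a*(u + v) - (a*u + a*v))             % M1
norm((a + b)*u - (a*u + b*u))             % M2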
Example 1.3. Let V = Rn = {u = (u1 , u2 , · · · , un )/ui ∈ R}. For u, v ∈ Rn and
α ∈ R, the vector operations are defined as follows
u + v = (u1 + v1 , u2 + v2 , · · · , un + vn )
α.u = (αu1 , αu2 , · · · , αun )
then V is a real vector space. This vector space is called the Euclidean vector
space.
(Figure: geometrical representation of the real vector space R3.)
Example 1.4. V = The set of all real-valued functions and F = R.
⇒ a vector is a function f(t).
Example 1.5. Force fields in Physics:
Consider V to be the set of all arrows (directed line segments) in 3D. Two arrows
are regarded as equal if they have the same length and direction. Define the sum of
arrows and the multiplication by a scalar geometrically, as in the figure below.
(Figure: the parallelogram rule.)
Example 1.6. The graphical representation of the commutative and associative properties of vector addition:
u + v = v + u
(u + v) + w = u + (v + w)
Example 1.7. Polynomials of degree ≤ n (Pn):
Let Pn be the set of all polynomials of degree ≤ n and let u(x) = u0 + u1x + u2x2 + ... + unxn.
Define the sum of two vectors and the multiplication by a scalar as
(u + v)(x) = u(x) + v(x)
= (u0 + v0 ) + (u1 + v1 )x + (u2 + v2 )x2 + ... + (un + vn )xn
and (αu)(x) = αu0 + αu1 x + αu2 x2 + ... + αun xn
(Figure: the Legendre polynomials.)
Example 1.8. V = The set of all 2 × 2 matrices and F = R.
⇒ vector = (v11 v12; v21 v22)
Example 1.9. Let V = R or C; then V is a real vector space with respect to usual
addition and usual scalar multiplication.
Example 1.10. Let V = Q, R or C; then V is a vector space over the field Q.
Example 1.11. Let V = C, then V is a complex vector space.
Example 1.12. Let V = {x ∈ R/x > 0} = R+ . If for x, y ∈ V and α ∈ R, the
vector operations are defined as
x + y = xy and α.x = xα
then show that V is a real vector space.
Solution: Let x, y, z ∈ V = R+ and α, β ∈ R.
C1: Closure under addition:
x + y = xy ∈ R+
C2: Closure under scalar multiplication:
α.x = x^α ∈ R+
A1: Commutative law for addition
x + y = xy
= yx
=y+x
A2: Associative law for addition
x + (y + z) = x(yz)
= (xy)z
= (x + y) + z
A3: Existence of zero vector
Suppose for x ∈ V there is y such that
x+y =x
xy = x
∴ y = 1 as x ̸= 0
∴ y = 1 ∈ R+ is such that x + 1 = x, ∀ x ∈ R+
∴ 1 ∈ R+ is zero vector.
A4: Existence of negative vector
Suppose for x ∈ V there is y such that
x+y =1
xy = 1
∴ y = 1/x ∈ R+ as x > 0
∴ For given x ∈ R+ there exists 1/x ∈ R+ such that x + (1/x) = 1.
M1:
α.(x + y) = (xy)^α
          = x^α y^α
          = α.x + α.y
M2:
(α + β).x = x^(α+β)
          = x^α x^β
          = α.x + β.x
M3:
α.(β.x) = (x^β)^α
        = x^(βα)
        = x^(αβ)
        = (αβ).x
M4:
1.x = x^1 = x
Thus V = R+ satisfies all the axioms of a vector space over R with respect to the defined
operations. Therefore, V = R+ is a real vector space.
Note:
(i) The set V = {x ∈ R/x ≥ 0} is not a vector space with respect to above defined
operations because 0 has no negative element.
(ii) V = R is not a vector space over R with respect to the above defined operations,
because C2 fails for x < 0, α = 1/2.
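The unusual operations of Example 1.12 can also be checked numerically. In the sketch below (the sample values x = 2.5, y = 4 and scalars a = 3, b = −1.5 are assumptions), "vector addition" is x.*y and "scalar multiplication" is x.^a; each printed value should be 0:
MATLAB:
x = 2.5; y = 4.0; a = 3.0; b = -1.5;
add = @(x,y) x.*y;                               % vector addition on R+
smul = @(a,x) x.^a;                              % scalar multiplication on R+
add(x, 1) - x                                    % zero vector is 1
add(x, 1/x) - 1                                  % negative vector of x is 1/x
smul(a, add(x,y)) - add(smul(a,x), smul(a,y))    % M1
smul(a+b, x) - add(smul(a,x), smul(b,x))         % M2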
Example 1.13. Let V = Mm×n (R) = Set of all m × n matrices with real entries.
Then V is a real vector space with respect to usual addition of matrices and usual
scalar multiplication.
Example 1.14. Let V = Pn = {a0 + a1x + a2x2 + · · · + anxn / ai ∈ R} = the set
of all polynomials of degree ≤ n. Then V is a real vector space with respect to usual
addition of polynomials and usual scalar multiplication.
Example 1.15. Let X ⊆ R and V = F[X] = {f / f : X → R is a function}. Then
V is a real vector space with respect to usual addition of real valued functions and
usual scalar multiplication.
Example 1.16. Let V = {0̄}. For 0̄ ∈ V and α ∈ F , we define 0̄ + 0̄ = 0̄ and
α.0̄ = 0̄. Then V is a vector space over field F.
Note that the vector space V is called the trivial vector space over field F .
Theorem 1.1. Let V be a vector space over a field F. Then for any u ∈ V and α ∈ F:
(i) 0.u = 0̄, for 0 ∈ F .
(ii) α.0̄ = 0̄
(iii) (−1).u = −u and
(iv) If α.u = 0̄ then α = 0 or u = 0̄.
Proof. (i)
0.u = (0 + 0).u                        (by property of 0 ∈ F)
    = 0.u + 0.u                        (by property M2)
0.u + (−0.u) = (0.u + 0.u) + (−0.u)    (adding (−0.u) on both sides)
∴ 0̄ = 0.u + [0.u + (−0.u)]             (by property A4 and A2)
    = 0.u + 0̄                          (by property A4)
    = 0.u                              (by property A3)
∴ 0.u = 0̄ is proved.
(ii)
α.0̄ = α.(0̄ + 0̄)                        (by property A3)
    = α.0̄ + α.0̄                        (by property M1)
α.0̄ + (−α.0̄) = (α.0̄ + α.0̄) + (−α.0̄)    (adding (−α.0̄) on both sides)
∴ 0̄ = α.0̄ + [α.0̄ + (−α.0̄)]             (by property A4 and A2)
    = α.0̄ + 0̄                          (by property A4)
    = α.0̄                              (by property A3)
∴ α.0̄ = 0̄ is proved.
(iii) To show that (−1).u = −u:
Consider (−1).u + u = (−1).u + 1.u     (by property M4)
                    = (−1 + 1).u       (by property M2)
                    = 0.u = 0̄          (by part (i))
∴ (−1).u is a negative vector of u.
Hence, (−1).u = −u                     (by uniqueness of the negative vector).
(iv) To show that α.u = 0̄ ⇒ α = 0 or u = 0̄:
If α = 0 then α.u = 0.u = 0̄.
If α ̸= 0 then
(1/α).(α.u) = (1/α).0̄
((1/α).α).u = 0̄
1.u = 0̄
∴ u = 0̄
Hence, the proof is completed.
Illustrations
Example 1.17. Let V = R3 , define operations + and . as
(x, y, z) + (x′ , y ′ , z ′ ) = (x + x′ , y + y ′ , z + z ′ )
α.(x, y, z) = (αx, y, z)
Is V a real vector space?
Solution: Clearly, C1, C2, A1, A2, A3 and A4 hold.
M1: Let u = (x, y, z) and v = (x′ , y ′ , z ′ ) ∈ V
Then, u + v = (x + x′ , y + y ′ , z + z ′ )
α.(u + v) = α.(x + x′ , y + y ′ , z + z ′ )
= (α(x + x′ ), y + y ′ , z + z ′ )
= (αx + αx′ , y + y ′ , z + z ′ )
= (αx, y, z) + (αx′ , y ′ , z ′ )
= α.(x, y, z) + α.(x′ , y ′ , z ′ )
= α.u + α.v
M2: Let α, β ∈ R and u = (x, y, z) ∈ R3. Then
(α + β).u = (α + β).(x, y, z)
          = ((α + β)x, y, z)
while
α.u + β.u = α.(x, y, z) + β.(x, y, z)
          = (αx, y, z) + (βx, y, z)
          = (αx + βx, 2y, 2z)
From this, we have
(α + β).u ̸= α.u + β.u in general.
M3: For α, β ∈ R and u = (x, y, z) ∈ R3,
α.(β.u) = α.(βx, y, z)
= (αβx, y, z)
= αβ.(x, y, z)
= (αβ).u
M4: For u = (x, y, z) ∈ R3,
1.u = (1.x, y, z)
= (x, y, z)
=u
∴ 1.u = u
Thus V = R3 satisfies all the axioms of a vector space over R except M2 with respect
to the defined operations.
Therefore, V = R3 is not a real vector space.
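A single numerical counterexample is enough to exhibit the failure of M2 in Example 1.17. A minimal sketch (the sample vector u = (1, 2, 3) and the scalars 2 and 3 are assumptions):
MATLAB:
smul = @(a,u) [a*u(1), u(2), u(3)];   % the scalar multiplication of Example 1.17
u = [1 2 3];
smul(2+3, u)                          % (5, 2, 3)
smul(2,u) + smul(3,u)                 % (5, 4, 6), so (alpha+beta).u differs from alpha.u + beta.u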
Example 1.18. Let V = R3 , for u = (x, y, z), v = (x′ , y ′ , z ′ ) and α ∈ R, we define
+ and . as follows
u + v = (x + x′ , y + y ′ , z + z ′ )
α.u = (0, 0, 0)
Then 1.u ̸= u, for u ̸= 0̄
∴ M4 is not satisfied.
∴ V = R3 is not a vector space.
Example 1.19. Let V = R3 , for u = (x, y, z), v = (x′ , y ′ , z ′ ) and α ∈ R, we define
+ and . as follows
u + v = (x + x′ , y + y ′ , z + z ′ )
α.u = (2αx, 2αy, 2αz)
Then α(β.u) ̸= (αβ).u, for u ̸= 0̄
and 1.u ̸= u for u ̸= 0̄
∴ V = R3 is not a vector space.
Example 1.20. The set V = {x ∈ R/x ≥ 0} is not a vector space with respect to usual
addition and scalar multiplication, because the existence of negative vector axiom fails.
Example 1.21. Let V = R2 , and for u = (x, y), v = (x′ , y ′ ) and α ∈ R, we define
+ and . as follows
u + v = (x + x′ + 1, y + y ′ + 1)
α.(x, y) = (αx, αy)
Is V a real vector space?
Solution: Clearly, C1, C2, A1 and A2 hold.
A3: Let
u + v = u
⇒ v = (−1, −1) = 0̄
i.e. u + 0̄ = u, where 0̄ = (−1, −1)
∴ A3 holds.
A4: Let
u + v = 0̄
⇒ x + x′ + 1 = −1 and y + y′ + 1 = −1
∴ x′ = −x − 2, y′ = −y − 2
∴ v = (−x − 2, −y − 2) = −u is the negative vector of u ∈ R2
i.e. u + v = 0̄, where v = (−x − 2, −y − 2)
∴ A4 holds.
M1: Let
α.(u + v) = α.(x + x′ + 1, y + y′ + 1)
          = (α(x + x′ + 1), α(y + y′ + 1))
          = (αx + αx′ + α, αy + αy′ + α)        (1.1)
Also, α.u + α.v = (αx + αx′ + 1, αy + αy′ + 1)  (1.2)
From (1.1) and (1.2), we get
α.(u + v) ̸= α.u + α.v in general for α ̸= 1.
∴ M1 fails.
M2: Similarly,
(α + β).u ̸= α.u + β.u
∴ M2 fails.
Therefore, V = R2 is not a vector space.
Example 1.22. The set V = { A = (a 1; 1 b) / a, b ∈ R } is not a vector space with
respect to usual addition and usual scalar multiplication, as C1 and C2 fail.
Example 1.23. The set V = { A = (a a+b; a+b b) / a, b ∈ R } is a vector space with
respect to usual addition and usual scalar multiplication.
Example 1.24. Let V = {(1, x)/x ∈ R}, and for u = (1, x), v = (1, y) ∈ V and
α ∈ R, we define + and . as follows
u + v = (1, x + y)
α.u = (1, αx)
Show that V is a real vector space.
Solution: Clearly, C1, C2, A1 and A2 hold.
A3: Let
u+v =u
(1, x + y) = (1, x)
⇒x+y =x
⇒y=0
∴ v = (1, 0) ∈ V is the zero vector.
∴ A3 holds.
A4: Let
u + v = (1, 0)
⇒ (1, x + y) = (1, 0)
∴ x+y =0
⇒ y = −x
∴ v = (1, −x) = −u is the negative vector of u ∈ V.
∴ A4 holds.
M1: Let
α.(u + v) = α.(1, x + y) = (1, αx + αy)
α.u + α.v = (1, αx) + (1, αy) = (1, αx + αy)
∴ α.(u + v) = α.u + α.v
∴ M1 holds.
M2:
(α + β).u = (1, (α + β)x) = (1, αx + βx)
= (1, αx) + (1, βx)
= α(1, x) + β(1, x)
= α.u + β.u
∴ M2 holds.
M3:
α.(β.u) = α(1, βx) = (1, αβx)
= (αβ)(1, x)
= (αβ).u
∴ M3 holds.
M4:
1.u = (1, 1.x) = (1, x)
=u
∴ M4 holds.
Therefore, V is a real vector space.
Example 1.25. Let V = R2 , and for u = (x, y), v = (x′ , y ′ ) ∈ R2 and α ∈ R, we
define + and . as follows
u + v = (xx′ , yy ′ )
α.u = (αx, αy)
Show that V = R2 is not a real vector space.
Solution: Clearly, C1, C2, A1 and A2 hold.
A3: Let
u+v =u
(xx′ , yy ′ ) = (x, y)
⇒ xx′ = x and yy ′ = y
⇒ x′ = 1, y ′ = 1
∴ 0̄ = (1, 1) ∈ V = R2 is the zero element.
∴ A3 holds.
A4: Let
u + v = (1, 1)
⇒ (xx′, yy′) = (1, 1)
∴ x′ = 1/x, y′ = 1/y (x ̸= 0, y ̸= 0)
∴ (0, 0) has no negative vector.
∴ A4 does not hold.
M1: Let
α.(u + v) = α.(xx′ , yy ′ ) = (αxx′ , αyy ′ )
and α.u + α.v = (αx, αy) + (αx′ , αy ′ ) = (α2 xx′ , α2 yy ′ )
∴ α.(u + v) ̸= α.u + α.v
∴ M1 does not hold.
M2:
(α + β).u = ((α + β)x, (α + β)y)
and α.u + β.u = (αx, αy) + (βx, βy)
= (αβx2 , αβy 2 )
∴ (α + β).u ̸= α.u + β.u
∴ M2 fails.
M3:
α.(β.u) = α.(βx, βy) = (αβx, αβy)
and (αβ).u = (αβx, αβy)
∴ α.(β.u) = (αβ).u
∴ M3 holds.
M4:
1.u = (1.x, 1.y) = (x, y)
=u
∴ M4 holds.
Therefore, V is not a real vector space.
Example 1.26. The set V = { A = (a b; c d) / A⁻¹ exists, i.e. |A| ̸= 0 } is not a vector
space with respect to usual addition and usual scalar multiplication, as C1, A1, A3
and A4 fail.
Example 1.27. Let V = { A = (a 1; 1 b) / a, b ∈ R } and for A1 = (a1 1; 1 b1),
A2 = (a2 1; 1 b2) we define
A1 + A2 = (a1 + a2, 1; 1, b1 + b2) and α.A1 = (αa1, 1; 1, αb1).
Then show that V is a real vector space.
Example 1.28. Let V = R2 , and for u = (x, y), v = (x′ , y ′ ) ∈ R2 and α ∈ R, we
define
u + v = (x + x′ , y + y ′ )
α.u = (αx, 0)
Then only M4 fails; therefore, V = R2 is not a real vector space. Such a space is called a
weak vector space.
Example 1.29. Show that the set of all points of R2 lying on a line is a vector space with
respect to the standard operations of vector addition and scalar multiplication exactly
when the line passes through the origin.
Solution: Let W = {(x, y)/y = mx}. Then W represents the line through the
origin with slope m; that is, the line through the origin is the set
Wm = {(x, mx)/x ∈ R}.
Then Wm is a vector space.
Exercise: 1.1
(i) Determine which of the following sets are vector spaces under the given operations.
For those that are not, list all axioms that fail to hold.
(a) Set R3 with the operations + and . defined as follows
(x, y, z) + (x′ , y ′ , z ′ ) = (x + x′ , y + y ′ , z + z ′ ), α.(x, y, z) = (αx, y, z).
(b) Set R3 with the operations + and . defined as follows
(x, y, z) + (x′ , y ′ , z ′ ) = (x + x′ , y + y ′ , z + z ′ ), α.(x, y, z) = (0, αy, αz)
(c) Set R3 with the operations + and . defined as follows
(x, y, z) + (x′ , y ′ , z ′ ) = (|x + x′ |, y + y ′ , z + z ′ ), α.(x, y, z) = (αx, αy, αz)
(d) Set R2 with the operations + and . defined as follows
(x, y) + (x′ , y ′ ) = (x + x′ + 2, y + y ′ ), α.(x, y) = (αx, αy).
(e) Set R2 with the operations + and . defined as follows
(x, y) + (x′ , y ′ ) = (x + 2x′ , 3y + y ′ ), α.(x, y) = (0, αy).
(f) Set R2 with the operations + and . defined as follows
(x, y) + (x′ , y ′ ) = (|xx′ |, yy ′ ), α.(x, y) = (αx, αy).
(g) Set V = {Sun} with the operations + and . defined as follows
Sun + Sun = Sun, α.Sun = Sun
(h) Set V = {(x, x, x, ..., x)/x ∈ R} ⊆ Rn with the standard operations on Rn
(ii) Let Ω be a nonempty set and let V consist of all functions defined on Ω which
have values in some field F . The vector operations are defined as follows
(f + g)(x) = f (x) + g(x)
(α.f )(x) = αf (x)
for any f, g ∈ V and any scalar α ∈ F. Then verify that V with these
operations is a vector space.
(iii) Using the axioms of a vector space V , prove the following for u, v, w ∈ V :
(a) u + w = v + w implies u = v.
(b) αu = βu and u ̸= 0̄ implies α = β, for any α, β ∈ R.
(c) αu = αv and α ̸= 0 implies u = v, for any α ∈ R.
1.3 Subspace
Definition 1.3. A nonempty subset W of a vector space V is said to be a subspace
of V if W itself is a vector space with respect to operations defined on V .
Remark 1.2. Any subset which does not contain the zero vector cannot be a subspace,
because it won't be a vector space.
Necessary and Sufficient condition for subspace:
Theorem 1.2. A non-empty subset W of a vector space V is a subspace of V if and
only if W satisfies the following:
C1 : w1 + w2 ∈ W, ∀ w1, w2 ∈ W
C2 : kw ∈ W, ∀ w ∈ W and k ∈ F.
Proof. (i) Necessary Condition: If W is a subspace of a vector space V then W
itself is a vector space with respect to the operations defined on V.
∴ W satisfies C1 and C2, as required.
(ii) Sufficient Condition: Conversely, suppose W is a non-empty subset of the vector
space V satisfying C1 and C2. Then
(a) For 0 ∈ F, w ∈ W,
0.w = 0̄ ∈ W.
∴ A3 is satisfied.
(b) For w ∈ W, −1 ∈ F,
(−1).w = −w ∈ W.
∴ A4 is satisfied.
Now the commutative, associative and distributive laws are inherited from the superset
V to the subset W.
∴ A1, A2, M1, M2, M3, M4 hold in W.
Thus, W satisfies all axioms of a vector space.
∴ W is a subspace of V.
Theorem 1.3. A non-empty subset W of a vector space V is a subspace of V if and
only if αw1 + βw2 ∈ W ∀α, β ∈ F , w1 , w2 ∈ W .
Proof. Suppose W is subspace of a vector space V then W itself is a vector space
with respect to operations defined on V .
∴ by C2 , for α, β ∈ F , w1 , w2 ∈ W
∴ αw1 , βw2 ∈ W
Therefore, by C1
αw1 + βw2 ∈ W ∀α, β ∈ F, w1 , w2 ∈ W
Conversely, suppose W is a non-empty subset of vector space V such that
αw1 + βw2 ∈ W, ∀α, β ∈ F, w1 , w2 ∈ W
Then
(i) For α = β = 1, we get
αw1 + βw2 = 1.w1 + 1.w2
= w1 + w2 ∈ W
∴ C1 hold.
(ii) For α ∈ F , β = 0 ∈ F
αw1 + βw2 = α.w1 + 0.w2
= α.w1 + 0̄
= α.w1 ∈ W
∴ α.w1 ∈ W
∴ C2 hold.
For α = β = 0,
αw1 + βw2 = 0.w1 + 0.w2
= 0̄ + 0̄
= 0̄ ∈ W
∴ 0̄ ∈ W
∴ A3 is satisfied.
For α = −1 ∈ F , β = 0 ∈ F
αw1 + βw2 = −1.w1 + 0.w2
= −w1 + 0̄
= −w1 ∈ W
∴ − w1 ∈ W, ∀w1 ∈ W
∴ A4 is satisfied.
Since the commutative, associative and distributive laws are inherited from the superset V
to the subset W,
A1, A2, M1, M2, M3, M4 hold in W.
Thus, W satisfies all axioms of a vector space.
∴ W is a subspace of V.
Definition 1.4. Linear Combination:
Let V be a vector space and S = {v1, v2, · · · , vn} ⊆ V. Then for α1, α2, · · · , αn ∈ F,
w = α1v1 + α2v2 + · · · + αnvn
is called a linear combination of the vectors v1, v2, · · · , vn.
The set L(S) = { ∑_{i=1}^{n} αi vi / αi ∈ F } is called the linear span of the set S.
Example 1.30. In R3 if S = {e1 , e2 , e3 } where
e1 = (1, 0, 0), e2 = (0, 1, 0), e3 = (0, 0, 1)
Then (x, y, z) = xe1 + ye2 + ze3 , ∀(x, y, z) ∈ R3 .
Therefore, L(S) = R3 .
Example 1.31. The polynomials 1, x, x2 , · · · , xn span Pn since every polynomial in
Pn is written as a0 + a1 x + a2 x2 + · · · + an xn .
Theorem 1.4. If S is a nonempty subset of a vector space V, then L(S) is the smallest
subspace of V containing S.
Proof. Let S = {v1, v2, · · · , vn} ⊆ V; then L(S) = { ∑_{i=1}^{n} αi vi / αi ∈ F } ⊆ V.
For each αi = 0, ∑_{i=1}^{n} αi vi = 0̄ ∈ L(S).
∴ L(S) ̸= ϕ      (1.3)
Moreover, for i ∈ {1, 2, · · · , n}, if αi = 1 and αj = 0 for j ̸= i, then we have
∑_{j=1}^{n} αj vj = vi ∈ L(S), ∀ i.
∴ S ⊆ L(S).      (1.4)
Let w1, w2 ∈ L(S); then
w1 = ∑_{i=1}^{n} αi vi,  w2 = ∑_{i=1}^{n} βi vi   for some αi's, βi's ∈ F.
∴ w1 + w2 = ∑_{i=1}^{n} (αi + βi)vi
∴ w1 + w2 ∈ L(S), as αi + βi ∈ F, ∀ i.      (1.5)
Moreover, for k ∈ F and w = ∑_{i=1}^{n} αi vi ∈ L(S),
k.w = k ∑_{i=1}^{n} αi vi = ∑_{i=1}^{n} kαi vi = ∑_{i=1}^{n} αi′ vi, where αi′ = kαi ∈ F
∴ kw ∈ L(S).      (1.6)
From equations (1.3), (1.4), (1.5) and (1.6), L(S) is a subspace of V containing S.
Now, suppose W is another subspace of V containing S.
Then for v1, v2, · · · , vn ∈ S ⊂ W, ∑_{i=1}^{n} αi vi ∈ W for any αi's ∈ F.
∴ L(S) ⊂ W
Therefore, L(S) is the smallest subspace of V containing S.
Hence, the proof is completed.
Example 1.32. Subspaces of R2
1. W = {0̄} is a subspace.
2. W = R2
3. Lines through origin.
Example 1.33. Subspaces of R3
1. {0̄}
2. W = R3
3. Lines through origin.
4. Any plane in 3D passing through the origin is a subspace of R3.
Any plane in 3D not passing through the origin is not a subspace of R3, because 0̄
does not belong to the plane.
Example 1.34. W = R2 is not a subspace of V = R3 . However, the set
W = {(x, y, 0)/x ∈ R and y ∈ R}
is a subspace of R3; it acts like R2.
Example 1.35. Subspaces of Mnn (R)
1. {0̄}
2. Mnn (R)
3. Set of all upper triangular matrices.
4. Set of all lower triangular matrices.
5. Set of all diagonal matrices.
6. Set of all scalar matrices.
7. Set of all symmetric matrices.
Example 1.36. If AX = 0̄ is a homogeneous system of m linear equations in n
unknowns then show that set of all solutions of the system is a subspace of Rn .
Solution: Let W = {X ∈ Rn /AX = 0̄}. Then for 0̄ ∈ Rn , A0̄ = 0̄.
∴ 0̄ ∈ W and W ̸= ϕ
(1.7)
Let X1, X2 ∈ W and α, β ∈ R; then
A(αX1 + βX2 ) = αAX1 + βAX2
= α0̄ + β 0̄
= 0̄
∴ αX1 + βX2 ∈ W, ∀ α, β ∈ R, X1 , X2 ∈ W
(1.8)
Therefore, from (1.7) and (1.8) and sufficient condition for subspace, W is a subspace
of Rn .
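The closure argument of Example 1.36 can be illustrated numerically. The sketch below (the 2 × 3 matrix A and the scalars 2, −3 are assumed sample data) draws two random solutions of AX = 0̄ from a basis of the null space and checks that their linear combination is again a solution:
MATLAB:
A = [1 2 3; 4 5 6];          % an assumed sample matrix
N = null(A);                 % columns form a basis of the null space of A
X1 = N*rand(size(N,2),1);    % two random solutions of A*X = 0
X2 = N*rand(size(N,2),1);
norm(A*(2*X1 - 3*X2))        % expect 0 up to round-off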
Exercise: 1.2
(i) Determine which of the following are subspaces of R3.
(a) Set of all vectors of the form (x, 0, 0)
(b) Set of all vectors of the form (x, 1, 2)
(c) Set of all vectors of the form (x, y, z), where y = x + z
(d) Set of all vectors of the form (x, y, z), where y = x + z + 2
(ii) Determine which of the following are subspaces of M22(R).
(a) Set of all 2 × 2 matrices with rational entries
(b) W = { A = (a b; c d) ∈ M22(R) / a + 2b − c + d = 0 }
(c) Wd = { A = (a b; c d) ∈ M22(R) / |A| = 0 }.
(iii) Determine which of the following are subspaces of P3.
(a) W = {p(x) = a0 + a1 x + a2 x2 + a3 x3 ∈ P3 /a0 = 0}
(b) W = {p(x) = a0 + a1 x + a2 x2 + a3 x3 ∈ P3 /a0 + a1 + 2a2 + 3a3 = 0}
(c) W = {p(x) = a0 + a1 x + a2 x2 + a3 x3 ∈ P3 /a0 is an integer}
(d) W = {p(x) = a0 + a1 x ∈ P3 /a0 , a1 ∈ R}
(iv) Determine which of the following are subspaces of F(−∞, ∞).
(a) W = {f ∈ F (−∞, ∞)/f (x) ≤ 0}
(b) W = {f ∈ F (−∞, ∞)/f (0) = 0}
(c) W = {f ∈ F (−∞, ∞)/f (0) = 4}
(d) W = {f (x) = a0 + a1 x ∈ F (−∞, ∞)/a0 , a1 ∈ R}.
(v) Determine which of the following are subspaces of Mnn(R).
(a) W = {A ∈ Mnn (R)/tr.(A) = 0}
(b) W = {A ∈ Mnn (R)/At = A}
(c) W = {A ∈ Mnn (R)/At = −A}
(d) W = {A ∈ Mnn (R)/AX = 0 has only trivial solution}.
1.4 Linear Dependence and Independence
Definition 1.5. Linearly independent and linearly dependent set
A non-empty set S = {v1, v2, · · · , vn} of a vector space V is said to be linearly
independent if
∑_{i=1}^{n} αi vi = 0̄ ⇒ αi = 0, ∀ i = 1, 2, · · · , n.
If there exist some nonzero values among α1, α2, · · · , αn such that ∑_{i=1}^{n} αi vi = 0̄,
then the set S is said to be a linearly dependent set in the vector space V.
Example 1.37. Show that the set of vectors S = {(1, 2, 0), (0, 3, 1), (−1, 0, 1)} is
linearly independent in the Euclidean space R3.
Solution: For linear independence, we consider
a(1, 2, 0) + b(0, 3, 1) + c(−1, 0, 1) = 0̄
⇒a−c=0
2a + 3b = 0
b+c=0
Solving these equations, we get
a = 0, b = 0, c = 0.
Therefore, the set of vectors S = {(1, 2, 0), (0, 3, 1), (−1, 0, 1)} is a linearly independent
set in R3.
Example 1.38. Show that the set of vectors S = {(1, 3, 2), (1, −7, −8), (2, 1, −1)}
is a linearly dependent set in R3 .
Solution: For linear dependence, we consider
a(1, 3, 2) + b(1, −7, −8) + c(2, 1, −1) = 0̄
⇒ a + b + 2c = 0
3a − 7b + c = 0
2a − 8b − c = 0
This homogeneous system has a nonzero solution as follows
a = 3, b = 1, c = −2.
Therefore, the set of vectors S = {(1, 3, 2), (1, −7, −8), (2, 1, −1)} is a linearly dependent
set in R3.
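The dependence found in Example 1.38 can be confirmed with rank and null: placing the vectors as columns of a matrix M, the set is dependent exactly when rank(M) is less than the number of vectors, and null(M, 'r') recovers the coefficients. A minimal sketch:
MATLAB:
M = [1 3 2; 1 -7 -8; 2 1 -1]';   % the vectors of S as columns
rank(M)                          % 2 < 3, so S is linearly dependent
null(M, 'r')                     % rational basis, proportional to (3, 1, -2)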
Example 1.39. Find t for which u = (cos t, sin t), v = (−sin t, cos t) forms a
linearly independent set in R2.
Solution: For linear independence, we consider
αu + βv = 0̄
α(cos t, sin t) + β(−sin t, cos t) = 0̄
⇒ α cos t − β sin t = 0
α sin t + β cos t = 0
The determinant of the coefficient matrix of this homogeneous system is
|cos t  −sin t|
|sin t   cos t| = cos²t + sin²t = 1 ̸= 0, ∀ t ∈ R.
The homogeneous system has only the trivial solution, i.e. α = 0 and β = 0.
Therefore, the given vectors are linearly independent for every scalar t.
Example 1.40. Find t for which u = (cos t, sin t), v = (sin t, cos t) form a linearly
independent set in R2.
Solution: For linear independence, we consider
αu + βv = 0̄
α(cos t, sin t) + β(sin t, cos t) = 0̄
⇒ α cos t + β sin t = 0
α sin t + β cos t = 0
The determinant of the coefficient matrix of this homogeneous system is
|cos t  sin t|
|sin t  cos t| = cos²t − sin²t = cos 2t ̸= 0, ∀ t ∈ R − { (2n − 1)π/4 / n ∈ Z }.
Therefore, the given vectors are linearly independent for any scalar
t ∈ R − { (2n − 1)π/4 / n ∈ Z }.
Definition 1.6. Linear dependence and independence of functions
If f1(x), f2(x), · · · , fn(x) are (n − 1) times differentiable functions on the interval
(−∞, ∞), then the determinant
       | f1(x)          f2(x)          · · ·   fn(x)        |
W(x) = | f1′(x)         f2′(x)         · · ·   fn′(x)       |
       | ...                                                |
       | f1^(n−1)(x)    f2^(n−1)(x)    · · ·   fn^(n−1)(x)  |
is called the Wronskian of f1, f2, · · · , fn.
The set of these functions is linearly independent if and only if W(x) ̸= 0 for
some x ∈ (−∞, ∞).
Example 1.41. Show that f1 = 1, f2 = ex , f3 = e2x form a linearly independent
set of vectors.
Solution: Consider the Wronskian
       |f1   f2   f3 |   |1   e^x    e^(2x) |
W(x) = |f1′  f2′  f3′| = |0   e^x   2e^(2x) | = 4e^(3x) − 2e^(3x) = 2e^(3x) ̸= 0
       |f1′′ f2′′ f3′′|  |0   e^x   4e^(2x) |
∴ {f1, f2, f3} forms a linearly independent set.
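The Wronskian of Example 1.41 can also be evaluated numerically at sample points; it suffices to find one x with W(x) ̸= 0. A minimal sketch (the sample points 0 and 1 are assumptions):
MATLAB:
W = @(x) det([1, exp(x),   exp(2*x);
              0, exp(x), 2*exp(2*x);
              0, exp(x), 4*exp(2*x)]);
W(0)              % 2 = 2*e^0, nonzero, so {1, e^x, e^(2x)} is independent
W(1) - 2*exp(3)   % expect 0 up to round-off, agreeing with W(x) = 2e^(3x)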
Example 1.42. In the vector space of continuous functions, consider the vectors
f1 (x) = sin(x)cos(x) and f2 (x) = sin(2x). The set {f1 (x), f2 (x)} is linearly dependent because f2 (x) = 2f1 (x).
MATLAB:
x = -pi:0.001:pi;
f1 = sin(x).*cos(x);   % element-wise product
f2 = sin(2*x);
plot(x, f1, x, f2)     % the curves satisfy f2 = 2*f1, confirming dependence
Example 1.43. In the vector space C(R) of continuous functions over R, the
vectors f1(x) = sin x and f2(x) = cos x are linearly independent because
f2(x) ̸= c f1(x) for any real c.
1.5 Basis and Dimension
Definition 1.7. Basis
A non-empty subset B of a vector space V is said to be a basis of V if
(i) L(B) = V
(ii) B is a linearly independent set.
Note: A vector space V is an n-dimensional vector space if the number of vectors in a
basis B is n.
Example 1.44. Show that B = {e1, e2, · · · , en} is a basis for the Euclidean n-dimensional
space Rn, where e1 = (1, 0, · · · , 0), e2 = (0, 1, · · · , 0), · · · , en = (0, 0, · · · , 1).
Solution: (i) To show that B is linearly independent.
For this, we have
a1 e1 + a2 e2 + · · · + an en = 0̄
⇒ (a1 , a2 , · · · , an ) = (0, 0, · · · , 0)
∴ a1 = a2 = · · · = an = 0
Therefore, B is linearly independent.
(ii) Any u = (u1, u2, · · · , un) ∈ Rn can be written as
u = u1 e1 + u2 e2 + · · · + un en
∴ L(B) = Rn .
From (i) and (ii), we prove B = {e1 , e2 , · · · , en } is a basis for Rn .
This basis is called the natural or standard basis for Rn.
Example 1.45. Let C be a real vector space. Show that B = {2 + 3i, 5 − 6i} is a
basis for C.
Solution: (i) To show that B is linearly independent.
For this, we have
a(2 + 3i) + b(5 − 6i) = 0̄ = 0 + i0
⇒ 2a + 5b = 0
3a − 6b = 0
Solving this system of equations, we get
a = b = 0.
Therefore, B is linearly independent.
(ii) Any u = x + iy = (x, y) ∈ C can be written as
x + iy = a(2 + 3i) + b(5 − 6i)
⇒ 2a + 5b = x
3a − 6b = y
This system of equations can be written in matrix form as follows:
(2  5) (a)   (x)
(3 −6) (b) = (y)
The coefficient matrix of this system is
A = (2 5; 3 −6)
⇒ |A| = −27 ̸= 0, hence A⁻¹ exists. Therefore, from the above matrix form of the system,
we get
(a; b) = A⁻¹ (x; y) = −(1/27) (−6 −5; −3 2) (x; y)
∴ (a; b) = −(1/27) (−6x − 5y; −3x + 2y)
∴ a = (1/27)(6x + 5y), b = (1/27)(3x − 2y) ∈ R are such that
x + iy = a(2 + 3i) + b(5 − 6i), ∀ x + iy ∈ C.
∴ L(B) = C.
From (i) and (ii), we prove B is a basis for the real vector space C.
Example 1.46. Let C be a real vector space. Show that B = {1, i} is a basis.
Example 1.47. Show that the set S = {(1, 1, 2), (1, 2, 5), (5, 3, 4)} does not form a basis
for R3.
Solution: To check linear independence, we consider
a(1, 1, 2) + b(1, 2, 5) + c(5, 3, 4) = 0̄
⇒ a + b + 5c = 0
a + 2b + 3c = 0
2a + 5b + 4c = 0.
The coefficient matrix of this homogeneous system is
(1 1 5)
(1 2 3)
(2 5 4)
Using row transformations, the row echelon form of the above matrix is
(1 1  5)
(0 1 −2)
(0 0  0)
Therefore the reduced system of equations is
a + b + 5c = 0
b − 2c = 0
Put c = 1; then b = 2 and a = −7.
∴ −7(1, 1, 2) + 2(1, 2, 5) + 1(5, 3, 4) = 0̄.
⇒ S = {(1, 1, 2), (1, 2, 5), (5, 3, 4)} is a linearly dependent set.
Hence, S is not a basis for R3.
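MATLAB reproduces the computation of Example 1.47 directly: the rank of the coefficient matrix is 2 < 3, and a rational basis of its null space gives the dependence coefficients. A minimal sketch:
MATLAB:
A = [1 1 5; 1 2 3; 2 5 4];   % columns are the vectors of S
rank(A)                      % 2 < 3, so S is not a basis of R^3
null(A, 'r')                 % (-7; 2; 1): the coefficients found above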
Example 1.48. Show that B = {p1 , p2 , p3 } where p1 = 1 + 2x + x2 , p2 = 2 + x,
p3 = 1 − x + 2x2 is a basis for P2 .
Solution: (i) For linear independence, we have
ap1 + bp2 + cp3 = 0̄
a(1 + 2x + x2) + b(2 + x) + c(1 − x + 2x2) = 0̄
⇒ a + 2b + c = 0
2a + b − c = 0
a + 2c = 0.
This is written in the following matrix form:
(1 2  1) (a)   (0)
(2 1 −1) (b) = (0)      (1.9)
(1 0  2) (c)   (0)
The coefficient matrix of this homogeneous system is
    (1 2  1)
A = (2 1 −1)
    (1 0  2)
⇒ |A| = −9 ̸= 0.
Therefore, the above system has only trivial solution as follows
a = b = c = 0.
Therefore, set B is linearly independent.
(ii) Suppose
a0 + a1x + a2x2 = ap1 + bp2 + cp3.
Then replacing (0; 0; 0) by (a0; a1; a2) in (1.9), we get
(1 2  1) (a)   (a0)
(2 1 −1) (b) = (a1)
(1 0  2) (c)   (a2)
∴ (a; b; c) = A⁻¹ (a0; a1; a2) = (1/|A|) adj.A (a0; a1; a2)
where
    (1 2  1)
A = (2 1 −1)
    (1 0  2)
(a)            ( 2 −4 −3) (a0)
(b) = −(1/9)   (−5  1  3) (a1)
(c)            (−1  2 −3) (a2)
∴ a = −(1/9)(2a0 − 4a1 − 3a2),
b = −(1/9)(−5a0 + a1 + 3a2),
c = −(1/9)(−a0 + 2a1 − 3a2) ∈ R
such that
a0 + a1x + a2x2 = ap1 + bp2 + cp3.
∴ L(B) = P2.
Hence, from (i) and (ii), we prove B is a basis for P2.
Theorem 1.5. If B = {v1, v2, · · · , vn} is a basis for the vector space V, then any
vector v ∈ V is uniquely expressed as a linear combination of the basis vectors.
Proof. Suppose v = ∑_{i=1}^{n} αi vi = ∑_{i=1}^{n} βi vi for some αi's, βi's ∈ F.
Then
∑_{i=1}^{n} αi vi − ∑_{i=1}^{n} βi vi = 0̄
∴ ∑_{i=1}^{n} (αi − βi)vi = 0̄
As B is a basis and hence a linearly independent set,
αi − βi = 0, ∀ i = 1, 2, · · · , n.
∴ αi = βi, ∀ i = 1, 2, · · · , n.
Hence, any vector v ∈ V is uniquely expressed as a linear combination of the basis
vectors.
Definition 1.8. Co-ordinate Vector
Let B = {v1, v2, · · · , vn} be a basis of an n-dimensional vector space V and let v ∈ V be
such that v = ∑_{i=1}^{n} αi vi. Then
(v)B = (α1, α2, · · · , αn)
is called the co-ordinate vector of v relative to the basis B.
Example 1.49. Let B = {(1, 0), (1, 2)} be a basis of R2 and (x)B = (−2, 3). Then
x = −2b1 + 3b2 = −2(1, 0) + 3(1, 2) = (1, 6).
In fact, (1, 6) are the coordinates of x in the standard basis {e1, e2}:
x = 1e1 + 6e2 = 1(1, 0) + 6(0, 1) = (1, 6).
Example 1.50. Coordinates in R2 :
If we have a point x in R2 we can easily find its coordinates in any basis, as follows.
Let x = (4, 5) and the basis B = {(2, 1), (−1, 1)}. We need to find c1 and c2 such
that
x = c1b1 + c2b2
⇒ (4; 5) = c1(2; 1) + c2(−1; 1) = (2 −1; 1 1)(c1; c2)
⇒ 2c1 − c2 = 4
c1 + c2 = 5.
Solving these equations, we get
c1 = 3, c2 = 2.
∴ (x)B = (3, 2).
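Finding coordinates as in Example 1.50 amounts to solving the linear system Bc = x, where the basis vectors form the columns of B. A minimal sketch:
MATLAB:
B = [2 -1; 1 1];   % basis vectors b1, b2 as columns
x = [4; 5];
c = B \ x          % coordinate vector (3; 2), matching (x)_B = (3, 2)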
Example 1.51. Find co-ordinate vectors of (1, 0, 0), (0, 1, 0), (0, 0, 1) and (1, 1, −1)
with respect to basis B = {(1, 2, 1), (2, 1, 0), (1, −1, 2)} of Euclidean space R3 .
Solution: Any v = (x, y, z) ∈ R3 can be written as
(x, y, z) = a(1, 2, 1) + b(2, 1, 0) + c(1, −1, 2)
⇒ a + 2b + c = x
2a + b − c = y
a + 2c = z.
Solving this system, we get
a = −(1/9)(2x − 4y − 3z),
b = −(1/9)(−5x + y + 3z),
c = −(1/9)(−x + 2y − 3z).
The co-ordinate vectors of a general vector and of the given vectors relative to the basis
are as follows:
∴ (v)B = (a, b, c) = −(1/9)(2x − 4y − 3z, −5x + y + 3z, −x + 2y − 3z)
∴ (1, 0, 0)B = (1/9)(−2, 5, 1)
(0, 1, 0)B = (1/9)(4, −1, −2)
(0, 0, 1)B = (1/3)(1, −1, 1)
(1, 1, −1)B = (1/9)(−1, 7, −4).
Example 1.52. Let B = { (1 0; 0 0), (1 1; 0 0), (1 1; 1 0), (1 1; 1 1) } be a basis for
the vector space M2(R). Find the co-ordinate vectors of the matrices
(2 −3; −1 0), (−1 0; 0 1), (2 1; 5 −1).
Solution: Let
(x y; z w) = a(1 0; 0 0) + b(1 1; 0 0) + c(1 1; 1 0) + d(1 1; 1 1).
This gives the following system of equations
a+b+c+d=x
b+c+d=y
c+d=z
d=w
Solving this system, we get
a = x − y, b = y − z, c = z − w, d = w.
The co-ordinate vectors of a general vector and of the given vectors relative to the basis
are as follows:
∴ [(x y; z w)]B = (a, b, c, d) = (x − y, y − z, z − w, w)
[(2 −3; −1 0)]B = (5, −2, −1, 0)
[(−1 0; 0 1)]B = (−1, 0, −1, 1)
[(2 1; 5 −1)]B = (1, −4, 6, −1).
Definition 1.9. Dimension of a Vector Space
If the vector space V is spanned by a finite set of vectors, then V is finite-dimensional
and its dimension (dimV ) is the number of elements of any of its bases.
The dimension of V = {0̄} is 0.
If V is not generated by a finite set of vectors, then it is infinite-dimensional.
Example 1.53. (i) dim{Rn } = n.
(ii) dim{P2 } = 3 because one of its bases is a natural basis {1, t, t2 }.
(iii) dim{Pn } = n + 1 because one of its bases is a natural basis {1, t, t2 , · · · , tn }.
(iv) dim{P} = ∞
(v) dim{Span{v1 , v2 }} = 2 if v1 and v2 are linearly independent vectors.
Example 1.54. Examples in R3:
(i) There is a single subspace of dimension 0 ({0̄}).
(ii) There are infinitely many subspaces of dimension 1 (all lines through the origin).
(iii) There are infinitely many subspaces of dimension 2 (all planes through the origin).
(iv) There is a single subspace of dimension 3 (R3).
(Figure: subspaces of R3.)
Theorem 1.6. If W ⊆ V is a vector subspace of a vector space V then,
dim{W } ≤ dim{V }.
Theorem 1.7. Let V be an n-dimensional vector space (n ≥ 1). Then
(i) Any linearly independent subset of V with n elements is a basis.
(ii) Any subset of V with n elements which spans V is a basis.
Proof. (i) Suppose S = {v1, v2, · · · , vn} is a linearly independent set in V.
To prove S is a basis of V, it is sufficient to prove L(S) = V.
If v = vi ∈ S, 1 ≤ i ≤ n, then v ∈ L(S).
If v ∈ V − S, let S′ = S ∪ {v}; then S′ contains n + 1 vectors, which is greater than
dim V = n.
Therefore, S′ is a linearly dependent set.
∴ ∑_{i=1}^{n} αi vi + αv = 0̄, where at least one of the αi or α is nonzero.
If α = 0 then ∑_{i=1}^{n} αi vi = 0̄ ⇒ αi = 0, ∀ i, as S is linearly independent.
Then S′ would be a linearly independent set, a contradiction to S′ being linearly
dependent.
∴ α ̸= 0.
αv = − ∑_{i=1}^{n} αi vi
v = ∑_{i=1}^{n} (−αi/α) vi.
Thus, any vector in V is expressed as a linear combination of vectors in S.
Therefore, L(S) = V.
(ii) Let S be a subset of an n-dimensional vector space V containing n vectors such
that L(S) = V.
To prove S is a basis of V, it is sufficient to prove that S is linearly independent.
On the contrary, suppose S is a linearly dependent set; then some proper subset of S
still spans V, so V is spanned by fewer than n vectors.
Therefore, dim V < n, which contradicts the hypothesis.
Hence, S is linearly independent.
Example 1.55. Let v1 = (3, 0, −6), v2 = (−4, 1, 7), v3 = (−2, 1, 5).
Is B = {v1 , v2 , v3 } a basis of R3 ?
Solution: To check linear independence, consider
c1 v1 + c2 v2 + c3 v3 = 0̄
From this we get the following system of equations
3c1 − 4c2 − 2c3 = 0
c2 + c3 = 0
−6c1 + 7c2 + 5c3 = 0
The coefficient matrix of this homogeneous system is
    ( 3 −4 −2)
A = ( 0  1  1)
    (−6  7  5)
⇒ |A| = 6 ⇒ A⁻¹ exists.
Therefore, the above system has only the trivial solution, i.e. c1 = c2 = c3 = 0.
Hence, the set {v1, v2, v3} is linearly independent with n(B) = 3 = dim(R3).
Therefore, B is a basis of R3.
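The same conclusion follows from a one-line determinant computation; a minimal sketch for Example 1.55:
MATLAB:
A = [3 -4 -2; 0 1 1; -6 7 5];   % the vectors v1, v2, v3 give this coefficient matrix
det(A)                          % 6, nonzero, so B is a basis of R^3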
Example 1.56. (i) B = {(1, 0, 0), (2, 3, 0)} is a set of 2 linearly independent
vectors. But it cannot span R3, because for this we need 3 vectors; so it cannot
be a basis.
(ii) B = {(1, 0, 0), (2, 3, 0), (4, 5, 6)} is a set of 3 linearly independent vectors
that spans R3, so it is a basis of R3.
(iii) B = {(1, 0, 0), (2, 3, 0), (4, 5, 6), (7, 8, 9)} is a set of 4 linearly dependent
vectors that spans R3, so it cannot be a basis, as n(B) = 4 > 3 = dim R3.
Theorem 1.8. Any two bases of a finite-dimensional vector space V have the same number
of elements.
Proof. Let B = {v1, v2, · · · , vn} and B′ = {u1, u2, · · · , um} be bases of the vector
space V.
Then B and B′ are linearly independent sets.
Now, if B is a basis and B′ is a linearly independent set, then
No. of elements in B′ ≤ No. of elements in B      (1.10)
while if B′ is a basis and B is a linearly independent set, then
No. of elements in B ≤ No. of elements in B′      (1.11)
From (1.10) and (1.11), we get
No. of elements in B = No. of elements in B′
Thus, any two bases of a finite-dimensional vector space V have the same number of
elements, as was to be proved.
Note: From the definitions of linearly dependent set and basis, we have:
(i) If B is a basis for an n-dimensional vector space V and S ⊆ V is such that
No. of elements in S > No. of elements in B, then S is linearly dependent.
(ii) If S is a linearly independent set, then
No. of elements in S ≤ No. of elements in B.
Theorem 1.9. A subset S of a vector space V containing two or more vectors
is linearly dependent if and only if at least one vector is expressible as a linear
combination of the remaining ones.
Proof. Suppose S = {v1, v2, · · · , vr} is a linearly dependent set in a vector space V.
Therefore, there exist some nonzero αi's such that
∑_{i=1}^{r} αi vi = 0̄
If αr ̸= 0 then
αr vr = ∑_{i=1, i̸=r}^{r} (−αi) vi
∴ vr = ∑_{i=1, i̸=r}^{r} (−αi/αr) vi.
Hence, at least one vector is expressed as a linear combination of the others.
Conversely, suppose vr ∈ S is a linear combination of the others; then
vr = ∑_{i=1, i̸=r}^{r} αi vi
∴ ∑_{i=1, i̸=r}^{r} αi vi + (−1)vr = 0̄, where αr = −1 ̸= 0
∴ S is linearly dependent.
Theorem 1.10. A nonempty subset S of the vector space V is linearly independent
if and only if no vector in S is expressible as the linear combination of other vectors
of S.
Theorem 1.11. Let S = {v1 , v2 , · · · , vr } be the set of vectors in Rn . If r > n then
S is linearly dependent.
Proof. Suppose
v1 = (v11 , v12 , · · · , v1n )
v2 = (v21 , v22 , · · · , v2n )
..
.
vr = (vr1 , vr2 , · · · , vrn )
Consider the equation
k1 v1 + k2 v2 + · · · + kr vr = 0̄
k1 (v11 , v12 , · · · , v1n ) + k2 (v21 , v22 , · · · v2n ) + · · ·
+kr (vr1 , vr2 , · · · , vrn ) = 0̄
Comparing both sides, we get
k1 v11 + k2 v21 + · · · + kr vr1 = 0
k1 v12 + k2 v22 + · · · + kr vr2 = 0
..
.
k1 v1n + k2 v2n + · · · + kr vrn = 0
This is a homogeneous system of n linear equations in r unknowns with r > n.
Therefore, it has at least (r − n) free variables, and hence the system has a nontrivial
solution.
Therefore, S = {v1, v2, · · · , vr} is a linearly dependent set.
Theorem 1.12. Let V be an n-dimensional vector space and let S = {v1, v2, · · · , vr} be
a linearly independent set in V. Then S can be extended to a basis
S′ = {v1, v2, · · · , vr, vr+1, · · · , vn} of V.
Proof. If r = n then S itself is a basis of V.
If r < n then choose vr+1 ∈ V − S such that
S1 = {v1, v2, · · · , vr, vr+1}
is a linearly independent set in V.
If r + 1 = n then S1 = S′ is a basis of V.
If r + 1 < n then add vr+2 to S1 so that
S2 = {v1, v2, · · · , vr, vr+1, vr+2}
is a linearly independent set in V.
If r + 2 = n then S2 = S′ is a basis of V.
If r + 2 < n then continue this process till S is extended to a basis
S′ = {v1, v2, · · · , vr, vr+1, · · · , vn} of V.
Exercise: 1.3
(i) Determine which of the following are linear combinations of u = (0, −1, 2) and
v = (1, 3, −1) ?
(a) (2, 2, 2)
(b) (3, 1, 5)
(c) (0, 4, 5)
(d) (0, 0, 0).
(ii) Express the following as linear combinations of u = (2, 1, 4) , v = (1, −1, 3) and
w = (3, 2, 5).
(a) (−9, −7, −15)
(b) (6, 11, 6)
(c) (7, 8, 9)
(d) (0, 0, 0).
(iii) Express the following as linear combinations of p1 = 2+x+4x2 , p2 = 1−x+3x2
and p3 = 3 + 2x + 5x2 .
(a) −9 − 7x − 15x2
(b) 6 + 11x + 6x2
(c) 7 + 8x + 9x2
(d) 0.
(iv) Determine which of the following are linear combinations of the matrices
A = (4 0; −2 −2), B = (1 −1; 2 3), C = (0 2; 1 4)?
(a) (6 −8; −1 −8)   (b) (0 0; 0 0)   (c) (6 0; 3 8)   (d) (−1 5; 7 1).
(v) Determine whether the given sets span R3 ?
(a) S = {(2, 2, 2), (0, 0, 3), (0, 1, 1)}
(b) S = {(2, −1, 3), (4, 1, 2), (8, −1, 8)}
(c) S = {(3, 1, 4), (2, −3, 5), (5, −2, 9), (1, 4, −1)}
(d) S = {(1, 2, 6), (3, 4, 1), (4, 3, 1), (3, 3, 1)}
(vi) Determine whether the polynomials 1−x+2x2 , 3+x, 5−x+4x2 , −2−2x+2x2
span P2 ?
(vii) Let S = {(2, 1, 0, 3), (3, −1, 5, 2), (−1, 0, 2, 1)}. Which of the following vectors
are in the linear span of S?
(a) (2, 3, −7, 3)   (b) (1, 1, 1, 1)   (c) (−4, 6, −13, 4).
(viii) By inspection, explain why the following are linearly dependent sets of vectors?
(a) S = {(−1, 2, 4), (5, −10, −20)} in R3
(b) S = {p1 = 1 − 2x + 4x2 , p2 = −3 + 6x − 12x2 } in P2
(c) S = {(3, −1), (4, 5), (−2, 9)} in R2
(d) S = { A = (4 0; −2 −2), B = (−4 0; 2 2) } in M22.
(ix) Determine which of the following sets are linearly dependent in R3 ?
(a) S = {(2, 2, 2), (0, 0, 3), (0, 1, 1)}
(b) S = {(8, −1, 3), (4, 0, 1)}
(c) S = {(3, 1, 4), (2, −3, 5), (5, −2, 9)}
(d) S = {(1, 2, 6), (3, 4, 1), (4, 3, 1), (3, 3, 1)}.
(x) Show that the vectors v1 = (0, 3, 1, −1), v2 = (6, 0, 5, 1), v3 = (4, −7, 1, 3) form
a linearly dependent set in R4. Express each vector as a linear combination of
the remaining two.
(xi) For which values of λ do the following set vectors form a linearly dependent set
in R3 ?
v1 = (λ, −1/2, −1/2), v2 = (−1/2, λ, −1/2), v3 = (−1/2, −1/2, λ).
(xii) Show that if S = {v1, v2, ..., vr} is a linearly independent set of vectors, then so is
every nonempty subset of S.
(xiii) Show that if S = {v1, v2, ..., vr} is a linearly dependent set of vectors, then
S′ = {v1, v2, ..., vr, vr+1, ..., vn} is also a linearly dependent set.
(xiv) Prove: For any vectors u, v and w, the vectors u − v, v − w and w − u form
a linearly dependent set.
(xv) By inspection, explain why the following set of vectors are not bases for the
indicated vector spaces?
(a) S = {(−1, 2, 4), (5, −10, −20), (1, 0, 2), (1, 2, 3)} for R3
(b) S = {p1 = 1 − 2x + 4x2, p2 = −3 + 6x − 12x2} for P2
(c) S = {(3, −1), (4, 5), (−2, 9)} for R2
(d) S = { A = (4 0; −2 −2), B = (−4 0; 2 2) } for M22
(e) S = {(1, 2, 3), (−8, 2, 4), (2, 4, 6)} for R3.
(xvi) Determine which of the following sets are bases for R3 ?
(a) S = {(2, 2, 2), (0, 0, 3), (0, 1, 1)}
(b) S = {(8, −1, 3), (4, 0, 1), (12, −1, 4)}
(c) S = {(3, 1, 4), (2, −3, 5), (5, −2, 9)}
(d) S = {(1, 2, 6), (3, 4, 1), (4, 3, 1)}
(xvii) Determine which of the following sets are bases for P2 ?
(a) S = {1 − 3x + 2x2 , 1 + x + 4x2 , 1 − 7x}
(b) S = {4 + 6x + x2, −1 + 4x + 2x2, 5 + 2x − x2}
(c) S = {1, 1 + x, 1 + x − x2 }
(d) S = {−4 + x + 3x2 , 6 + 5x + 2x2 , 8 + 4x + x2 }
(xviii) Show that the following set of vectors is a basis for M22 .
A = (3 6; 3 −6), B = (0 −1; −1 0), C = (0 −8; −12 −4), D = (1 0; −1 2).
(xix) Let V be the space spanned by the set S = {v1 = cos2 x, v2 = sin2 x, v3 = cos 2x}.
Show that S is not a basis of V.
(xx) Find the coordinate vector of w relative to the basis B = {v1, v2} for R2.
(a) v1 = (1, 0), v2 = (0, 1); w = (3, −7)
(b) v1 = (2, −4), v2 = (3, 8); w = (1, 1)
(c) v1 = (1, 1), v2 = (0, 2); w = (a, b)
(xxi) Find the coordinate vector of w relative to the basis B = {v1, v2, v3} for R3.
(a) v1 = (1, 0, 0), v2 = (1, 1, 0), v3 = (1, 1, 1); w = (2, −1, 3)
(b) v1 = (1, 0, 0), v2 = (0, 1, 0), v3 = (0, 0, 1); w = (a, b, c)
(c) v1 = (1, 2, 3), v2 = (−4, 5, 6), v3 = (7, −8, 9); w = (5, −12, 3)
(xxii) Find the coordinate vector of p relative to the basis B = {p1, p2, p3} for P2.
(a) p1 = 1, p2 = x, p3 = x2; p = 2 − x + 3x2
(b) p1 = 1 + x, p2 = 1 + x2, p3 = x + x2; p = 2 − x + x2
(xxiii) Find the coordinate vector of A relative to the basis B = {A1, A2, A3, A4} for
M22, where
A = (2 0; −1 3), A1 = (−1 1; 0 0), A2 = (1 1; 0 0), A3 = (0 0; 1 0), A4 = (0 0; 0 1).
(xxiv) If B = {v1 , v2 , v3 } is a basis for vector space V then show that
B ′ = {v1 , v1 + v2 , v1 + v2 + v3 } is also a basis of V .
(xxv) Determine the basis and dimension of each of the following subspaces of R3.
(a) S = {(x, y, z)/3x − 2y + 5z = 0}
(b) S = {(x, y, z)/x − y = 0}
(c) S = {(x, y, z)/z = x − y, y = 2x + 3z}
(xxvi) Determine the basis and dimension of each of the following subspaces of R4.
(a) S = {(x, y, z, w)/w = 0}
(b) S = {(x, y, z, w)/w = x + y, z = x − y}
(c) S = {(x, y, z, w)/x = y = z = w}
(xxvii) Determine the basis and dimension of the subspace of P3 consisting of all polynomials
a0 + a1x + a2x2 + a3x3 for which a0 = 0.
(xxviii) Find t for which u = (e^(at), ae^(at)), v = (e^(bt), be^(bt)) form a linearly independent set
in R2.
1.6 Vector Space as a Direct Sum of Subspaces
Vector Space as a Direct Sum of Subspaces
Definition 1.10. Sum
A vector space V is said to be the sum of subspaces W1, W2, · · · , Wn of V and is written
as
V = W1 + W2 + ... + Wn
if every vector v in V has a representation of the form
v = w1 + w2 + ... + wn
for some w1 ∈ W1 , w2 ∈ W2 , · · · , wn ∈ Wn .
Definition 1.11. Direct Sum
A vector space V is said to be the direct sum of subspaces W1, W2, ..., Wn of V and is written
as
V = W1 ⊕ W2 ⊕ ... ⊕ Wn
if every vector v in V has the unique representation of the form
v = w1 + w2 + ... + wn
for unique w1 ∈ W1, w2 ∈ W2, ..., wn ∈ Wn.
Example 1.57. Let V = R3 be a vector space and let W1 = {(x, y, 0)/x, y ∈ R} and
W2 = {(0, 0, z)/z ∈ R} be subspaces of V. Then v = (x, y, z) ∈ V is uniquely expressed
as
v = (x, y, 0) + (0, 0, z) = w1 + w2
where w1 = (x, y, 0) ∈ W1 and w2 = (0, 0, z) ∈ W2 .
Therefore, V = W1 ⊕ W2 .
Example 1.58. Let V = R3 be a vector space and let W1 = {(x, y, 0)/x, y ∈ R} and
W2 = {(0, y, z)/y, z ∈ R} be subspaces of V. Then v = (x, y, z) ∈ V is expressed as
v = (x, y, 0) + (0, 0, z)         where (x, y, 0) ∈ W1, (0, 0, z) ∈ W2
  = (x, 0, 0) + (0, y, z)         where (x, 0, 0) ∈ W1, (0, y, z) ∈ W2
  = (x, y/2, 0) + (0, y/2, z)     where (x, y/2, 0) ∈ W1, (0, y/2, z) ∈ W2
These are different representations of v.
Therefore,
V = W1 + W2 but V ̸= W1 ⊕ W2.
Theorem 1.13. If W1 and W2 are the subspaces of a vector space V then
V = W1 ⊕ W2 if and only if (a) V = W1 + W2 and (b) W1 ∩ W2 = {0̄}.
Proof. Suppose V = W1 ⊕ W2 then v ∈ V is uniquely expressed as v = w1 + w2 for
w1 ∈ W1 and w2 ∈ W2 . Therefore, V = W1 + W2 . Moreover, if w ∈ W1 ∩ W2 then
w = w + 0̄ for w ∈ W1 , 0̄ ∈ W2
w = 0̄ + w for w ∈ W2 , 0̄ ∈ W1
Since, these expressions are unique, w = 0̄ and consequently
W1 ∩ W2 = {0̄}.
Conversely, suppose (a) and (b) hold then to show that V = W1 ⊕ W2 .
For this suppose v ∈ V have representations,
v = w1 + w2 = w1′ + w2′
for w1, w1′ ∈ W1 and w2, w2′ ∈ W2. Then,
w1 − w1′ = w2 − w2′ ∈ W1 ∩ W2 = {0̄}
Therefore, w1 = w1′ and w2 = w2′ .
Thus any vector in v ∈ V has unique representation v = w1 + w2 for w1 ∈ W1 and
w2 ∈ W2 .
Therefore, V = W1 ⊕ W2 .
Theorem 1.14. If W1, W2, · · · , Wn are subspaces of a vector space V then
V = W1 ⊕ W2 ⊕ · · · ⊕ Wn if and only if
(a) V = W1 + W2 + · · · + Wn and
(b) Wi ∩ ( Σ_{j≠i} Wj ) = {0̄}, i = 1, 2, · · · , n.
Corollary 1.1. If the vector space V is the direct sum of its subspaces W1, W2, · · · , Wn,
i.e. V = W1 ⊕ W2 ⊕ · · · ⊕ Wn, then dim.V = dim.W1 + dim.W2 + · · · + dim.Wn.
Exercise: 1.4
1. Let V = Mn(R) be the real vector space of all n × n matrices. Show that (see the sketch after this exercise set):
(a) W1 = {A + A^t / A ∈ V } is a subspace of V .
(b) W2 = {A − A^t / A ∈ V } is a subspace of V .
(c) V = W1 ⊕ W2.
2. Let V = P100 be the real vector space of all polynomials of degree at most 100. Show that
(a) W1 = {a0 + a2x^2 + a4x^4 + · · · + a100x^100 / a0, a2, · · · , a100 ∈ R} is a subspace of P100,
(b) W2 = {a1x + a3x^3 + a5x^5 + · · · + a99x^99 / a1, a3, · · · , a99 ∈ R} is a subspace of P100,
(c) V = W1 ⊕ W2.
3. Let V = P100 be the real vector space of all polynomials of degree at most 100. Show that
(a) W1 = {a0 + a1x + a2x^2 + · · · + a60x^60 / a0, a1, · · · , a60 ∈ R} is a subspace of P100,
(b) W2 = {a41x^41 + a42x^42 + a43x^43 + · · · + a100x^100 / a41, a42, · · · , a100 ∈ R} is a subspace of P100,
(c) V = W1 + W2 but V ≠ W1 ⊕ W2.
4. If W1 and W2 are subspaces of the vector space V such that V = W1 + W2 then
show that, dim.V = dim.W1 + dim.W2 − dim(W1 ∩ W2 ).
5. If W1 and W2 are subspaces of the vector space V such that V = W1 ⊕ W2 then
show that, dim.V = dim.W1 + dim.W2 .
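For problem 1 above, the decomposition is A = (A + A^t)/2 + (A − A^t)/2 into a symmetric and a skew-symmetric part. A minimal numerical sketch (assuming NumPy; illustrative only):

    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.standard_normal((4, 4))   # an arbitrary 4 x 4 real matrix

    S = (A + A.T) / 2                 # symmetric part (lies in W1)
    K = (A - A.T) / 2                 # skew-symmetric part (lies in W2)

    assert np.allclose(S, S.T)        # S^t = S
    assert np.allclose(K, -K.T)       # K^t = -K
    assert np.allclose(S + K, A)      # A = S + K
    # Uniqueness: if A = S' + K' with S' symmetric and K' skew, then
    # transposing and comparing forces S' = S and K' = K, so the sum is direct.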
1.7 Null Space and Range Space
In this section, we introduce the concepts of null space and range space, together with some results on the rank of sums and products of matrices.
Definition 1.12. Null Space
The null space of a matrix A ∈ Mmn is denoted by Nul{A} and is defined as
Nul{A} = { u ∈ R^n / Au = 0̄ }.
Definition 1.13. Range Space
The range space of a matrix A ∈ Mmn is denoted by R{A} and is defined as
R{A} = { v ∈ R^m / Au = v, for some u ∈ R^n }.
Definition 1.14. For an m × n matrix
A = [ a11 a12 · · · a1n
      a21 a22 · · · a2n
      ...
      am1 am2 · · · amn ],
the vectors
r1 = [a11, a12, · · · , a1n]
r2 = [a21, a22, · · · , a2n]
...
rm = [am1, am2, · · · , amn]
in R^n formed from the rows of A are called the row vectors of A, and the vectors
c1 = [a11, a21, · · · , am1]^t, c2 = [a12, a22, · · · , am2]^t, · · · , cn = [a1n, a2n, · · · , amn]^t
in R^m formed from the columns of A are called the column vectors of A.
Definition 1.15. If A is an m × n matrix, then the subspace of Rn spanned by the
row vectors of A is called the row space of A, and the subspace of Rm spanned by
the column vectors of A is called the column space of A.
Definition 1.16. If A is an m × n matrix, then the dimension of the null space of A is called the nullity of A and is denoted by Null(A) or Nullity(A).
The dimension of the row space of A is called the row-rank of A, and the dimension of the column space of A is called the column-rank of A.
The dimension of the range space of A is called the rank of A and is denoted by rank(A).
Note: If A is an m × n matrix, then the row-rank of A is equal to the column-rank of A. Therefore, this common dimension of the row space and column space of A is called the rank of A. Hence
rank(A) = dimension of row space of A
        = dimension of column space of A
        = dimension of range space of A.
Nullity(A) = dimension of null space of A
           = dimension of the solution space of the system Ax = 0̄.
Definition 1.17. Let A and B be m × n matrices with real entries. If B can be obtained by applying successively a number of elementary row operations on A, then A is said to be row equivalent to B, written as A ∼R B.
Theorem 1.15. Let A and B be m × n matrices. If A ∼R B then A and B have the same row rank and column rank.
Theorem 1.16. If A is an m × n row reduced echelon matrix with r non-zero rows, then the row rank of A is r.
Theorem 1.17. If a matrix is in reduced row echelon form, then the column vectors that contain leading 1's form a basis for the column space of the matrix.
Remark 1.3. From the above theorem it is clear that
rank(A) = rank of the reduced row echelon form of A
        = number of non-zero rows in the reduced row echelon form of A
        = number of columns that contain leading 1's in the reduced row echelon form of A.
Example 1.59. Find basis for the subspace of R3 spanned by the vectors (1, 2, −1),
(4, 1, 3), (5, 3, 2) and (2, 0, 2).
Solution: The subspace spanned by the vectors is the row space of the matrix
A = [ 1 2 −1
      4 1  3
      5 3  2
      2 0  2 ]
We shall reduce the matrix A to its row echelon form by elementary row transformations. After applying successive row transformations, we obtain the following row echelon form of matrix A:
R = [ 1 2 −1
      0 1 −1
      0 0  0
      0 0  0 ]
∴ Basis for row space of A = basis for the space generated by the given vectors = {(1, 2, −1), (0, 1, −1)}.
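The same row reduction can be delegated to a computer algebra system. A short check of Example 1.59 (a sketch assuming SymPy is installed; rref returns a fully reduced form whose nonzero rows span the same row space — a different but equally valid basis):

    from sympy import Matrix

    A = Matrix([[1, 2, -1],
                [4, 1, 3],
                [5, 3, 2],
                [2, 0, 2]])

    R, pivots = A.rref()    # reduced row echelon form of A
    print(R)                # nonzero rows: (1, 0, 1), (0, 1, -1)
    # These rows span the same subspace as {(1, 2, -1), (0, 1, -1)}.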
Example 1.60. Find a basis for the column space of the matrix
A = [ 1 2 3 1 5
      2 1 3 1 4
      1 1 2 1 3 ]
Solution: Let us obtain the reduced row echelon form of the matrix A.
After applying successive elementary row transformations on matrix A, we obtain the following row reduced echelon form of matrix A:
R = [ 1 0 1 0 1
      0 1 1 0 2
      0 0 0 1 0 ]
The columns that contain leading 1's in R are
c1′ = [1, 0, 0]^t, c2′ = [0, 1, 0]^t, c4′ = [0, 0, 1]^t,
and these form a basis for the column space of R. Thus the corresponding vectors in A, viz.
c1 = [1, 2, 1]^t, c2 = [2, 1, 1]^t, c4 = [1, 1, 1]^t,
form a basis for the column space of A.
Example 1.61. Determine a basis for (a) the range space and (b) the null space of A given by
A = [ 1 2 3 1 5
      2 1 3 1 4
      1 1 2 1 3 ]
Solution: (a) A basis for the range space is a basis for the column space of A. From the above example, the vectors
c1 = [1, 2, 1]^t, c2 = [2, 1, 1]^t, c4 = [1, 1, 1]^t
form a basis for the range space of A.
∴ rank(A) = 3.
(b) The null space of A is the solution space of the homogeneous system AX = 0̄, where X = (x1, x2, x3, x4, x5)^t.
This can be written as
x1 + 2x2 + 3x3 + x4 + 5x5 = 0
2x1 + x2 + 3x3 + x4 + 4x5 = 0
x1 + x2 + 2x3 + x4 + 3x5 = 0
To obtain the general solution of the above system, we reduce the coefficient matrix A to reduced echelon form; the reduced form is
R = [ 1 0 1 0 1
      0 1 1 0 2
      0 0 0 1 0 ].
Therefore, the reduced system of equations is
x1 + x3 + x5 = 0
x2 + x3 + 2x5 = 0
x4 = 0
This system has 3 equations in five unknowns, so we may assign arbitrary values to 5 − 3 = 2 of the unknowns (the free variables).
Let x5 = t, x3 = s, where s, t ∈ R.
So, we obtain the solution set
x1 = −s − t
x2 = −s − 2t
x3 = s
x4 = 0
x5 = t
In matrix form,
(x1, x2, x3, x4, x5) = (−s − t, −s − 2t, s, 0, t) = s(−1, −1, 1, 0, 0) + t(−1, −2, 0, 0, 1).
Hence, the vectors (−1, −1, 1, 0, 0) and (−1, −2, 0, 0, 1) form the basis for null space
of A.
∴ N ullity(A) = 2.
Theorem 1.18. Dimension Theorem for Matrices
Let A be an m × n matrix. Then rank(A) + nullity(A) = n.
Note: The dimension theorem rank(A) + nullity(A) = number of columns in A is verified in the above example: 3 + 2 = 5.
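This verification can also be done by machine. A sketch (assuming SymPy) for the matrix of Example 1.61:

    from sympy import Matrix

    A = Matrix([[1, 2, 3, 1, 5],
                [2, 1, 3, 1, 4],
                [1, 1, 2, 1, 3]])

    rank = A.rank()                  # 3
    nullity = len(A.nullspace())     # 2 basis vectors of the null space
    assert rank + nullity == A.cols  # rank(A) + nullity(A) = n = 5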
Example 1.62. Find a basis for, and the dimension of, the row space of the matrix A given by
A = [ 1  2 −1 2
      3  0  1 4
      1 −1  1 1 ]
Solution: We shall reduce matrix A to its row echelon form. Applying successive row transformations, we get
[ 1 2 −1    2
  0 1 −2/3  1/3
  0 0  0    0 ]
This is the row echelon form of matrix A.
The non-zero rows of this matrix are the vectors (1, 2, −1, 2), (0, 1, −2/3, 1/3).
These vectors form a basis for row space of A.
∴ dimension of row space of A = 2.
∴ rank(A) = 2.
Theorem 1.19. The rank of a matrix A_{m×n} is r if and only if dim{Null(A)} is n − r.
Theorem 1.20. If A_{m×n} and B_{n×p} are two matrices then
rank(AB) ≤ min{rank(A), rank(B)}.
Proof. Let v be in R^p. Then Bv = 0̄ implies ABv = 0̄.
Hence, N(B) ⊆ N(AB).
∴ dim N(B) ≤ dim N(AB).
∴ Nullity(B) ≤ Nullity(AB).
By the dimension theorem, we have
number of columns in B − rank(B) ≤ number of columns in AB − rank(AB)
p − rank(B) ≤ p − rank(AB)   (as AB is an m × p matrix)
rank(AB) ≤ rank(B)   (1.12)
Similarly,
rank(B′A′) ≤ rank(A′)
rank((AB)′) ≤ rank(A′)
rank(AB) ≤ rank(A)   (1.13)
as the rank of a matrix is equal to that of its transpose.
From equations (1.12) and (1.13), we get
rank(AB) ≤ min{rank(A), rank(B)}.
Corollary 1.2. If P_{m×m} and Q_{n×n} are nonsingular matrices then
rank(PA) = rank(A) and rank(AQ) = rank(A),
where A is an m × n matrix.
Theorem 1.21. If A and B are two matrices of order m × n then
rank(A + B) ≤ rank(A) + rank(B).
Proof. Let rank(A) = r and rank(B) = p.
Let x1, x2, · · · , xr and y1, y2, · · · , yp be maximal sets of linearly independent columns of A and B respectively.
Each column zk of A + B is the sum of the k-th column of A and the k-th column of B, so it can be expressed as a linear combination of the xi's and yj's. Therefore each zk, k = 1, 2, · · · , n, lies in the vector space V spanned by the xi's and yj's, and hence the space W spanned by z1, z2, · · · , zn is a subspace of V .
Hence
dim(W) ≤ dim(V) ≤ r + p,
because the number of independent vectors among the xi's and yj's is at most r + p. Since rank(A + B) = dim(W), it follows that
rank(A + B) ≤ rank(A) + rank(B),
as was to be proved.
Theorem 1.22. Let A and B be two matrices of order m × n and n × p respectively. Then the number of linearly independent vectors of the form v = Bu satisfying Av = 0̄ is rank(B) − rank(AB).
Theorem 1.23. Sylvester’s Inequality
If the matrices A and B are of order m × n and n × p respectively, then
rank(A) + rank(B) − n ≤ rank(AB) ≤ min{rank(A), rank(B)}.
Proof. We observe that n − rank(A) is dim N(A), that is, the number of independent vectors v satisfying Av = 0̄.
Again, by Theorem (1.22), rank(B) − rank(AB) is the number of independent vectors v = Bu satisfying Av = 0̄, and such vectors form a subspace of N(A).
Hence
rank(B) − rank(AB) ≤ n − rank(A),
or
rank(A) + rank(B) − n ≤ rank(AB).
Moreover, rank(AB) ≤ min{rank(A), rank(B)} by Theorem (1.20).
Combining these results, we get
rank(A) + rank(B) − n ≤ rank(AB) ≤ min{rank(A), rank(B)}.
Hence, the proof is completed.
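Sylvester's inequality lends itself to random testing. The following sketch (assuming NumPy; purely illustrative) checks both bounds on products of small random integer matrices:

    import numpy as np

    rng = np.random.default_rng(1)
    for _ in range(100):
        m, n, p = rng.integers(2, 6, size=3)
        A = rng.integers(-2, 3, size=(m, n))
        B = rng.integers(-2, 3, size=(n, p))
        rA = np.linalg.matrix_rank(A)
        rB = np.linalg.matrix_rank(B)
        rAB = np.linalg.matrix_rank(A @ B)
        # rank(A) + rank(B) - n <= rank(AB) <= min{rank(A), rank(B)}
        assert rA + rB - n <= rAB <= min(rA, rB)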
Example 1.63. Let P be a matrix of order m × n and Q a matrix of order n × m such that PQ = Im. Then rank(P) = rank(Q) = m.
Solution: We shall prove this result by contradiction.
As P is a matrix of order m × n, rank(P) ≤ m.
Suppose rank(P) < m. Then by Theorem (1.20), we have
m = rank(Im) = rank(PQ) ≤ rank(P) < m,
which is a contradiction.
∴ rank(P) = m.
Similarly, rank(Q) = m.
Hence rank(P) = rank(Q) = m.
Exercise: 1.5
(i) Determine a basis for, and the dimension of, the range space and null space of A, where A is given by
(a) [ 1  2  1  1
      2 −3  7  9
      1  4 −2 −1 ]
(b) [ 2  3 1  1
      1 −1 3 −1
      1  0 2 −1
      3  5 1  2 ]

(ii) Find basis for row space of A, where A is given in above example.
(iii) Find the rank and nullity of the matrix and verify the values obtained satisfy
formula for the dimension theorem.


1 −1 3
(a)  5 −4 −4 
7 −6 2


1 4 5 2
(b)  2 1 3 0  .
−1 3 2 2
(iv) Find a basis for the subspace W of the Euclidean space R4 spanned by the set
{(1, 2, 2, −2), (2, 3, 2, −3), (7, 3, 4, −3), (10, 8, 8, −8)}.
(v) Find a basis for the subspace W of the vector space of polynomials P3 spanned by the set
{x^3 + x^2 − 1, x^3 + 2x^2 + 3x, 2x^3 + 3x^2 + 3x − 1}.
Chapter 2
INNER PRODUCT SPACES
2.1 Introduction
In the previous chapter we studied abstract vector spaces. These are a generalisation of the geometric spaces R2 and R3. But R2 and R3 have more structure than just that of a vector space: in them we have the concepts of length and angle. In those spaces we use the dot product for this purpose, but the dot product only makes sense when we have components. In the absence of components we introduce something called an inner product to play the role of the dot product. In this chapter we study this additional structure that a real vector space may carry.
2.2 Definitions, Properties and Examples
Definition 2.1. Dot product on Rn
Suppose that u = (u1, u2, · · · , un) and v = (v1, v2, · · · , vn) are two vectors in Rn. The Euclidean dot product of u and v is defined by
u.v = u1v1 + u2v2 + · · · + unvn.
The Euclidean norm of u is defined by
∥u∥ = (u.u)^(1/2) = (u1^2 + u2^2 + · · · + un^2)^(1/2),
and the Euclidean distance between u and v is defined by
d(u, v) = ∥u − v∥ = √((u1 − v1)^2 + (u2 − v2)^2 + · · · + (un − vn)^2).
With the dot product we have geometric concepts such as the length of a vector,
the angle between two vectors, orthogonality, etc. We shall push these concepts to
abstract vector spaces so that geometric concepts can be applied to describe abstract
vectors.
Definition 2.2. Inner Product Space
Let V be a real vector space. A function < , > : V × V → R is called a real inner product if it satisfies the following axioms for any u, v, w ∈ V and k ∈ R:
1. Symmetry axiom: < u, v > = < v, u >
2. Additivity axiom: < u + v, w > = < u, w > + < v, w >
3. Homogeneity axiom: < ku, v > = k < u, v >
4. Positivity axiom: For any u ∈ V , < u, u > ≥ 0, and < u, u > = 0 if and only if u = 0̄.
A real vector space V with a real inner product is called a real inner product space and is denoted by (V, < , >).
Theorem 2.1. If u, v and w are vectors in a real inner product space, and k is any scalar, then
1. < 0̄, v > = < v, 0̄ > = 0
2. < u, v + w > = < u, v > + < u, w >
3. < u, kv > = k < u, v >
4. < u − v, w > = < u, w > − < v, w >
5. < u, v − w > = < u, v > − < u, w >
Proof. :
1. Consider
< 0̄, v > = < 0̄ + 0̄, v >   (property of 0̄)
0 + < 0̄, v > = < 0̄, v > + < 0̄, v >   (property of additivity)
0 = < 0̄, v >
Consider
< v, 0̄ > = < v, 0̄ + 0̄ >   (property of 0̄)
0 + < v, 0̄ > = < v, 0̄ > + < v, 0̄ >   (property of additivity)
0 = < v, 0̄ >
2.
< u, v + w > =< v + w, u >
(property of symmetry)
=< v, u > + < w, u >
(property of additivity)
=< u, v > + < u, w >
(property of symmetry)
3.
< u, kv > =< kv, u >
(property of symmetry)
= k < v, u >
(property of homogeneity)
= k < u, v >
(property of symmetry)
4.
< u − v, w > = < u, w > + < −v, w >   (property of additivity)
             = < u, w > − < v, w >   (property of homogeneity)
5.
< u, v − w > =< v − w, u >
(property of symmetry)
=< v, u > + < −w, u >
(property of additivity)
=< v, u > − < w, u >
(property of homogeneity)
=< u, v > − < u, w >
(property of symmetry)
Hence, proof is completed.
Illustrations
Example 2.1. If u = (u1 , u2 , · · · , un ) and v = (v1 , v2 , · · · , vn ) are vectors in Rn ,
then we define the Euclidean inner product on Rn as
< u, v >= u.v = u1 v1 + u2 v2 + · · · + un vn . (Verify).
The vector space Rn with this special inner product (dot product) is called the Euclidean n-space, and the dot product is called the standard inner product on Rn .
Example 2.2. If u = (u1 , u2 , · · · , un ) and v = (v1 , v2 , · · · , vn ) are vectors in Rn
and w1 , w2 , · · · , wn are positive real numbers, then we define the weighted inner
product on Rn as
< u, v >= w1 u1 v1 + w2 u2 v2 + · · · + wn un vn . (Verify).
Example 2.3. Show that for the vectors u = (u1 , u2 ) and v = (v1 , v2 ) in R2 ,
< u, v >= 5u1 v1 − u1 v2 − u2 v1 + 10u2 v2
defines an inner product on R2 .
Solution: Let u = (u1, u2), v = (v1, v2) and w = (w1, w2) be in R2 and let k ∈ R.
Then,
(i) Symmetry axiom:
< u, v > = 5u1 v1 − u1 v2 − u2 v1 + 10u2 v2
= 5v1 u1 − v1 u2 − v2 u1 + 10v2 u2
=< v, u >
(ii) Additivity axiom:
< u + v, w > = 5(u1 + v1 )w1 − (u1 + v1 )w2 − (u2 + v2 )w1 + 10(u2 + v2 )w2
= 5u1 w1 + 5v1 w1 − u1 w2 − v1 w2 − u2 w1 − v2 w1 + 10u2 w2 + 10v2 w2
= (5u1 w1 − u1 w2 − u2 w1 + 10u2 w2 ) + (5v1 w1 − v1 w2 − v2 w1 + 10v2 w2 )
=< u, w > + < v, w >
(iii) Homogeneity axiom:
< ku, v > = 5ku1 v1 − ku1 v2 − ku2 v1 + 10ku2 v2
= k(5u1 v1 − u1 v2 − u2 v1 + 10u2 v2 )
= k < u, v >
(iv) Positivity axiom:
< u, u > = 5u1u1 − u1u2 − u2u1 + 10u2u2
= 5u1^2 − 2u1u2 + 10u2^2
= u1^2 − 2u1u2 + u2^2 + 4u1^2 + 9u2^2
= (u1 − u2)^2 + 4u1^2 + 9u2^2 ≥ 0
Moreover,
< u, u > = 0
⇔ (u1 − u2)^2 + 4u1^2 + 9u2^2 = 0
⇔ (u1 − u2)^2 = 0, u1^2 = 0, u2^2 = 0
⇔ u1 = 0, u2 = 0
⇔ u = 0̄
Thus, all axioms of inner product are satisfied.
Therefore, < u, v >= 5u1 v1 − u1 v2 − u2 v1 + 10u2 v2 is an inner product on R2 .
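Equivalently, this form is < u, v > = u^t G v for the symmetric matrix G = ( 5 −1 ; −1 10 ), and the four axioms amount to G being symmetric positive definite. A minimal numerical sketch (assuming NumPy; an illustration, not part of the formal development):

    import numpy as np

    G = np.array([[5.0, -1.0],
                  [-1.0, 10.0]])

    assert np.allclose(G, G.T)                 # symmetry axiom
    assert np.all(np.linalg.eigvalsh(G) > 0)   # positivity axiom

    def ip(u, v):
        return u @ G @ v                       # <u, v> = u^t G v

    u, v = np.array([3.0, -2.0]), np.array([4.0, 5.0])
    print(ip(u, v), ip(v, u))                  # equal, by symmetry of G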
Example 2.4. Show that for the vectors u = (u1 , u2 ) and v = (v1 , v2 ) in R2 ,
< u, v >= 4u1 v1 + u2 v1 + u1 v2 + 4u2 v2
defines an inner product on V = R2 .
Solution: Let u = (u1, u2), v = (v1, v2) and w = (w1, w2) be in R2 and let k ∈ R. Then,
(i) Symmetry axiom:
< u, v > = 4u1 v1 + u2 v1 + u1 v2 + 4u2 v2
= 4v1 u1 + v2 u1 + v1 u2 + 4v2 u2
=< v, u >
(ii) Additivity axiom:
< u + v, w > = 4(u1 + v1 )w1 + (u1 + v1 )w2 + (u2 + v2 )w1 + 4(u2 + v2 )w2
= 4u1 w1 + 4v1 w1 + u1 w2 + v1 w2 + u2 w1 + v2 w1 + 4u2 w2 + 4v2 w2
= (4u1 w1 + u2 w1 + u1 w2 + 4u2 w2 ) + (4v1 w1 + v2 w1 + v1 w2 + 4v2 w2 )
=< u, w > + < v, w >
(iii) Homogeneity axiom:
< ku, v > = 4ku1 v1 + ku2 v1 + ku1 v2 + 4ku2 v2
= k(4u1 v1 + u2 v1 + u1 v2 + 4u2 v2 )
= k < u, v >
(iv) Positivity axiom:
< u, u > = 4u1^2 + 2u1u2 + 4u2^2
= u1^2 + 2u1u2 + u2^2 + 3(u1^2 + u2^2)
= (u1 + u2)^2 + 3(u1^2 + u2^2) ≥ 0
Moreover,
< u, u > = 0
⇔ (u1 + u2)^2 + 3(u1^2 + u2^2) = 0
⇔ (u1 + u2)^2 = 0, 3(u1^2 + u2^2) = 0
⇔ u1 = u2 = 0
⇔ u = 0̄
Thus, all axioms of inner product are satisfied.
Therefore, < u, v > = 4u1v1 + u2v1 + u1v2 + 4u2v2 is an inner product on R2.
Example 2.5. Show that for the vectors u = (u1 , u2 , u3 ) and v = (v1 , v2 , v3 ) in R3 ,
< u, v >= 5u1 v1 − 10u2 v2 + 3u3 v3
does not define inner product on R3 .
Solution: Here for nonzero vector u = (4, 2, 0) ∈ R3
< u, u > = 5u1u1 − 10u2u2 + 3u3u3
= 5(4^2) − 10(2^2) + 3(0^2)
= 0
Thus, the positivity axiom of inner product fails. Therefore,
< u, v > = 5u1v1 − 10u2v2 + 3u3v3
is not an inner product on R3.
Example 2.6. If A = [aij] and B = [bij] are vectors (matrices) in Mm×n(R) (the vector space of m × n matrices), then we define the inner product on Mm×n(R) as
< A, B > = Σ_{i,j} aij bij. (Verify).
Example 2.7. If p(x) = a0 + a1x + a2x^2 + · · · + anx^n and q(x) = b0 + b1x + b2x^2 + · · · + bnx^n are vectors (polynomials) in Pn, the real vector space of polynomials of degree at most n, then we define the inner product on Pn as
< p, q > = Σ_{i=0}^{n} ai bi. (Verify).
Example 2.8. If u, v ∈ Rn and A is an n × n invertible real matrix, then show that
< u, v > = Au^t . Av^t defines an inner product on Rn.
Solution: Let u, v and w be in Rn and let k ∈ R. Then,
(i) Symmetry axiom:
< u, v > = Aut .Av t
= Av t .Aut
=< v, u >
(ii) Additivity axiom:
< u + v, w > = A(u + v)t .Awt
= A(ut + v t ).Awt
= (Aut + Av t ).Awt
= Aut .Awt + Av t .Awt
=< u, w > + < v, w >
(iii) Homogeneity axiom:
< ku, v > = A(ku)^t . Av^t
= kAu^t . Av^t
= k(Au^t . Av^t)
= k < u, v >
(iv) Positivity axiom:
< u, u > = Aut .Aut ≥ 0
Moreover,
< u, u >= 0
⇔ Aut .Aut = 0
⇔ Aut = 0
⇔ ut = 0̄
(∵ A is invertible matrix)
⇔ u = 0̄.
Thus, all axioms of inner product are satisfied.
Therefore, < u, v > = Au^t . Av^t defines an inner product on Rn.
This inner product on Rn is called the inner product generated by the invertible matrix A.
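A quick computational illustration of this construction (a sketch assuming NumPy; the matrix A below is an arbitrary invertible choice made for the example):

    import numpy as np

    A = np.array([[1.0, 1.0],
                  [0.0, 1.0]])            # invertible: det = 1

    def ip(u, v):
        return (A @ u) @ (A @ v)          # <u, v> = Au . Av

    u, v = np.array([1.0, 2.0]), np.array([3.0, -1.0])
    assert np.isclose(ip(u, v), ip(v, u)) # symmetry
    assert ip(u, u) > 0                   # positivity for u != 0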
Remark 2.1. (i) The identity matrix In generates the Euclidean inner product on Rn.
(ii) For positive weights w1, w2, · · · , wn, the diagonal matrix
A = [ √w1  0   · · ·  0
      0   √w2  · · ·  0
      ...
      0    0   · · · √wn ]
generates the weighted inner product < ū, v̄ > = Σ_{i=1}^{n} wi ui vi on Rn.
Example 2.9. Let C[a, b] be the vector space of real valued continuous functions defined on [a, b]. For f, g ∈ C[a, b], show that
< f, g > = ∫_a^b f(x)g(x)dx
defines an inner product on C[a, b].
Solution: Let f, g and h be in C[a, b] and let k ∈ R. Then,
(i) Symmetry axiom:
< f, g > = ∫_a^b f(x)g(x)dx = ∫_a^b g(x)f(x)dx = < g, f >
(ii) Additivity axiom:
< f + g, h > = ∫_a^b [f(x) + g(x)]h(x)dx
= ∫_a^b [f(x)h(x) + g(x)h(x)]dx
= ∫_a^b f(x)h(x)dx + ∫_a^b g(x)h(x)dx
= < f, h > + < g, h >
(iii) Homogeneity axiom:
< kf, g > = ∫_a^b (kf)(x)g(x)dx = k ∫_a^b f(x)g(x)dx = k < f, g >
(iv) Positivity axiom:
< f, f > = ∫_a^b f(x)f(x)dx = ∫_a^b [f(x)]^2 dx ≥ 0
Moreover,
< f, f > = 0
⇔ ∫_a^b [f(x)]^2 dx = 0
⇔ [f(x)]^2 = 0, ∀x ∈ [a, b]   (∵ f ∈ C[a, b])
⇔ f(x) = 0, ∀x ∈ [a, b]
⇔ f = 0̄ (the identically zero function on [a, b]).
Thus, all axioms of inner product are satisfied.
Therefore, < f, g > = ∫_a^b f(x)g(x)dx is an inner product on C[a, b].
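This integral inner product is easy to evaluate for concrete functions. A symbolic sketch on C[0, 1] (assuming SymPy; the functions chosen are illustrative):

    from sympy import symbols, integrate, sin, pi

    x = symbols('x')
    f, g = x, sin(pi * x)

    # <f, g> = integral of f(x) g(x) over [0, 1]
    print(integrate(f * g, (x, 0, 1)))    # 1/pi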
Example 2.10. Find inner product on R2 if < e1 , e1 >= 2, < e2 , e2 >= 3, and
< e1 , e2 >= −1, where e1 = (1, 0) and e2 = (0, 1).
Solution: Let u = (u1, u2) and v = (v1, v2) be in R2.
Then
u = u1 e1 + u2 e2
v = v1 e1 + v2 e2
Using bilinearity, we obtain
< u, v > =< u1 e1 + u2 e2 , v1 e1 + v2 e2 >
= u1 < e1 , v1 e1 + v2 e2 > +u2 < e2 , v1 e1 + v2 e2 >
= u1 v1 < e1 , e1 > +u1 v2 < e1 , e2 > +u2 v1 < e2 , e1 > +u2 v2 < e2 , e2 >
= 2u1 v1 − u1 v2 − u2 v1 + 3u2 v2
First three axioms are obvious. It remains to check positivity axiom for u ∈ R2 .
< u, u > = 2u1^2 − 2u1u2 + 3u2^2
= (u1 − u2)^2 + u1^2 + 2u2^2 ≥ 0.
Now,
< u, u > = 0
⇔ (u1 − u2)^2 + u1^2 + 2u2^2 = 0
⇔ u1 = u2 = 0
⇔ u = 0̄.
Therefore, the required inner product is
< u, v >= 2u1 v1 − u1 v2 − u2 v1 + 3u2 v2 .
Note: Let Pn[a, b] be the real vector space of all polynomials of degree at most n. Then Pn[a, b] ⊂ C[a, b], as any polynomial is a continuous function. Therefore, for p(x), q(x) ∈ Pn[a, b],
< p(x), q(x) > = ∫_a^b p(x)q(x)dx
defines an inner product on Pn[a, b].
Exercise: 2.1
1. Let < u, v > be the Euclidean inner product on R2 , and u = (3, −2), v = (4, 5),
w = (−1, 6) and k = −4. Verify the following:
(a) < u, v >=< v, u >
(b) < u + v, w >=< u, v > + < u, w >
(c) < u, v + w >=< u, v > + < u, w >
(d) < ku, v >= k < u, v >=< u, kv >
(e) < 0̄, v >=< v, 0̄ >= 0.
2. Repeat Exercise 1 for inner product < u, v >= 4u1 v1 + 5u2 v2 .
3. Let < p, q > = Σ_{i=0}^{2} ai bi be the inner product on P2 defined for
p(x) = a0 + a1x + a2x^2, q(x) = b0 + b1x + b2x^2 ∈ P2. Compute < p, q > for the following:
(a) p(x) = −2 + x + 3x^2, q(x) = 4 − 7x^2
(b) p(x) = −5 + 2x + x^2, q(x) = 3 + 2x − 4x^2
4. Let < A, B > = Σ_{i,j=1}^{2} aij bij be the inner product on M2(R) defined for
A = [aij], B = [bij] ∈ M2(R). Compute < A, B > for the following:
(a) A = ( 3 −2 ; 4 8 ), B = ( −1 3 ; 1 1 )
(b) A = ( 1 2 ; −3 5 ), B = ( 4 6 ; 0 8 ).
5. Let u = (u1, u2, u3), v = (v1, v2, v3) ∈ R3. Determine which of the following are inner products on R3. For those which are not inner products, list the axioms that fail.
(a) < u, v > = u1v1 + u2v2
(b) < u, v > = u1^2 v1^2 + u2^2 v2^2 + u3^2 v3^2
(c) < u, v > = 5u1v1 + 6u2v2 + 3u3v3
(d) < u, v > = u1v1 − u2v2 + u3v3
6. Use the inner product
< p, q > = ∫_{−1}^{1} p(x)q(x)dx
to compute < p, q > for the following vectors in P3.
(a) p = 1 − x + x^2 + 5x^3, q = x − 3x^2
(b) p = x − 5x^3, q = 2 + 8x^2.
7. Use the inner product
< f, g > = ∫_0^1 f(x)g(x)dx
to compute < f, g > for the following vectors in C[0, 1].
(a) f = cos 2πx, g = sin 2πx
(b) f = x, g = e^x.
2.3 Length (Norm) and Distance in an Inner Product Space
Definition 2.3. Norm (Length)
Let V be an inner product space. The norm or length of a vector u ∈ V is denoted by ∥u∥ and is defined as
∥u∥ = < u, u >^(1/2).
Definition 2.4. Distance
Let V be an inner product space. The distance between two vectors u, v ∈ V is denoted by d(u, v) and is defined as
d(u, v) = ∥u − v∥.
Theorem 2.2. The norm in an inner product space V satisfies the following properties for any u, v ∈ V and c ∈ R:
1. (N1 ): ∥u∥ ≥ 0, ∥u∥ = 0 if and only if u = 0
2. (N2 ): ∥cu∥ = |c|∥u∥
3. (N3 ): ∥u + v∥ ≤ ∥u∥ + ∥v∥.
Theorem 2.3. Parallelogram Identity
In inner product space V , for any u, v ∈ V
∥u + v∥2 + ∥u − v∥2 = 2∥u∥2 + 2∥v∥2
Proof.
∥u + v∥2 + ∥u − v∥2 =< u + v, u + v > + < u − v, u − v >
=< u, u > +2 < u, v > + < v, v > + < u, u >
− 2 < u, v > + < v, v >
= 2 < u, u > +2 < v, v >
= 2∥u∥2 + 2∥v∥2 .
Theorem 2.4. In inner product space V , for any u, v ∈ V
∥u + v∥2 − ∥u − v∥2 = 4 < u, v >
Proof.
∥u + v∥2 − ∥u − v∥2 =< u + v, u + v > − < u − v, u − v >
=< u, u > +2 < u, v > + < v, v >
− (< u, u > −2 < u, v > + < v, v >)
= 4 < u, v > .
Example 2.11. Suppose u, v and w are vectors in an inner product space
such that < u, v > = 2, < v, w > = −3, < u, w > = 5, ∥u∥ = 1, ∥v∥ = 2, ∥w∥ = 7.
Evaluate the following expressions.
(a) < u + v, v + w >
(b) < 2u − w, 3u + 2w >
(c) < u − v − 2w, 4u + v >
(d) ∥ 2w − v ∥
(e) ∥ u − 2v + 4w ∥
Solution:
(a)
< u + v, v + w > = < u + v, v > + < u + v, w >
= < u, v > + < v, v > + < u, w > + < v, w >
= 2 + ∥v∥^2 + 5 + (−3)
= 4 + 2^2 = 8.
(b)
< 2u − w, 3u + 2w > = (2)(3) < u, u > + (2)(2) < u, w > + (−1)(3) < w, u > + (−1)(2) < w, w >
= 6∥u∥^2 + 4(5) − 3(5) − 2∥w∥^2
= 6(1^2) + 5 − 2(7^2)
= −87.
(c)
< u − v − 2w, 4u + v > = 4 < u, u > + (−4) < v, u > + (−8) < w, u > + < u, v > + (−1) < v, v > + (−2) < w, v >
= 4∥u∥^2 − 4(2) − 8(5) + 2 − ∥v∥^2 + (−2)(−3)
= 4(1^2) − 8 − 40 + 2 − (2^2) + 6
= −40.
(d)
∥2w − v∥^2 = < 2w − v, 2w − v >
= (2)(2) < w, w > + (2)(−1) < w, v > + (−1)(2) < v, w > + (−1)(−1) < v, v >
= 4∥w∥^2 − 4 < v, w > + ∥v∥^2
= 4(7^2) − 4(−3) + 2^2
= 196 + 12 + 4 = 212
∴ ∥2w − v∥ = √212.
(e)
∥u − 2v + 4w∥^2 = < u − 2v + 4w, u − 2v + 4w >
= ∥u∥^2 + (−4) < u, v > + 8 < u, w > + 4∥v∥^2 + (−16) < v, w > + 16∥w∥^2
= 1^2 − 4(2) + 8(5) + 4(2^2) − 16(−3) + 16(7^2)
= 881
∴ ∥u − 2v + 4w∥ = √881.
2.4 Angle and Orthogonality in an Inner Product Space
We know that the angle θ between two nonzero vectors u, v ∈ R2 or R3 is given by
u.v = ∥u∥∥v∥ cos θ   (2.1)
or alternatively,
cos θ = u.v / (∥u∥∥v∥)   (2.2)
From this we expect that the angle θ between two nonzero vectors u, v in a general inner product space V will be given by
cos θ = < u, v > / (∥u∥∥v∥)   (2.3)
Formula (2.3) is well defined only when
| < u, v > | / (∥u∥∥v∥) ≤ 1, as |cos θ| ≤ 1.
For this we prove the Cauchy-Schwarz inequality in the following theorem.
Theorem 2.5. Cauchy-Schwarz inequality
If u, v are vectors in a real inner product space V then
< u, v >^2 ≤ < u, u > < v, v >.   (2.4)
Equivalently, | < u, v > | ≤ ∥u∥∥v∥.
Proof. Case i) If u = 0 or v = 0 then both sides of (2.4) are zero and the result holds.
Case ii) Assume that u ≠ 0. Let a = < u, u >, b = 2 < u, v >, c = < v, v > and let t be any real number. Then by the positivity axiom,
0 ≤ < tu + v, tu + v > = < u, u > t^2 + 2 < u, v > t + < v, v > = at^2 + bt + c = f(t).
We have f′(t) = 2at + b and f′′(t) = 2a = 2 < u, u > > 0.
f′(t) = 0 ⇒ t = −b/(2a) is the critical point of f(t).
By the second derivative test, f(t) is minimum at t = −b/(2a). Therefore,
0 ≤ f(−b/(2a)) = (−b^2 + 4ac)/(4a)
∴ b^2 − 4ac ≤ 0.
Substituting the values of a, b and c, we get
4 < u, v >^2 − 4 < u, u > < v, v > ≤ 0.
After simplification we get
| < u, v > | ≤ ∥u∥∥v∥.
This completes the proof.
Corollary 2.1. Triangle inequality
If u, v are any two vectors in a real inner product space V then
∥u + v∥ ≤ ∥u∥ + ∥v∥.
Proof. By definition, we have
∥u + v∥2 =< u + v, u + v >
= < u, u > +2 < u, v > + < v, v >
≤ < u, u > +2| < u, v > |+ < v, v >
= ∥u∥2 + 2∥u∥∥v∥ + ∥v∥2
(By Cauchy − Schwarz Inequality)
= (∥u∥ + ∥v∥)2
Taking the positive square root on both sides, we get
∥u + v∥ ≤ ∥u∥ + ∥v∥.
This completes the proof.
Definition 2.5. Orthogonal vectors
Let V be a real inner product space. Two vectors u, v ∈ V are orthogonal if
< u, v > = 0.
Theorem 2.6. Generalized Pythagoras theorem
If u, v are any two orthogonal vectors in a real inner product space V then
∥u + v∥2 = ∥u∥2 + ∥v∥2 .
Proof. Since u, v are orthogonal vectors in a real inner product space V ,
< u, v > = 0.   (2.5)
By definition,
∥u + v∥2 = < u + v, u + v >
= < u, u > + 2 < u, v > + < v, v >
= < u, u > + 2(0) + < v, v >
= ∥u∥2 + ∥v∥2
This completes the proof.
Theorem 2.7. Converse of Generalized Pythagoras theorem
If u, v are two vectors in a real inner product space V such that
∥u + v∥2 = ∥u∥2 + ∥v∥2 .
(2.6)
then u, v are orthogonal vectors.
Proof. From definition, we have
∥u + v∥2 =< u + v, u + v >
= < u, u > +2 < u, v > + < v, v >
= ∥u∥2 + 2 < u, v > +∥v∥2
(2.7)
From (2.6) and (2.7), we get
< u, v >= 0.
∴ u, v are orthogonal vectors.
This completes the proof.
2.5 Orthonormal Basis and Orthonormal Projection
In many practical problems involving vector spaces, we can choose a basis appropriate to the requirement at hand. In inner product spaces the solution of a problem is often greatly simplified by choosing a basis in which any two distinct vectors are orthogonal. In this section, we study orthogonal bases of an inner product space.
Definition 2.6. Orthogonal set, Orthonormal set and Orthonormal basis
Let V be a real inner product space. A nonempty subset S of V is an orthogonal set if
< u, v > = 0, ∀ u, v ∈ S with u ≠ v.
A vector v in an inner product space V is called a unit vector if ∥v∥ = 1.
Moreover, an orthogonal set S is orthonormal if ∥v∥ = 1, ∀ v ∈ S.
A basis B of an inner product space is an orthonormal basis if B is an orthonormal set.
Theorem 2.8. (Linearly independence of orthogonal set of nonzero vectors)
If S = {v1 , v2 , ..., vn } is an orthogonal set of nonzero vectors in an inner product
space V , then S is linearly independent.
Proof. To prove S is linearly independent we have to show that if Σ_{i=1}^{n} ki vi = 0 then each scalar ki = 0.
Assume that Σ_{i=1}^{n} ki vi = 0. Then for each vj ∈ S we have
< Σ_{i=1}^{n} ki vi, vj > = < 0, vj > = 0
Σ_{i=1, i≠j}^{n} ki < vi, vj > + kj < vj, vj > = 0   (2.8)
Now S is an orthogonal set of nonzero vectors, which implies that
< vi, vj > = 0 for i ≠ j, and < vj, vj > ≠ 0.   (2.9)
Using equation (2.9) in (2.8), we get kj = 0 for each j = 1, 2, ..., n.
Therefore, S is a linearly independent set in V .
This completes the proof.
Example 2.12. Let P2 have the inner product < p, q > = a0b0 + a1b1 + a2b2 for
p(x) = a0 + a1x + a2x^2 and q(x) = b0 + b1x + b2x^2 ∈ P2.
Let p(x) = 1 + 2x + 2x^2, q(x) = 2 + x − 2x^2, r(x) = 2 − 2x + x^2 in P2. Show that the set of vectors B = {p, q, r} is an orthogonal basis but not an orthonormal basis for the inner product space P2.
Solution: We have
< p, q > = (1)(2) + (2)(1) + (2)(−2) = 0
< p, r > = (1)(2) + (2)(−2) + (2)(1) = 0
< q, r > = (2)(2) + (1)(−2) + (−2)(1) = 0
∥p∥ = √(1^2 + 2^2 + 2^2) = 3 ≠ 1
From this, B = {p, q, r} is an orthogonal set, and hence a linearly independent set, in the inner product space P2, but it is not an orthonormal set.
Moreover, n(B) = dim P2 = 3.
Therefore, B = {p, q, r} is an orthogonal basis but not an orthonormal basis for the inner product space P2.
Example 2.13. Let u = (cos t, sin t), v = (−sin t, cos t) in R2. Show that the set of vectors B = {u, v} is an orthonormal basis for the Euclidean inner product space R2 for any real t.
Solution: We have
< u, v > = −cos t sin t + sin t cos t = 0
∥u∥ = √(cos^2 t + sin^2 t) = 1
∥v∥ = √((−sin t)^2 + cos^2 t) = 1
From this, B = {u, v} is an orthonormal set, and hence a linearly independent set, in the Euclidean inner product space R2 for any real t.
Moreover, n(B) = dim R2 = 2.
Therefore, B = {u, v} is an orthonormal basis of the Euclidean inner product space R2 for any real t.
Example 2.14. Let u = (cos t, sin t, 0), v = (−sin t, cos t, 0), w = (0, 0, 1) in R3. Show that the set of vectors B = {u, v, w} is an orthonormal basis for the Euclidean inner product space R3 for any real t.
Solution: We have
< u, v > = −cos t sin t + sin t cos t + 0 = 0
< u, w > = 0 + 0 + 0 = 0
< v, w > = 0 + 0 + 0 = 0
∥u∥ = √(cos^2 t + sin^2 t + 0) = 1
∥v∥ = √((−sin t)^2 + cos^2 t + 0) = 1
∥w∥ = √(0^2 + 0^2 + 1^2) = 1
From this, B = {u, v, w} is an orthonormal set, and hence a linearly independent set, in the Euclidean inner product space R3 for any real t.
Moreover, n(B) = dim R3 = 3.
Therefore, B = {u, v, w} is an orthonormal basis of the Euclidean inner product space R3 for any real t.
Definition 2.7. Orthogonal Projection
Let u and v be two vectors in an inner product space V . The vector αu, where α is a nonzero scalar, is called the orthogonal projection of the vector v on the vector u if αu and v − αu are orthogonal, i.e. < αu, v − αu > = 0.
This is equivalent to
orthogonal projection of v on u = Proj_u v = (< v, u > / < u, u >) u.
The orthogonal component of v to u is v − Proj_u v.
The orthogonal projection of u onto a subspace W with orthogonal basis {v1, v2, · · · , vk} is denoted by Proj_W u and is defined as
Proj_W u = Σ_{i=1}^{k} (< u, vi > / < vi, vi >) vi.
Example 2.15. Resolve the vector v = (1, 2, 1) into two perpendicular components, one along u = (2, 1, 2).
Solution: Let u = (2, 1, 2), v = (1, 2, 1).
The orthogonal projection of the vector v along u is
Proj_u v = (< v, u > / < u, u >) u = (6/9)(2, 1, 2) = (2/3)(2, 1, 2).
The orthogonal component of v to u is
v − Proj_u v = (1, 2, 1) − (2/3)(2, 1, 2) = (1/3)(−1, 4, −1).
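The computation of Example 2.15 in code (a sketch assuming NumPy; illustrative only):

    import numpy as np

    u = np.array([2.0, 1.0, 2.0])
    v = np.array([1.0, 2.0, 1.0])

    proj = (v @ u) / (u @ u) * u         # Proj_u v = (2/3)(2, 1, 2)
    perp = v - proj                      # (1/3)(-1, 4, -1)

    assert np.isclose(proj @ perp, 0.0)  # the two components are orthogonal
    print(proj, perp)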
2.6 Gram-Schmidt Process of Orthogonalization
Let S = {v1, v2, ..., vn} be a basis of an inner product space V . Then:
Step 1: Let u1 = v1.
Step 2: Let
u2 = v2 − (< v2, u1 > / ∥u1∥^2) u1.
Step 3: Let
u3 = v3 − (< v3, u1 > / ∥u1∥^2) u1 − (< v3, u2 > / ∥u2∥^2) u2.
In this way, suppose u1, u2, ..., ur have been found. Then:
Step 4: Let
u_{r+1} = v_{r+1} − Σ_{i=1}^{r} (< v_{r+1}, ui > / ∥ui∥^2) ui, r = 1, 2, ..., n − 1.
The constructed vectors give an orthogonal set S′ = {u1, u2, ..., un} containing n vectors, and it is linearly independent. Therefore S′ is an orthogonal basis. Normalizing S′, we get the orthonormal basis
B = { wi = ui/∥ui∥ / i = 1, 2, ..., n } of V .
This completes the process.
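The steps above translate directly into a short program. A sketch of the classical Gram-Schmidt process for the Euclidean inner product (assuming NumPy; it reproduces Example 2.16 below):

    import numpy as np

    def gram_schmidt(vectors):
        """Orthonormalize a list of linearly independent vectors."""
        ortho = []
        for v in vectors:
            u = np.array(v, dtype=float)
            for b in ortho:                    # subtract projections onto the
                u = u - (v @ b) / (b @ b) * b  # previously constructed u_i's
            ortho.append(u)
        return [b / np.linalg.norm(b) for b in ortho]

    B = [np.array([1.0, 1.0, 1.0]),
         np.array([0.0, 1.0, 1.0]),
         np.array([0.0, 0.0, 1.0])]
    for w in gram_schmidt(B):
        print(np.round(w, 4))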
Example 2.16. Let R3 have the Euclidean inner product. Use Gram-Schmidt process to convert basis B = {u1 , u2 , u3 } where u1 = (1, 1, 1), u2 = (0, 1, 1), u3 = (0, 0, 1)
into an orthonormal basis.
Solution: By the Gram-Schmidt process,
Step 1: v1 = u1 = (1, 1, 1), ∥v1∥ = √(1^2 + 1^2 + 1^2) = √3
Step 2:
v2 = u2 − (< u2, v1 > / ∥v1∥^2) v1
   = (0, 1, 1) − (2/3)(1, 1, 1) = (−2/3, 1/3, 1/3)
∥v2∥ = √((−2/3)^2 + (1/3)^2 + (1/3)^2) = √6/3
Step 3:
v3 = u3 − (< u3, v1 > / ∥v1∥^2) v1 − (< u3, v2 > / ∥v2∥^2) v2
   = (0, 0, 1) − (1/3)(1, 1, 1) − ((1/3)/(2/3))(−2/3, 1/3, 1/3)
   = (0, −1/2, 1/2)
∥v3∥ = √(0^2 + (−1/2)^2 + (1/2)^2) = 1/√2
The constructed vectors v1 = (1, 1, 1), v2 = (−2/3, 1/3, 1/3), v3 = (0, −1/2, 1/2) form an orthogonal basis of R3.
Now normalizing v1, v2, v3 we get
w1 = v1/∥v1∥ = (1/√3, 1/√3, 1/√3)
w2 = v2/∥v2∥ = (1/√6)(−2, 1, 1)
w3 = v3/∥v3∥ = (1/√2)(0, −1, 1)
The normalized vectors w1, w2, w3 form an orthonormal basis of R3.
Example 2.17. Let R3 have the Euclidean inner product. Use Gram-Schmidt process to convert basis B = {u1 , u2 , u3 } where u1 = (1, 0, 1), u2 = (−1, 1, 0), u3 =
(−3, 2, 0) into an orthonormal basis.
Solution: By the Gram-Schmidt process,
Step 1: v1 = u1 = (1, 0, 1), ∥v1∥ = √(1^2 + 0^2 + 1^2) = √2
Step 2:
v2 = u2 − (< u2, v1 > / ∥v1∥^2) v1
   = (−1, 1, 0) − ((−1)/2)(1, 0, 1)
   = (1/2)(−1, 2, 1)
Step 3:
v3 = u3 − (< u3, v1 > / ∥v1∥^2) v1 − (< u3, v2 > / ∥v2∥^2) v2
   = (−3, 2, 0) − ((−3)/2)(1, 0, 1) − ((7/2)/(3/2))(1/2)(−1, 2, 1)
   = (1/3)(−1, −1, 1).
The constructed vectors v1 = (1, 0, 1), v2 = (1/2)(−1, 2, 1), v3 = (1/3)(−1, −1, 1) form an orthogonal basis of R3.
Now normalizing v1, v2, v3 we get
w1 = v1/∥v1∥ = (1/√2)(1, 0, 1)
w2 = v2/∥v2∥ = (1/√6)(−1, 2, 1)
w3 = v3/∥v3∥ = (1/√3)(−1, −1, 1).
The normalized vectors w1, w2, w3 form an orthonormal basis of R3.
Example 2.18. Let P2 over R have the inner product
< p, q > = ∫_{−1}^{1} p(t)q(t)dt,
where p and q are polynomials in P2. Use the Gram-Schmidt process to convert the basis B = {u1, u2, u3}, where u1 = 1, u2 = 1 + t, u3 = t + t^2, into an orthonormal basis.
Solution: By the Gram-Schmidt process,
Step 1: v1 = u1 = 1, ∥v1∥^2 = ∫_{−1}^{1} 1 dt = 2
Step 2:
v2 = u2 − (< u2, v1 > / ∥v1∥^2) v1
   = 1 + t − (< 1 + t, 1 > / 2) · 1 = (1 + t) − 2/2 = t
Step 3:
v3 = u3 − (< u3, v1 > / ∥v1∥^2) v1 − (< u3, v2 > / ∥v2∥^2) v2
   = t + t^2 − (< t + t^2, 1 > / 2) · 1 − (< t + t^2, t > / (2/3)) · t
   = t + t^2 − ((2/3)/2) − ((2/3)/(2/3)) t = t^2 − 1/3.
The constructed vectors v1 = 1, v2 = t, v3 = t^2 − 1/3 form an orthogonal basis of P2.
Now normalizing v1, v2, v3 we get
w1 = v1/∥v1∥ = √2/2
w2 = v2/∥v2∥ = (√6/2) t
w3 = v3/∥v3∥ = (√10/4)(3t^2 − 1).
The normalized vectors w1, w2, w3 form an orthonormal basis of P2.
Exercise: 2.2
1. Let R2 have the Euclidean inner product. Use the Gram-Schmidt process to
transform basis vectors u1 = (1, −3), u2 = (2, 2) into an orthonormal basis.
2. Let R3 have the Euclidean inner product. Use the Gram-Schmidt process to
transform basis vectors u1 = (1, 1, 1), u2 = (−1, 1, 0), u3 = (1, 2, 1) into an
orthonormal basis.
3. Let R4 have the Euclidean inner product. Use the Gram-Schmidt process to
transform basis vectors u1 = (0, 2, 1, 0), u2 = (1, −1, 0, 0), u3 = (1, 2, 0, −1), u4 =
(1, 0, 0, 1) into an orthonormal basis.
4. Let R3 have the Euclidean inner product. Use the Gram-Schmidt process to
transform basis vectors u1 = (1, 1, 1), u2 = (−1, 1, 0), u3 = (1, 2, 1) into an
orthonormal basis.
5. Verify that the vectors v1 = (1, −1, 2, 1), v2 = (−2, 2, 3, 2), v3 = (1, 2, 0, −1), v4 =
(1, 0, 0, 1) form an orthogonal basis for Euclidean space R4 .
6. Verify that the vectors v1 = (−3/5, 4/5, 0), v2 = (4/5, 3/5, 0), v3 = (0, 0, 1) form
an orthogonal basis for R3 .
7. Let P2 have the inner product
< p, q > = ∫_{−1}^{1} p(x)q(x)dx.
Apply the Gram-Schmidt process to transform the basis vectors u1 = 1, u2 = x, u3 = x^2 into an orthonormal basis.
Chapter 3
LINEAR TRANSFORMATION
3.1 Introduction
In this chapter we shall study functions from an arbitrary vector space to another
arbitrary vector space and its various properties. The aim of such study is to show
how a linear transformation (mapping or function) can be represented by a matrix.
The matrix of a linear transformation is uniquely determined for a particular basis.
For different choices of the bases the same linear transformation can be represented
by different matrices.
3.2 Definition and Examples of Linear Transformation
Definition 3.1. Linear Transformation (or mapping or function)
Let V and W be two real vector spaces. A function T : V −→ W is said to be a linear transformation if it satisfies the following axioms for all vectors u, v in V and every scalar k:
L1 : T(u + v) = T(u) + T(v),
L2 : T(ku) = kT(u).
Theorem 3.1. A function T : V −→ W is linear transformation if and only if
T (k1 u1 + k2 u2 ) = k1 T (u1 ) + k2 T (u2 )
(3.1)
for any vectors u1 and u2 in V and any scalars k1 and k2 .
Proof. : Suppose T is a linear transformation. Then for vectors u1 and u2 in V and scalars k1 and k2, we have
T (k1 u1 + k2 u2 ) = T (k1 u1 ) + T (k2 u2 )
(By L1)
= k1 T (u1 ) + k2 T (u2 )
(By L2)
Thus, (3.1) holds.
Conversely, suppose (3.1) holds. Then for u1 = u, u2 = v ∈ V and k1 = 1 = k2, we have
T(u + v) = T(u) + T(v).
Therefore, T satisfies L1.
Again, for k1 = k, k2 = 0 and any u1 = u ∈ V , we have
T(ku) = kT(u).
∴ L2 is satisfied.
Therefore T is a linear transformation.
Remark 3.1. In general, if T : V → W is a linear transformation, then
T (k1 u1 + k2 u2 + · · · + kn un ) = k1 T (u1 ) + k2 T (u2 ) + · · · + kn T (un )
for any vectors u1 , u2 , · · · , un in V and any scalars k1 , k2 , · · · , kn .
Definition 3.2. Domain, Codomain and range of linear transformation
If function T : V −→ W is linear transformation then V is called the domain of T
and W is called co-domain of T , while T (V ) is range of linear transformation T .
Definition 3.3. One-to-one Linear Transformation
A transformation T : V −→ W is one-to-one (injective) if distinct elements of V are transformed into distinct elements of W , that is, if u ≠ v ⇒ T(u) ≠ T(v).
Definition 3.4. Onto Linear Transformation
A transformation T : V −→ W is said to be onto (surjective) if for each w ∈ W there exists v ∈ V such that T(v) = w, i.e. if the range of T is the entire co-domain W .
A linear transformation which is both one-one and onto is called bijective.
A bijective linear transformation is called isomorphism.
Example 3.1. Identity transformation
The transformation I : V −→ V defined as I(u) = u for every u in V is a linear transformation from V to V . This transformation is called the identity transformation on V .
Example 3.2. Zero (Null) transformation
The transformation Θ : V −→ W defined as Θ(u) = 0̄ for every u in V is a linear transformation from V to W . This transformation is called the zero (null) transformation.
Example 3.3. Matrix transformation
Let A be a fixed m × n matrix with real entries,
A = [ a11 a12 · · · a1n
      a21 a22 · · · a2n
      ...
      am1 am2 · · · amn ],
and let X = (x1, x2, · · · , xn)^t be any vector in Rn. Then AX is a vector in Rm.
Define a function TA : Rn −→ Rm as
TA(X) = AX.
Then TA is a linear transformation, called a matrix transformation.
Example 3.4. Let transformation T : R2 −→ R2 be defined as
T (x1 , x2 ) = (x1 − x2 , x1 + x2 ).
Then T is a linear transformation.
Solution: Let u = (x1 , x2 ), v = (y1 , y2 ) ∈ R2 then u + v = (x1 + y1 , x2 + y2 ).
Consider
T (u + v) = T (x1 + y1 , x2 + y2 )
= (x1 + y1 − x2 − y2 , x1 + y1 + x2 + y2 )
= (x1 − x2 , x1 + x2 ) + (y1 − y2 , y1 + y2 )
= T (x1 , x2 ) + T (y1 , y2 )
= T (u) + T (v)
Also, for scalar k
T (ku) = T [k(x1 , x2 )]
= T [(kx1 , kx2 )]
= (kx1 − kx2 , kx1 + kx2 )
= [k(x1 − x2 ), k(x1 + x2 )]
= k(x1 − x2 , x1 + x2 )
= kT (u)
∴ T is a linear transformation.
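Example 3.4 is in fact a matrix transformation: T(x1, x2) = (x1 − x2, x1 + x2) is multiplication by the matrix ( 1 −1 ; 1 1 ), which explains its linearity. A numerical sketch (assuming NumPy; illustrative only):

    import numpy as np

    M = np.array([[1.0, -1.0],
                  [1.0,  1.0]])

    def T(x):
        return M @ x          # T(x1, x2) = (x1 - x2, x1 + x2)

    u, v, k = np.array([2.0, 3.0]), np.array([-1.0, 4.0]), 5.0
    assert np.allclose(T(u + v), T(u) + T(v))   # axiom L1
    assert np.allclose(T(k * u), k * T(u))      # axiom L2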
Example 3.5. Let the transformation T : R2 −→ R2 be defined as
T(x1, x2) = (x1^2, x1 + x2).
Then T is not a linear transformation.
Solution: Let u = (x1 , x2 ), v = (y1 , y2 ) ∈ R2 then u + v = (x1 + y1 , x2 + y2 ).
Consider
T(u + v) = T(x1 + y1, x2 + y2)
= [(x1 + y1)^2, x1 + y1 + x2 + y2]
= [x1^2 + y1^2 + 2x1y1, x1 + y1 + x2 + y2]   (3.2)
and
T(u) + T(v) = (x1^2, x1 + x2) + (y1^2, y1 + y2)
= (x1^2 + y1^2, x1 + x2 + y1 + y2)   (3.3)
From equations (3.2) and (3.3), we see that T(u + v) ≠ T(u) + T(v) in general (whenever x1y1 ≠ 0).
∴ T is not a linear transformation.
Example 3.6. Let transformation T : R2 −→ R3 be defined as
T (x, y) = (2x, x + y, x − y).
Show that T is a linear transformation.
Solution: Let u = (x1, y1), v = (x2, y2) ∈ R2; then u + v = (x1 + x2, y1 + y2).
Consider
T (u + v) = T (x1 + x2 , y1 + y2 )
= [2(x1 + x2 ), x1 + x2 + y1 + y2 , x1 + x2 − y1 − y2 ]
= (2x1 , x1 + y1 , x1 − y1 ) + (2x2 , x2 + y2 , x2 − y2 )
= T (u) + T (v)
Also, for scalar k
T (ku) = T [k(x1 , y1 )]
= T (kx1 , ky1 )
= (2kx1 , kx1 + ky1 , kx1 − ky1 )
= k(2x1 , x1 + y1 , x1 − y1 )
= kT (x1 , y1 )
= kT (u)
∴ T is a linear transformation.
Example 3.7. Let the transformation T : V −→ V be defined as T(u) = ku for u in V . Then T is linear for any fixed scalar k.
Solution: For any u, v in V
T (u + v) = k(u + v)
= ku + kv
= T (u) + T (v)
Also, for any scalar α,
T (αu) = k(αu)
= (kα)u
= (αk)u
= α(ku)
= αT (u)
∴ T is a linear transformation.
Example 3.8. Let V be a vector space of all functions defined on [0, 1] and W be
subspace of V consisting of all continuously differentiable functions on [0, 1]. Let
D : W −→ V be defined as
D(f) = f′(x),
where f′(x) is the derivative of f(x). Show that D is a linear transformation.
Solution: For any f , g ∈ W .
D(f + g) = (f + g)′ (x)
= f ′ (x) + g ′ (x)
= D(f ) + D(g)
Also, for any scalar k
D(kf ) = (kf )′ (x)
= kf ′ (x)
= kD(f )
∴ D is a linear transformation.
Example 3.9. Let p = p(x) = a0 + a1x + · · · + anx^n be a polynomial in Pn and let T : Pn −→ Pn+1 be defined as
T(p) = T(p(x)) = xp(x) = a0x + a1x^2 + · · · + anx^(n+1).
Show that T is a linear transformation.
Solution: For any two polynomials p1 , p2 in Pn
T (p1 + p2 ) = T (p1 (x) + p2 (x))
= x(p1 (x) + p2 (x))
= x(p1 (x)) + x(p2 (x))
= T (p1 ) + T (p2 )
Also, for any scalar k
T (kp1 ) = T (kp1 (x))
= x(kp1 (x))
= k(xp1 (x))
= kT (p1 )
∴ T is a linear transformation.
Example 3.10. Let V be a vector space of continuous functions defined on [0, 1] and let T : V −→ R be defined as
T(f) = ∫_0^1 f(x)dx
for f in V . Show that T is a linear transformation.
Solution: For any f, g in V ,
T(f + g) = ∫_0^1 (f + g)(x)dx
         = ∫_0^1 (f(x) + g(x))dx
         = ∫_0^1 f(x)dx + ∫_0^1 g(x)dx
         = T(f) + T(g)
Also, for any scalar k,
T(kf) = ∫_0^1 (kf)(x)dx
      = ∫_0^1 kf(x)dx
      = k ∫_0^1 f(x)dx
      = kT(f)
∴ T is a linear transformation.
Example 3.11. Determine whether the function T : R2 −→ R2 defined as
T(x, y) = (x^2, y)
is linear.
Solution: Let u = (x1, x2), v = (y1, y2) ∈ R2; then u + v = (x1 + y1, x2 + y2).
Consider
T(u + v) = T(x1 + y1, x2 + y2) = ((x1 + y1)^2, x2 + y2),
while
T(u) + T(v) = (x1^2, x2) + (y1^2, y2) = (x1^2 + y1^2, x2 + y2).
These differ whenever 2x1y1 ≠ 0, so T does not preserve additivity.
Also, we can check that T does not preserve scalar multiplication.
Alternatively, we can check this failure numerically.
For example,
T ((1, 1) + (2, 0)) = T (3, 1)
= (9, 1)
̸= T (1, 1) + T (2, 0) = (5, 1).
Exercise: 3.1
1. Determine whether the following transformation T : R2 −→ R2 is linear; if not, justify.
(a) T (x, y) = (y, y) (b) T (x, y) = (−y, x) (c) T (x, y) = (2x, y)
(d) T(x, y) = (x, y^2) (e) T(x, y) = (x, 0) (f) T(x, y) = (2x + y, x − y)
(g) T(x, y) = (x + 1, y) (h) T(x, y) = (∛x, ∛y).
(i) T(x, y) = (x + 2y, 3x − y).
2. Determine whether the following transformation T : R3 −→ R2 is linear; if not, justify.
(a) T (x, y, z) = (x, x + y + z).
(b) T (x, y, z) = (1, 1).
(c) T (x, y, z) = (0, 0).
(d) T (x, y, z) = (3x − 4y, 2x − 5z).
(e) T (x, y, z) = (2x − y + z, y − 4z).
3. Determine whether the following transformation T : R3 −→ R3 is linear; if not, justify.
(a) T (x, y, z) = (x, x + y, z).
(b) T (x, y, z) = (1, 1, 1).
(c) T (x, y, z) = (0, 0, 0).
(d) T (x, y, z) = (x + 1, y + 1, z − 1).
(e) T (x, y, z) = (x, 2y, 3z).
(f) T (x, y, z) = (ex , ey , 0).
(g) T (x, y, z) = (−x, −y, −z).
(h) T (x, y, z) = (x, y, 0).
(i) T (x, y, z) = (x, 1, 0).
(j) T (x, y, z) = (x − y, y − z, z − x).
4. Determine whether the following transformation T : M2×2 −→ R is linear; if not, justify.
(a) T( ( a b ; c d ) ) = 3a − 4b + c − d.
(b) T( ( a b ; c d ) ) = det ( a b ; c d ).
(c) T( ( a b ; c d ) ) = b + c.
(d) T( ( a b ; c d ) ) = a^2 + b^2.
5. Determine whether the following transformation T : P2 −→ P2 is linear; if not, justify.
(a) T(a0 + a1x + a2x^2) = a0 + a1(x + 1) + a2(x + 1)^2.
(b) T(a0 + a1x + a2x^2) = (a0 + 1) + (a1 + 1)x + (a2 + 1)x^2.
3.3 Properties of Linear Transformation
Theorem 3.2. If T : V −→ W is a linear transformation, then
(a) T (0) = 0
Proof. : We have
T(0) = T(0.u)
     = 0.T(u)   (T is L.T.)
     = 0
(b) T(−u) = −T(u), for all u in V
Proof. : We have,
T (−u) = T ((−1)u)
= (−1)T (u)
(T is L.T.)
= −T (u)
(c) T (u − v) = T (u) − T (v), for all u and v in V
Proof. : We have
T(u − v) = T(u + (−1)v)
         = T(u) + T((−1)v)   (T is L.T.)
         = T(u) + (−1)T(v)   (T is L.T.)
         = T(u) − T(v)
(d) T(u + u + · · · + u (n times)) = T(nu) = nT(u), where n is a positive integer.
(e) T(−mu) = mT(−u) = −mT(u), where m is a positive integer.
(f) T((m/n)u) = (m/n)T(u), where m and n are integers and n ≠ 0.
3.4 Equality of Two Linear Transformations
Definition 3.5. Two linear transformations T1 and T2 from V −→ W are equal if
and only if T1 (u) = T2 (u) for every u in V .
Theorem 3.3. If {u1 , u2 , · · · , un } is a basis for V and w1 , w2 , · · · , wn are arbitrary
n vectors in W , not necessarily distinct, then there exists a unique linear transformation T : V −→ W such that
T (ui ) = wi , i = 1, 2, · · · , n.
Proof. : Since, B = {u1 , u2 , · · · , un } is a basis for V .
∴ L(B) = V.
For u in V , there exist scalars α1, α2, · · · , αn such that
u = α1 u1 + α2 u2 + · · · + αn un
(3.4)
In fact, α1 , α2 , · · · , αn are co-ordinates of u relative to the basis B.
Therefore, we define a map T : V −→ W as
T(u) = α1w1 + α2w2 + · · · + αnwn.   (3.5)
Claim(I): T is linear transformation
Let u, v in V ,
u = α1 u1 + α2 u2 + · · · + αn un
v = β1 u1 + β2 u2 + · · · + βn un
u + v = (α1 + β1 )u1 + (α2 + β2 )u2 + · · · + (αn + βn )un
By definition of T , we have
T (u + v) = (α1 + β1 )w1 + (α2 + β2 )w2 + · · · + (αn + βn )wn
= (α1 w1 + α2 w2 + · · · + αn wn ) + (β1 w1 + β2 w2 + · · · + βn wn )
= T (u) + T (v)
Also, for any scalar k
ku = k(α1 u1 + α2 u2 + · · · + αn un )
ku = (kα1 )u1 + (kα2 )u2 + · · · + (kαn )un
By definition of T ,
T(ku) = T[(kα1)u1 + (kα2)u2 + · · · + (kαn)un]
      = (kα1)w1 + (kα2)w2 + · · · + (kαn)wn
      = k(α1w1 + α2w2 + · · · + αnwn)
      = kT(u)
∴ T is linear transformation.
Since,
ui = 0.u1 + 0.u2 + · · · + 1.ui + · · · + 0.un
T (ui ) = 0.w1 + 0.w2 + · · · + 1.wi + · · · + 0.wn
= wi
∴ T is linear transformation such that
T (ui ) = wi .
Claim(II): Uniqueness of linear transformation
Suppose there are two linear transformation T and T ′ such that,
T (ui ) = wi
and
T ′ (ui ) = wi , i = 1, 2, · · · , n.
Then for u in V ,
T (u) = α1 w1 + α2 w2 + · · · + αn wn
= α1 T ′ (u1 ) + α2 T ′ (u2 ) + · · · + αn T ′ (un )
= T ′ (α1 u1 ) + T ′ (α2 u2 ) + · · · + T ′ (αn un )
= T ′ (α1 u1 + α2 u2 + · · · + αn un )
= T ′ (u)
T (u) = T ′ (u)
T = T′
Hence, proof is completed.
Example 3.12. Let u1 = (1, 1, −1), u2 = (4, 1, 1), u3 = (1, −1, 2) be basis vectors of R3 and let T : R3 −→ R2 be the linear transformation such that T(u1) = (1, 0), T(u2) = (0, 1), T(u3) = (1, 1). Find T .
Solution: Since {u1, u2, u3} is a basis of R3, for (a, b, c) in R3 there exist scalars α1, α2, α3 such that
(a, b, c) = α1u1 + α2u2 + α3u3
= α1(1, 1, −1) + α2(4, 1, 1) + α3(1, −1, 2)
= (α1 + 4α2 + α3, α1 + α2 − α3, −α1 + α2 + 2α3)
Hence,
α1 + 4α2 + α3 = a
α1 + α2 − α3 = b
−α1 + α2 + 2α3 = c
After solving, we get
α1 = 3a − 7b − 5c
α2 = −a + 3b + 2c
α3 = 2a − 5b − 3c
T (a, b, c) = T (α1 u1 + α2 u2 + α3 u3 )
= α1 T (u1 ) + α2 T (u2 ) + α3 T (u3 )
= (3a − 7b − 5c)(1, 0) + (−a + 3b + 2c)(0, 1) + (2a − 5b − 3c)(1, 1)
= (5a − 12b − 8c, a − 2b − c).
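The same computation can be organized as solving a linear system: express (a, b, c) in the basis {u1, u2, u3}, then map the coordinates. A sketch (assuming NumPy) that reproduces the formula above:

    import numpy as np

    U = np.column_stack([(1, 1, -1), (4, 1, 1), (1, -1, 2)])  # u1, u2, u3 as columns
    W = np.column_stack([(1, 0), (0, 1), (1, 1)])             # T(u1), T(u2), T(u3)

    def T(x):
        alpha = np.linalg.solve(U, np.asarray(x, dtype=float))  # coordinates of x
        return W @ alpha                                        # sum alpha_i T(u_i)

    print(T((1, 0, 0)))   # (5, 1): first column of the matrix of T
    print(T((0, 1, 0)))   # (-12, -2), agreeing with (5a - 12b - 8c, a - 2b - c)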
Example 3.13. Let T : R3 −→ R3 be a linear transformation such that
T (1, 0, 0) = (2, 4, −1), T (0, 1, 0) = (1, 3, −2), T (0, 0, 1) = (0, −2, 2).
Compute T (−2, 4, −1).
Solution: We have
(−2, 4, −1) = −2(1, 0, 0) + 4(0, 1, 0) − (0, 0, 1)
∴ T (−2, 4, −1) = −2T (1, 0, 0) + 4T (0, 1, 0) − T (0, 0, 1)
= −2(2, 4, −1) + 4(1, 3, −2) − (0, −2, 2)
= (0, 6, −8).
Example 3.14. Let T : R3 −→ R2 be the linear transformation such that T(1, 0, 0) = (1, 1), T(0, 1, 0) = (3, 0), T(0, 0, 1) = (4, −7). Find T(a, b, c) and hence obtain T(1, −2, 3).
Solution: We have,
(a, b, c) = a(1, 0, 0) + b(0, 1, 0) + c(0, 0, 1)
∴ T (a, b, c) = aT (1, 0, 0) + bT (0, 1, 0) + cT (0, 0, 1)
= a(1, 1) + b(3, 0) + c(4, −7)
= (a + 3b + 4c, a − 7c)
∴ T (1, −2, 3) = (1 − 6 + 12, 1 − 21)
= (7, −20).
Example 3.15. Let
A = ( −1 2 1 3 4 ; 0 0 2 −1 0 ).
Let T : R5 −→ R2 be the linear transformation such that T(x) = Ax. Compute T(1, 0, −1, 3, 0).
Solution: Since T(x) = Ax,
T(1, 0, −1, 3, 0) = ( −1 2 1 3 4 ; 0 0 2 −1 0 ) (1, 0, −1, 3, 0)^t = (7, −5)^t.
∴ T(1, 0, −1, 3, 0) = (7, −5).
Example 3.16. Let T : R2 −→ R2 be a linear transformation such that T (1, 1) =
(0, 2) and T (1, −1) = (2, 0). Compute T (1, 4) and T (−2, 1).
Solution: We have
(1, 4) = a(1, 1) + b(1, −1).
Solving, we get
a = 5/2, b = −3/2
(1, 4) = (5/2)(1, 1) − (3/2)(1, −1)
T(1, 4) = T[(5/2)(1, 1) − (3/2)(1, −1)]
        = (5/2)T(1, 1) − (3/2)T(1, −1)
        = (5/2)(0, 2) − (3/2)(2, 0)
        = (−3, 5)
Similarly,
(−2, 1) = a(1, 1) + b(1, −1).
Solving, we get
(−2, 1) = −(1/2)(1, 1) − (3/2)(1, −1)
T(−2, 1) = −(1/2)T(1, 1) − (3/2)T(1, −1)
         = −(1/2)(0, 2) − (3/2)(2, 0)
         = (−3, −1).
Exercise: 3.2
(1) Consider the basis S = {v1 , v2 , v3 } for R3 , where v1 = (1, 1, 1), v2 = (1, 1, 0),
v3 = (1, 0, 0) and let T : R3 −→ R2 be the linear transformation such that
T (v1 ) = (1, 0), T (v2 ) = (2, −1), T (v3 ) = (4, 3). Find a formula T (x1 , x2 , x3 ) and
use it to compute T (2, −3, 5).
(2) Consider the basis S = {v1 , v2 } for R2 , where v1 = (1, 1), v2 = (1, 0) and let
T : R2 −→ R2 be the linear operator such that T (v1 ) = (1, −2), T (v2 ) =
(−4, 1). Find a formula T (x1 , x2 ) and use it to compute T (5, −3).
(3) Consider the basis S = {v1 , v2 } for R2 , where v1 = (−2, 1), v2 = (1, 3) and
let T : R2 −→ R3 be the linear transformation such that T (v1 ) = (−1, 2, 0),
T (v2 ) = (0, −3, 5). Find a formula T (x1 , x2 ) and use it to compute T (2, −3).
(4) Consider the basis S = {v1 , v2 , v3 } for R3 , where v1 = (1, 1, 1), v2 = (1, 1, 0),
v3 = (1, 0, 0) and let T : R3 −→ R3 be linear operator such that T (v1 ) =
(2, −1, 4), T (v2 ) = (3, 0, 1), T (v3 ) = (−1, 5, 1). Find a formula T (x1 , x2 , x3 ) and
use it to compute T (2, 4, −1).
(5) Consider the basis S = {v1 , v2 , v3 } for R3 , where v1 = (1, 2, 1), v2 = (2, 9, 0),
v3 = (3, 3, 4) and let T : R3 −→ R2 be linear transformation such that T (v1 ) =
(1, 0), T (v2 ) = (−1, 1), T (v3 ) = (0, 1). Find a formula T (x1 , x2 , x3 ) and use it
to compute T (7, 13, 7).
(6) Let v1, v2, v3 be vectors in a vector space V and let T : V −→ R3 be a linear transformation for which T(v1) = (1, −1, 2), T(v2) = (0, 3, 2), T(v3) = (−3, 1, 2). Find T(2v1 − 3v2 + 4v3).
3.5 Kernel and Rank of Linear Transformation
Definition 3.6. Kernel of a Linear Transformation
Let T : V −→ W be linear transformation, then the set of all vectors in V that T
maps to 0̄ is called Kernel (or null space) of T .
It is denoted by Ker(T )
∴ Ker(T ) = {u ∈ V /T (u) = 0̄}.
Definition 3.7. Range of Linear Transformation
Let T : V −→ W be linear transformation, then the set of all vectors in W that are
images under T of at least one vector in V is called range of T .
It is denoted by R(T )
∴ R(T ) = {w ∈ W/w = T (u), f or some u ∈ V }.
Note: (i) If T : V −→ W is the zero linear transformation then Ker(T) = V and R(T) = {0̄}.
(ii) If I : V −→ V is the identity transformation then Ker(I) = {0̄} and R(I) = V .
Definition 3.8. Rank and Nullity of Linear Transformation
Let T : V −→ W be linear transformation. The dimension of range of T is called
the rank of T and it is denoted by rank(T ).
The dimension of kernel of T is called the nullity of T and it is denoted by nullity(T ).
Theorem 3.4. Let T : V −→ W be linear transformation then
(a) The kernel of T is a subspace of V.
(b) The range of T is a subspace of W.
Proof. (a) Since, T (0) = 0̄, 0̄ ∈ ker(T ).
∴ ker(T ) is non empty.
For u, v in ker(T ),
T (u) = 0̄, T (v) = 0̄
then
T (u + v) = T (u) + T (v)
= 0̄ + 0̄
= 0̄
∴ u + v ∈ ker(T ).
Also, for any scalar k
T (ku) = kT (u)
= k.0̄
= 0̄
∴ ku ∈ ker(T ).
∴ The kernel of T is a subspace of V.
(b) Let w1 , w2 be any vectors in R(T ), then there are u1 , u2 in V such that
T (u1 ) = w1 and T (u2 ) = w2 .
Since, u1 , u2 in V , u1 + u2 is in V .
∴ T (u1 + u2 ) ∈ R(T ).
Consider
T (u1 + u2 ) = T (u1 ) + T (u2 )
= w1 + w2
T (u1 + u2 ) = w1 + w2
Now w1 + w2 is in R(T). Also, for any scalar k, ku1 is in V , so T(ku1) ∈ R(T).
Consider
T(ku1) = kT(u1) = kw1
∴ kw1 ∈ R(T).
∴ The range of T is a subspace of W.
Theorem 3.5. A linear transformation T : V −→ W is injective if and only if ker(T) = {0̄}, that is, if and only if nullity(T) = 0.
Proof. step(I): Assume that linear transformation T : V −→ W is injective.
Claim: ker(T) = {0̄}.
Let u ∈ ker(T). Then
T(u) = 0̄ = T(0̄),
and since T is injective, u = 0̄. As 0̄ always belongs to ker(T), we conclude
ker(T) = {0̄}.
Step(II): Assume that for the linear transformation T : V −→ W , ker(T) = {0̄}.
Claim: T : V −→ W is injective.
For u, v ∈ V , consider
T (u) = T (v)
T (u) − T (v) = 0̄
T (u − v) = 0̄
(T is linear transformation)
u − v ∈ ker(T )
u − v = 0̄   (ker(T) = {0̄})
u = v.
∴ T : V −→ W is injective.
Theorem 3.6. Let the linear transformation T : V −→ W be injective and let {u1, u2, · · · , uk} be a set of linearly independent vectors in V . Then {T(u1), T(u2), · · · , T(uk)} is also linearly independent.
Proof. Suppose, for scalars α1, α2, · · · , αk, we have
α1 T (u1 ) + α2 T (u2 ) + · · · + αk T (uk ) = 0
T (α1 u1 + α2 u2 + · · · + αk uk ) = 0
(T is L.T.)
T (α1 u1 + α2 u2 + · · · + αk uk ) = T (0)
α1 u1 + α2 u2 + · · · + αk uk = 0
(T is injective)
α1 = 0, α2 = 0, · · · , αk = 0
(u1 , u2 , · · · , uk are linearly independent vectors)
∴ {T (u1 ), T (u2 ), · · · , T (uk )} is linearly independent.
Result(1): If T : Rn −→ Rm is multiplication by an m × n matrix A, then
(a) The kernel of T is the null space of A.
(b) The range of T is the column space of A.
Result(2): If T : Rn −→ Rm is multiplication by an m × n matrix A, then
(a) nullity(T ) = nullity(A).
(b) rank(T ) = rank(A).
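These results make rank and nullity directly computable from any matrix that represents T . As a rough numerical sketch (ours, not part of the text), the relation rank(T ) + nullity(T ) = n of the next theorem can be checked in Python with NumPy for a sample matrix:

import numpy as np

# A sample 2 x 3 matrix; T(x) = Ax is the associated map R^3 -> R^2.
A = np.array([[1, 2, 0],
              [2, 4, 1]], dtype=float)

rank = np.linalg.matrix_rank(A)   # rank(T) = rank(A)
nullity = A.shape[1] - rank       # nullity(T) = n - rank(A)
print(rank, nullity)              # prints: 2 1, and 2 + 1 = 3 = dim R^3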
Theorem 3.7. Dimension Theorem for Linear Transformation
Let T : V −→ W be a linear transformation from an n-dimensional vector space V to a vector space W . Then
rank(T ) + nullity(T ) = n,
or equivalently,
dim R(T ) + dim ker(T ) = dim V.
Proof. Let {u1 , u2 , · · · , ur } be a basis of ker(T ), where r = dim ker(T ) ≤ n, since ker(T ) is a subspace of V .
Then
nullity(T ) = r ≤ n.
Since {u1 , u2 , · · · , ur } is a basis for a subspace of V , it is linearly independent and can therefore be extended to a basis
B = {u1 , u2 , · · · , ur , ur+1 , · · · , ur+k } for V.
∴ r + k = n (∵ dim V = n).
We shall prove that the k elements of the set
S = {T (ur+1 ), T (ur+2 ), · · · , T (ur+k )}
form a basis for R(T ), that is, dim R(T ) = k, or rank(T ) = k.
Claim(I) S Spans R(T ):
Let w be any element of R(T ); then there is u ∈ V such that T (u) = w.
Since B is a basis for V , u can be expressed as a linear combination of elements of B:
u = α1 u1 + α2 u2 + · · · + αr ur + β1 ur+1 + β2 ur+2 + · · · + βk ur+k   (3.6)
where α1 , α2 , · · · , αr , β1 , β2 , · · · , βk are scalars.
Applying T to both sides of equation (3.6), we have
w = T (u)
= T (α1 u1 + α2 u2 + · · · + αr ur + β1 ur+1 + β2 ur+2 + · · · + βk ur+k )
= α1 T (u1 ) + α2 T (u2 ) + · · · + αr T (ur ) + β1 T (ur+1 ) + β2 T (ur+2 ) + · · · + βk T (ur+k )
= β1 T (ur+1 ) + β2 T (ur+2 ) + · · · + βk T (ur+k ),
since T (u1 ) = T (u2 ) = · · · = T (ur ) = 0̄, as u1 , u2 , · · · , ur are in ker(T ).
Thus, w is a linear combination of the elements of S.
∴ S spans R(T ).
Claim(II) S is linearly independent:
Suppose that for scalars β1 , β2 , · · · , βk ,
β1 T (ur+1 ) + β2 T (ur+2 ) + · · · + βk T (ur+k ) = 0̄.
Then T (β1 ur+1 + β2 ur+2 + · · · + βk ur+k ) = 0̄,
∴ u = β1 ur+1 + β2 ur+2 + · · · + βk ur+k ∈ ker(T ).   (3.7)
Since {u1 , u2 , · · · , ur } is a basis of ker(T ), for this u ∈ ker(T ) we have
u = α1 u1 + α2 u2 + · · · + αr ur   (3.8)
From equations (3.7) and (3.8), we get
α1 u1 + α2 u2 + · · · + αr ur = β1 ur+1 + β2 ur+2 + · · · + βk ur+k
α1 u1 + α2 u2 + · · · + αr ur + (−β1 )ur+1 + (−β2 )ur+2 + · · · + (−βk )ur+k = 0̄   (3.9)
But B = {u1 , u2 , · · · , ur , ur+1 , · · · , ur+k } is a basis for V , hence linearly independent.
Then from equation (3.9), we get
α1 = α2 = · · · = αr = β1 = β2 = · · · = βk = 0.
In particular, β1 = β2 = · · · = βk = 0.
∴ S is Linearly independent.
∴ S is a basis for R(T ).
∴ dimR(T ) = rank(T ) = k.
∴ rank(T ) + nullity(T ) = n.
Hence, proof is completed.
Definition 3.9. Nonsingular Linear Transformation
A linear transformation T : V −→ W is said to be nonsingular if and only if it is
bijective; otherwise it is called singular linear transformation.
Theorem 3.8. A linear transformation T : V −→ W is nonsingular iff the set
{T (u1 ), T (u2 ), · · · , T (un )} is a basis of W whenever the set {u1 , u2 , · · · , un } is a
basis of V .
Proof. Step(I)
Given: T : V −→ W is nonsingular linear transformation and the set
{u1 , u2 , · · · , un } is a basis of V.
Claim: Set {T (u1 ), T (u2 ), · · · , T (un )} is a basis of W .
Since T : V −→ W is nonsingular, T is bijective; in particular, T is injective.
Since T is injective and {u1 , u2 , · · · , un } is linearly independent in V , by Theorem 3.6,
{T (u1 ), T (u2 ), · · · , T (un )} is linearly independent in W.
Also, T is surjective.
∴ For w ∈ W , there exists u ∈ V such that
w = T (u).
But {u1 , u2 , · · · , un } is a basis of V .
∴ For u ∈ V , we have
u = α1 u1 + α2 u2 + · · · + αn un
T (u) = T (α1 u1 + α2 u2 + · · · + αn un )
T (u) = α1 T (u1 ) + α2 T (u2 ) + · · · + αn T (un )
w = α1 T (u1 ) + α2 T (u2 ) + · · · + αn T (un ).
∴ w is a linear combination of T (u1 ), T (u2 ), · · · , T (un ).
∴ {T (u1 ), T (u2 ), · · · , T (un )} spans W.
∴ {T (u1 ), T (u2 ), · · · , T (un )} is a basis of W.
Step(II)
Given: If a set {u1 , u2 , · · · , un } is a basis of V then the set {T (u1 ), T (u2 ), · · · , T (un )}
is a basis for W .
Claim: T : V −→ W is nonsingular linear transformation.
Since, {u1 , u2 , · · · , un } is a basis of V .
Therefore, for any two elements u, v ∈ V , we have
u = α1 u1 + α2 u2 + · · · + αn un
v = β1 u1 + β2 u2 + · · · + βn un .
Consider,
T (u) = T (v)
T (u) − T (v) = 0̄
T (u − v) = 0̄
T [(α1 u1 + α2 u2 + · · · + αn un ) − (β1 u1 + β2 u2 + · · · + βn un )] = 0̄
T [(α1 − β1 )u1 + (α2 − β2 )u2 + · · · + (αn − βn )un ] = 0̄
(α1 − β1 )T (u1 ) + (α2 − β2 )T (u2 ) + · · · + (αn − βn )T (un ) = 0̄
Since {T (u1 ), T (u2 ), · · · , T (un )} is a basis of W , it is linearly independent, so
α1 − β1 = 0, α2 − β2 = 0, · · · , αn − βn = 0
α1 = β1 , α2 = β2 , · · · , αn = βn .
∴ u=v
T : V −→ W is injective.
Since the set {T (u1 ), T (u2 ), · · · , T (un )} is a basis of W , for any w ∈ W we have
w = α1 T (u1 ) + · · · + αn T (un )
= T (α1 u1 + · · · + αn un )
= T (u), where u = α1 u1 + · · · + αn un ∈ V.
Therefore, for any w ∈ W there is u ∈ V such that w = T (u).
∴ T : V −→ W is surjective.
∴ T : V −→ W is bijective.
∴ T : V −→ W is nonsingular.
Hence, proof is completed.
Example 3.17. Let T : R4 −→ R3 be the linear transformation given by the formula
T (x1 , x2 , x3 , x4 ) = (4x1 + x2 − 2x3 − 3x4 , 2x1 + x2 + x3 − 4x4 , 6x1 − 9x3 + 9x4 )
(i) Which of the following vectors are in ker(T )?
(a) (3, −8, 2, 0) , (b) (0, 0, 0, 1) , (c) (0, −4, 1, 0)
(ii) Which of the following vectors are in R(T )?
(a) (0, 0, 6) , (b) (1, 3, 0) , (c) (2, 4, 1).
Solution: (i) If u = (x1 , x2 , x3 , x4 ) is in ker(T ) then by definition T (u) = 0̄.
∴ (4x1 + x2 − 2x3 − 3x4 , 2x1 + x2 + x3 − 4x4 , 6x1 − 9x3 + 9x4 ) = (0, 0, 0)
Now, equating the corresponding components, we get
4x1 + x2 − 2x3 − 3x4 = 0
2x1 + x2 + x3 − 4x4 = 0
6x1 − 9x3 + 9x4 = 0
Here, (3, −8, 2, 0) satisfies the above equations.
∴ (3, −8, 2, 0) ∈ ker(T ).
But (0, 0, 0, 1) and (0, −4, 1, 0) do not satisfy them, so they are not in ker(T ).
(ii) Here, for w = (a, b, c) ∈ R3 , suppose there is a u = (x1 , x2 , x3 , x4 ) ∈ R4 such
that T (u) = (a, b, c).
∴ (4x1 + x2 − 2x3 − 3x4 , 2x1 + x2 + x3 − 4x4 , 6x1 − 9x3 + 9x4 ) = (a, b, c)
This gives the system of linear equations
4x1 + x2 − 2x3 − 3x4 = a
2x1 + x2 + x3 − 4x4 = b
6x1 − 9x3 + 9x4 = c
This system is consistent for every right-hand side, since the coefficient matrix has rank 3 (verify). Therefore, for each element (a, b, c) ∈ R3 there is u = (x1 , x2 , x3 , x4 ) ∈ R4 such that T (u) = (a, b, c).
Therefore, the given vectors (0, 0, 6), (1, 3, 0) and (2, 4, 1) are in R(T ) = R3 .
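The membership tests of Example 3.17 are easy to automate: u lies in ker(T ) exactly when Au = 0̄, and w lies in R(T ) exactly when Ax = w is consistent. A short sketch of this check in Python with NumPy (our code, using the coefficient matrix above):

import numpy as np

A = np.array([[4, 1, -2, -3],
              [2, 1, 1, -4],
              [6, 0, -9, 9]], dtype=float)

# (i) kernel test: u is in ker(T) exactly when A u = 0
for u in [(3, -8, 2, 0), (0, 0, 0, 1), (0, -4, 1, 0)]:
    print(u, np.allclose(A @ np.array(u, dtype=float), 0.0))
# (3, -8, 2, 0) -> True; the other two -> False

# (ii) range test: since rank(A) = 3, A x = w is solvable for every w,
# so a least-squares solution is in fact an exact solution.
w = np.array([2, 4, 1], dtype=float)
x, *_ = np.linalg.lstsq(A, w, rcond=None)
print(np.allclose(A @ x, w))      # True: (2, 4, 1) is in R(T)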
Exercise: 3.3
(1) Let T : R2 −→ R2 be the linear operator given by the formula
T (x, y) = (2x − y, −8x + 4y).
(i) Which of the following vectors are in ker(T )?
(a) (5, 10) , (b) (3, 2) , (c) (1, 1)
(ii) Which of the following vectors are in R(T )?
(a) (1, −4) , (b) (5, 0) , (c) (−3, 12).
(2) Let T : R3 −→ R3 be the linear transformation given by the formula
T (x, y, z) = (x + y − z, x − 2y + z, −2x − 2y + 2z).
(i) Which of the following vectors are in ker(T )?
(a) (1, 2, 3) , (b) (1, 2, 1) , (c) (−1, 1, 2)
(ii) Which of the following vectors are in R(T )?
(a) (1, 2, −2) , (b) (3, 5, 2) , (c) (−2, 3, 4).
(3) Let T : P2 −→ P3 be the linear transformation given by the formula
T (p(x)) = xp(x).
(i) Which of the following vectors are in ker(T )?
(a) x2 , (b) 0 , (c) 1 + x
(ii) Which of the following vectors are in R(T )?
(a) x + x2 , (b) 1 + x , (c) 3 − x2 .
(4) Find a basis for the kernel of
(a) the linear operator in problem (1)
(b) the linear transformation in problem (2)
(c) the linear transformation in problem (3).
(5) Find a basis for the range of
(a) the linear operator in problem (1)
(b) the linear transformation in problem (2)
(c) the linear transformation in problem (3).
(6) Verify the dimension theorem for
(a) the linear operator in problem (1)
(b) the linear transformation in problem (2)
(c) the linear transformation in problem (3).
(7) In the following cases (i) to (iv), let T be multiplication by the matrix A.
Find (a) a basis for the range of T
(b) a basis for the kernel of T
(c) the rank and nullity of T
(d) the rank and nullity of A

(i) A = \begin{pmatrix} 1 & -1 & 3 \\ 5 & 6 & -4 \\ 7 & 4 & 2 \end{pmatrix} , (ii) A = \begin{pmatrix} 2 & 0 & -1 \\ 4 & 0 & -2 \\ 0 & 0 & 0 \end{pmatrix} , (iii) A = \begin{pmatrix} 4 & 1 & 5 & 2 \\ 1 & 2 & 3 & 0 \end{pmatrix} ,

(iv) A = \begin{pmatrix} 1 & 4 & 5 & 0 & 9 \\ 3 & -2 & 1 & 0 & -1 \\ -1 & 0 & -1 & 0 & -1 \\ 2 & 3 & 5 & 1 & 8 \end{pmatrix} .
(8) In each part, find the nullity of T :
(a) T : R5 −→ R7 has rank 3.
(b) T : P4 −→ P3 has rank 1.
(c) The range of T : R6 −→ R3 is R3 .
(d) T : M22 −→ M22 has rank 3.
3.6 Composite Linear Transformation
Definition 3.10. Let T1 : U −→ V and T2 : V −→ W be two linear transformations. Then the composite transformation of T1 and T2 , denoted by T2 ◦ T1 : U −→ W , is defined by
(T2 ◦ T1 )(u) = T2 [T1 (u)], for all u ∈ U.
Theorem 3.9. Let T1 : U −→ V and T2 : V −→ W be two linear transformations
then the composite transformation T2 ◦ T1 : U −→ W is also a linear transformation.
Proof. For u, v ∈ U and any scalar k, we have
T2 ◦ T1 (u + v) = T2 [T1 (u + v)]
= T2 [T1 (u) + T1 (v)]
= T2 [T1 (u)] + T2 [T1 (v)]
= (T2 ◦ T1 )(u) + (T2 ◦ T1 )(v)
Also, (T2 ◦ T1 )(ku) = T2 [T1 (ku)]
= T2 [kT1 (u)]
= kT2 [T1 (u)]
= k(T2 ◦ T1 )(u)
Therefore, T2 ◦ T1 : U −→ W is a linear transformation.
Example 3.18. Find domain, codomain of T2 ◦ T1 and compute (T2 ◦ T1 )(x, y, z) if
T1 (x, y, z) = (x − y, y + z, x − z), T2 (x, y, z) = (0, x + y + z).
Solution: We have
T1 (x, y, z) = (x − y, y + z, x − z), T2 (x, y, z) = (0, x + y + z).
∴ T1 : R3 −→ R3 and T2 : R3 −→ R2 .
∴ T2 ◦ T1 : R3 −→ R2 .
∴ Domain of T2 ◦ T1 is R3 and codomain of T2 ◦ T1 is R2 .
Also
(T2 ◦ T1 )(x, y, z) = T2 [T1 (x, y, z)]
= T2 (x − y, y + z, x − z)
= (0, x − y + y + z + x − z)
= (0, 2x).
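When T1 and T2 are represented by matrices, composition corresponds to matrix multiplication (this is taken up in Section 3.8.2). As a sketch, the standard matrices below are read off from the formulas of Example 3.18; the product reproduces (T2 ◦ T1 )(x, y, z) = (0, 2x):

import numpy as np

# Standard matrices read off from T1(x,y,z) = (x-y, y+z, x-z)
# and T2(x,y,z) = (0, x+y+z).
A1 = np.array([[1, -1, 0],
               [0, 1, 1],
               [1, 0, -1]], dtype=float)
A2 = np.array([[0, 0, 0],
               [1, 1, 1]], dtype=float)

# The composite T2 o T1 is multiplication by the product A2 A1.
print(A2 @ A1)    # [[0. 0. 0.]
                  #  [2. 0. 0.]]  ->  (T2 o T1)(x,y,z) = (0, 2x)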
Exercise: 3.4
(1) Find domain and codomain of T2 ◦ T1 and find (T2 ◦ T1 )(x, y) if
(a) T1 (x, y) = (2x, 3y), T2 (x, y) = (x − y, x + y).
(b) T1 (x, y) = (x − 3y, 0), T2 (x, y) = (4x − 5y, 3x − 6y).
(c) T1 (x, y) = (2x, −3y, x + y), T2 (x, y, z) = (x − y, y + z).
(d) T1 (x, y, z) = (x − y, y + z, x − z), T2 (x, y, z) = (0, x + y + z).
(2) Find domain and codomain of T3 ◦ T2 ◦ T1 and find (T3 ◦ T2 ◦ T1 )(x, y) if
(a) T1 (x, y) = (−2y, 3x, x−2y), T2 (x, y, z) = (y, z, x), T3 (x, y, z) = (x+z, y −z).
(b) T1 (x, y) = (x + y, y, −x), T2 (x, y, z) = (0, x + y + z, 3y),
T3 (x, y, z) = (3x + 2y, 4z − x − 3y).
(3) Let T1 : Pn −→ Pn and T2 : Pn −→ Pn be the linear operators given by T1 (p(x)) = p(x − 1) and T2 (p(x)) = p(x + 1). Find (T1 ◦ T2 )(p(x)) and (T2 ◦ T1 )(p(x)).
(4) Let the linear transformations T1 : P2 −→ P2 and T2 : P2 −→ P3 be defined by T1 (p(x)) = p(x + 1) and T2 (p(x)) = xp(x). Find (T2 ◦ T1 )(a0 + a1 x + a2 x2 ).
3.7 Inverse of a Linear Transformation
Definition 3.11. Let T1 : V −→ W be a linear transformation. If there exists a linear transformation T2 : W −→ V such that T2 ◦ T1 = IV , where IV is the identity on V , then T2 is called a left inverse of T1 .
Also, if T1 ◦ T2 = IW , where IW is the identity on W , then T2 is called a right inverse of T1 .
If T2 is both a left and a right inverse of T1 , then T2 is called the inverse of T1 .
Theorem 3.10. A linear transformation T : V −→ W has an inverse if and only
if it is bijective.
Proof. Step(I) Given: A linear transformation T : V −→ W has an inverse.
Claim: T : V −→ W is bijective.
Since the linear transformation T : V −→ W has an inverse, say T −1 : W −→ V ,
∴ T ◦ T −1 = IW and T −1 ◦ T = IV .
For u, v ∈ V , we have
T (u) = T (v)
T −1 ◦ T (u) = T −1 [T (u)]
= T −1 [T (v)]
= T −1 ◦ T (v)
T −1 ◦ T (u) = T −1 ◦ T (v)
IV (u) = IV (v)
∴ u = v.
∴ T : V −→ W is injective.
Now, for w ∈ W , T −1 (w) = u is in V . We have
(T ◦ T −1 )(w) = w
T [T −1 (w)] = w
T (u) = w.
∴ For w ∈ W there is u ∈ V such that T (u) = w.
∴ T : V −→ W is surjective.
∴ T : V −→ W is bijective.
Step(II) Given: T : V −→ W is bijective.
Claim: To prove a linear transformation T : V −→ W has an inverse.
Since T : V −→ W is surjective, for every element w ∈ W there is an element u ∈ V such that T (u) = w.
Also, T : V −→ W is injective.
∴ u ∈ V is uniquely determined by w ∈ W .
This correspondence of w in W to u in V defines a transformation T1 : W −→ V by
T1 (w) = u.
∴ (T1 ◦ T )(u) = T1 (T (u)) = u.
(T ◦ T1 )(w) = T (T1 (w)) = w.
∴ T1 is an inverse of T.
Therefore, proof of the theorem is completed.
Example 3.19. Let T : R2 −→ R2 be multiplication by A. Determine whether T has an inverse; if so, find T −1 \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} , if A = \begin{pmatrix} 1 & 1 \\ 1 & 0 \end{pmatrix} .
Solution: Since T : R2 −→ R2 is multiplication by A,
T (X) = AX
∴ T \begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} 1 & 1 \\ 1 & 0 \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} x + y \\ x \end{pmatrix} .
∴ T (x, y) = (x + y, x).
To show T is injective, i.e. to show ker(T ) = {0̄}, consider
T (x, y) = T (0, 0)
T (x, y) = (0, 0)
(x + y, x) = (0, 0)
∴ x + y = 0, x = 0
⇒ x = 0, y = 0
∴ (x, y) = (0, 0) ⇒ ker(T ) = {0̄}.
∴ T is injective.
Let (x1 , x2 ) be any vector in R2 . We must find (x, y) ∈ R2 such that
T (x, y) = (x1 , x2 ).
∴ (x + y, x) = (x1 , x2 ).
x + y = x1 and x = x2 .
∴ x = x2 and y = x1 − x2
∴ T is surjective and hence bijective.
∴ T has an inverse.
∴ T −1 \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \begin{pmatrix} x_2 \\ x_1 - x_2 \end{pmatrix} .
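Since T is multiplication by A, the inverse T −1 is multiplication by A−1 (see Section 3.8.3). A quick numerical cross-check of this example in Python (our sketch):

import numpy as np

A = np.array([[1, 1],
              [1, 0]], dtype=float)

A_inv = np.linalg.inv(A)   # det(A) = -1 != 0, so A is invertible
print(A_inv)               # [[ 0.  1.]
                           #  [ 1. -1.]]
# Rows give T^{-1}(x1, x2) = (x2, x1 - x2), matching the formula above.
print(A_inv @ np.array([3.0, 5.0]))   # [ 5. -2.] = T^{-1}(3, 5)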
Exercise: 3.5
(1) Let T : R2 −→ R2 be multiplication by A. Determine whether T has an inverse; if so, find T −1 \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} , where
(a) A = \begin{pmatrix} 5 & 2 \\ 2 & 1 \end{pmatrix} , (b) A = \begin{pmatrix} 6 & -3 \\ 4 & -2 \end{pmatrix} , (c) A = \begin{pmatrix} 4 & 7 \\ -1 & 3 \end{pmatrix} .
(2) Let T : R3 −→ R3 be multiplication by A. Determine whether T has an inverse; if so, find T −1 \begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} , where
(a) A = \begin{pmatrix} 1 & 5 & 2 \\ 1 & 2 & 1 \\ -1 & 1 & 0 \end{pmatrix} , (b) A = \begin{pmatrix} 1 & 4 & -1 \\ 1 & 2 & 1 \\ -1 & 1 & 0 \end{pmatrix} ,
(c) A = \begin{pmatrix} 1 & 0 & 1 \\ 0 & 1 & 1 \\ 1 & 1 & 0 \end{pmatrix} , (d) A = \begin{pmatrix} 1 & -1 & 1 \\ 0 & 2 & -1 \\ 2 & 3 & 0 \end{pmatrix} .
(3) Determine whether T (x, y) = (x + 2y, x − 2y) is invertible or not.
3.8 Matrix of a Linear Transformation
Definition 3.12. Matrix of a Linear Transformation: Let V and W be vector spaces over a field F of dimensions n and m respectively. Let B = {v1 , v2 , ..., vn } and B ′ = {w1 , w2 , ..., wm } be bases of V and W respectively. Let T : V → W be a linear transformation. Then for each i = 1, 2, ..., n, T (vi ) ∈ W and hence is a linear combination of the basis vectors of W . Therefore,
T (v1 ) = a11 w1 + a21 w2 + · · · + am1 wm
T (v2 ) = a12 w1 + a22 w2 + · · · + am2 wm
⋮
T (vn ) = a1n w1 + a2n w2 + · · · + amn wm   (3.10)
where aij ∈ F for i = 1, 2, ..., m and j = 1, 2, ..., n.
We define the matrix of T with respect to the bases B and B ′ , denoted by [T ]_B^{B′} , to be the transpose of the matrix of coefficients in equation (3.10); thus [T ]_B^{B′} is of order m × n, and its i th column consists of the coefficients of T (vi ) when expressed as a linear combination of the elements of B ′ . That is,

[T ]_B^{B′} = \begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{pmatrix}

Sometimes [T ]_B^{B′} is called the matrix associated with T , or the matrix representation of T with respect to the bases B and B ′ .
Example 3.20. Let T : R2 → R3 be a linear transformation defined by
T (x1 , x2 ) = (x2 , −5x1 + 13x2 , −7x1 + 16x2 ).
Find the matrix [T ]_B^{B′} , where B = {u1 , u2 } and B ′ = {v1 , v2 , v3 } are bases of R2 and R3 respectively, with u1 = (3, 1), u2 = (5, 2), v1 = (1, 0, −1), v2 = (−1, 2, 2) and v3 = (0, 1, 2).
Solution: From the formula of T , we have T (u1 ) = (1, −2, −5) and
T (u2 ) = (2, 1, −3).
Suppose (1, −2, −5) = αv1 + βv2 + γv3
= α(1, 0, −1) + β(−1, 2, 2) + γ(0, 1, 2)
= (α − β, 2β + γ, −α + 2β + 2γ).
Therefore, we must have
α−β =1
2β + γ = −2
−α + 2β + 2γ = −5
On solving, we get
α = 1, β = 0 and γ = −2.
∴ T (u1 ) = (1, −2, −5) = 1v1 + 0v2 + (−2)v3
Again, (2, 1, −3) = α′ v1 + β ′ v2 + γ ′ v3
= α′ (1, 0, −1) + β ′ (−1, 2, 2) + γ ′ (0, 1, 2)
= (α′ − β ′ , 2β ′ + γ ′ , −α′ + 2β ′ + 2γ ′ ).
Therefore, we must have
α′ − β ′ = 2
2β ′ + γ ′ = 1
−α′ + 2β ′ + 2γ ′ = −3
On solving, we get
α′ = 3, β ′ = 1 and γ ′ = −1.
∴ T (u2 ) = (2, 1, −3) = 3v1 + 1v2 + (−1)v3 .
Thus, the matrix representation of T with respect to the bases B and B ′ is given by
[T ]_B^{B′} = \begin{pmatrix} 1 & 3 \\ 0 & 1 \\ -2 & -1 \end{pmatrix} .
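Each column of [T ]_B^{B′} is found by solving a small linear system, so the whole computation can be delegated to a solver. A sketch of Example 3.20 in Python with NumPy (variable names are ours):

import numpy as np

# Columns of M are the B'-basis vectors v1, v2, v3 of R^3.
M = np.array([[1, -1, 0],
              [0, 2, 1],
              [-1, 2, 2]], dtype=float)

# Images of the B-basis vectors u1 = (3,1) and u2 = (5,2) under T.
Tu1 = np.array([1, -2, -5], dtype=float)
Tu2 = np.array([2, 1, -3], dtype=float)

# The B'-coordinates of T(ui) solve M c = T(ui); stacking the
# solutions as columns gives the matrix [T]_B^{B'}.
cols = np.linalg.solve(M, np.column_stack([Tu1, Tu2]))
print(cols)   # [[ 1.  3.]
              #  [ 0.  1.]
              #  [-2. -1.]]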
Example 3.21. Let T : R3 → R2 be a linear transformation defined by
T (x, y, z) = (x + y + z, y + z).
Find the matrix A of T with respect to standard bases of R3 and R2 .
Solution: The standard basis for R3 is B = {e1 = (1, 0, 0), e2 = (0, 1, 0),
e3 = (0, 0, 1)} and the standard basis for R2 is B ′ = {e′1 = (1, 0), e′2 = (0, 1)}.
By definition of T , we have
T (1, 0, 0) = (1, 0) = 1.(1, 0) + 0.(0, 1)
T (0, 1, 0) = (1, 1) = 1.(1, 0) + 1.(0, 1)
T (0, 0, 1) = (1, 1) = 1.(1, 0) + 1.(0, 1)
From this, the matrix of T with respect to the standard bases B and B ′ is given by
[T ]_B^{B′} = A = \begin{pmatrix} 1 & 1 & 1 \\ 0 & 1 & 1 \end{pmatrix} .
Example 3.22. Let D : P3 → P2 be the linear transformation given by D(p) = dp/dx . Find the matrix of D relative to the standard bases B1 = {1, x, x2 , x3 } and B2 = {1, x, x2 } of P3 and P2 respectively.
Solution: We have
D(1) = 0 = 0.1 + 0.x + 0.x2
D(x) = 1 = 1.1 + 0.x + 0.x2
D(x2 ) = 2x = 0.1 + 2.x + 0.x2
D(x3 ) = 3x2 = 0.1 + 0.x + 3.x2
From this, the matrix of D with respect to the standard bases B1 and B2 is given by
[D]_{B_1}^{B_2} = A = \begin{pmatrix} 0 & 1 & 0 & 0 \\ 0 & 0 & 2 & 0 \\ 0 & 0 & 0 & 3 \end{pmatrix} .
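The matrix of D acts on coordinate vectors: multiplying [D] by the B1 -coordinates of p gives the B2 -coordinates of dp/dx (this is the content of Theorem 3.11 below). A small sketch:

import numpy as np

D = np.array([[0, 1, 0, 0],
              [0, 0, 2, 0],
              [0, 0, 0, 3]], dtype=float)

# Coordinates of p(x) = 2 + 3x - x^2 + 4x^3 relative to {1, x, x^2, x^3}.
p = np.array([2, 3, -1, 4], dtype=float)
print(D @ p)   # [ 3. -2. 12.]  ->  p'(x) = 3 - 2x + 12x^2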
Example 3.23. Let T : R3 → R2 be a linear transformation defined by
T (x, y, z) = (2x + y − z, 3x − 2y + 4z).
Find the matrix A of T with respect to the bases B1 = {(1, 1, 1), (1, 1, 0), (1, 0, 0)} and B2 = {(1, 3), (1, 4)} of R3 and R2 respectively.
Solution: Let (a, b) ∈ R2 be such that
(a, b) = c(1, 3) + d(1, 4)
= (c + d, 3c + 4d)
∴ a = c + d and b = 3c + 4d
Solving for c and d we get,
c = 4a − b and d = b − 3a
∴ (a, b) = (4a − b)(1, 3) + (b − 3a)(1, 4)
Now,
T (x, y, z) = (2x + y − z, 3x − 2y + 4z)
∴ T (1, 1, 1) = (2, 5) = (4 × 2 − 5)(1, 3) + (5 − 3 × 2)(1, 4)
= 3(1, 3) + (−1)(1, 4)
Similarly,
T (1, 1, 0) = (3, 1) = 11(1, 3) + (−8)(1, 4)
and T (1, 0, 0) = (2, 3) = 5(1, 3) + (−3)(1, 4)
From this, the matrix of T with respect to the bases B1 and B2 is given by
[T ]_{B_1}^{B_2} = A = \begin{pmatrix} 3 & 11 & 5 \\ -1 & -8 & -3 \end{pmatrix} .
Example 3.24. Let V = M2×2 (R) be the vector space of all 2 × 2 matrices with real entries. Let T be the operator on V defined by T (X) = AX, where A = \begin{pmatrix} 1 & 1 \\ 1 & 1 \end{pmatrix} . Find the matrix of T with respect to the basis
B = \left\{ A_1 = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix} , A_2 = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix} , A_3 = \begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix} , A_4 = \begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix} \right\} of V.
Solution: By definition of T we have
T (A1 ) = \begin{pmatrix} 1 & 1 \\ 1 & 1 \end{pmatrix} \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 1 & 0 \end{pmatrix} = 1A_1 + 0A_2 + 1A_3 + 0A_4
T (A2 ) = \begin{pmatrix} 1 & 1 \\ 1 & 1 \end{pmatrix} \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix} = \begin{pmatrix} 0 & 1 \\ 0 & 1 \end{pmatrix} = 0A_1 + 1A_2 + 0A_3 + 1A_4
T (A3 ) = \begin{pmatrix} 1 & 1 \\ 1 & 1 \end{pmatrix} \begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 1 & 0 \end{pmatrix} = 1A_1 + 0A_2 + 1A_3 + 0A_4
T (A4 ) = \begin{pmatrix} 1 & 1 \\ 1 & 1 \end{pmatrix} \begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix} = \begin{pmatrix} 0 & 1 \\ 0 & 1 \end{pmatrix} = 0A_1 + 1A_2 + 0A_3 + 1A_4
Thus, we get the required matrix of T with respect to the basis B of V :
[T ]_B = \begin{pmatrix} 1 & 0 & 1 & 0 \\ 0 & 1 & 0 & 1 \\ 1 & 0 & 1 & 0 \\ 0 & 1 & 0 & 1 \end{pmatrix} .
Example 3.25. Let V = P2 be the vector space of all polynomials of degree less than or equal to 2 with real coefficients. Let T be the operator defined on V for which
T (1) = 1 + x, T (x) = 3 − x2 and T (x2 ) = 4 + 2x − 3x2 . Find the formula for
T (a0 + a1 x + a2 x2 ) and use it to find T (2 − 2x + 3x2 ). Further, find the matrix of T
with respect to the basis B = {2, 1 + x, 1 + x2 } of V .
Solution: By the linearity of T , we have
T (a0 + a1 x + a2 x2 ) = a0 T (1) + a1 T (x) + a2 T (x2 )
= a0 (1 + x) + a1 (3 − x2 ) + a2 (4 + 2x − 3x2 )
= (a0 + 3a1 + 4a2 ) + (a0 + 2a2 )x + (−a1 − 3a2 )x2
Using this formula, we have
T (2 − 2x + 3x2 ) = [2 + 3(−2) + 4(3)] + [2 + 2(3)]x + [−(−2) − 3(3)]x2
= 8 + 8x − 7x2 .
This is the required value of T (2 − 2x + 3x2 ).
Now, we find the matrix of T with respect to the given basis B = {2, 1 + x, 1 + x2 }.
From the formula for T , we have
T (2) = 2 + 2x = 0(2) + 2(1 + x) + 0(1 + x2 )
T (1 + x) = 4 + x − x2 = 2(2) + 1(1 + x) + (−1)(1 + x2 )
T (1 + x2 ) = 5 + 3x − 3x2 = (5/2)(2) + 3(1 + x) + (−3)(1 + x2 )
Thus, we get the required matrix of T with respect to the basis B:
[T ]_B = \begin{pmatrix} 0 & 2 & 5/2 \\ 2 & 1 & 3 \\ 0 & -1 & -3 \end{pmatrix} .
Theorem 3.11. Let T : V → W be a linear transformation between two arbitrary vector spaces V and W with bases B1 = {v1 , v2 , · · · , vn } and B2 = {w1 , w2 , · · · , wm } respectively. Let (v)B1 = (α1 , α2 , · · · , αn ) be the coordinate vector of a vector v ∈ V , let (w)B2 = (β1 , β2 , · · · , βm ) be the coordinate vector of a vector w ∈ W , and let A be the matrix representation of T , all with respect to the given bases. Then
T (v) = w if and only if A(v)B1 = (w)B2 .
Proof. By definition of the coordinate vector, we have
v = α1 v1 + α2 v2 + · · · + αn vn
w = β1 w1 + β2 w2 + · · · + βm wm
Suppose
T (v1 ) = a11 w1 + a21 w2 + · · · + am1 wm
T (v2 ) = a12 w1 + a22 w2 + · · · + am2 wm
⋮
T (vn ) = a1n w1 + a2n w2 + · · · + amn wm
From these equations, the matrix of the linear transformation T is
A = [T ]_{B_1}^{B_2} = \begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{pmatrix}
Now,
w = T (v)
⇔ β1 w1 + β2 w2 + · · · + βm wm = T (α1 v1 + α2 v2 + · · · + αn vn )
= α1 T (v1 ) + α2 T (v2 ) + · · · + αn T (vn )
= α1 (a11 w1 + a21 w2 + ... + am1 wm )
+ α2 (a12 w1 + a22 w2 + ... + am2 wm )
+ · · · + αn (a1n w1 + a2n w2 + ... + amn wm )
= (a11 α1 + a12 α2 + · · · + a1n αn )w1
+ (a21 α1 + a22 α2 + · · · + a2n αn )w2
+ · · · + (am1 α1 + am2 α2 + · · · + amn αn )wm
Comparing the coefficients, we get the following system of equations:
a11 α1 + a12 α2 + · · · + a1n αn = β1
a21 α1 + a22 α2 + · · · + a2n αn = β2
⋮
am1 α1 + am2 α2 + · · · + amn αn = βm
This can be written as the matrix equation
\begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{pmatrix} \begin{pmatrix} \alpha_1 \\ \alpha_2 \\ \vdots \\ \alpha_n \end{pmatrix} = \begin{pmatrix} \beta_1 \\ \beta_2 \\ \vdots \\ \beta_m \end{pmatrix}
⇔ A(v)B1 = (w)B2 .
Hence, proof is completed.
Corollary 3.1. Let V and W be n-dimensional vector spaces and let T : V → W be a linear transformation whose matrix representation with respect to the bases B1 = {v1 , v2 , · · · , vn } and B2 = {w1 , w2 , · · · , wn } is A. Then T is invertible if and only if A is invertible.
Proof. By Theorem 3.11, we have
T (v) = w ⇔ A(v)B1 = (w)B2 .   (3.11)
Now,
T is invertible ⇔ T is one-one and onto.
T is one-one ⇔ ker(T ) = {0̄}
⇔ ( T (v) = 0̄ ⇒ v = 0̄ )
⇔ ( A(v)B1 = 0̄ ⇒ (v)B1 = 0̄ ) (by (3.11))
⇔ Null(A) = {0̄}
⇔ A is one-one (as a map of coordinate vectors).
Again, T is onto
⇔ for any w ∈ W , there exists v ∈ V such that T (v) = w
⇔ for any (w)B2 ∈ Rn there exists (v)B1 ∈ Rn such that A(v)B1 = (w)B2 (by (3.11))
⇔ A is onto.
From this, T is one-one and onto ⇔ A is one-one and onto.
This proves: T is invertible ⇔ A is invertible.
3.8.1 Matrix of the sum of two linear transformations and a scalar multiple of a linear transformation
Let Vn and Wm be n- and m-dimensional vector spaces with bases Bv = {v1 , v2 , · · · , vn } and Bw = {w1 , w2 , · · · , wm } respectively. If TA , TB : Vn → Wm are linear transformations and α ∈ R, then the sum TA + TB : Vn → Wm and the scalar multiple αTA : Vn → Wm are linear transformations.
Moreover, if A and B are the matrix representations of TA and TB respectively with respect to the bases Bv and Bw , then A + B and αA are the matrix representations of TA + TB and αTA with respect to the same bases.
3.8.2 Matrix of composite linear transformation
Let Up , Vn and Wm be p-, n- and m-dimensional vector spaces with bases Bu = {u1 , u2 , · · · , up }, Bv = {v1 , v2 , · · · , vn } and Bw = {w1 , w2 , · · · , wm } respectively. Let TA : Up → Vn and TB : Vn → Wm be linear transformations, and let A_{n×p} and B_{m×n} be their matrix representations with respect to the bases (Bu , Bv ) and (Bv , Bw ) respectively.
Then TB ◦ TA : Up → Wm is a linear transformation having matrix representation (BA)_{m×p} with respect to the bases Bu and Bw .
3.8.3 Matrix of inverse transformation
Let T : Vn → Wn be an invertible linear transformation and T −1 : Wn → Vn its inverse. If A and B are the matrix representations of T and T −1 with respect to the bases (Bv , Bw ) and (Bw , Bv ) respectively, then B = A−1 .
3.9 Change of Basis and Similar Matrices
Sometimes it is imperative to change the basis used in representing a linear transformation T , because relative to a new basis the representation of T may become very much simplified. We now state important results concerning the matrix representation of a linear transformation when the basis is changed. This aspect has an important bearing on the study of canonical forms of matrices.
Theorem 3.12. Let V be an n-dimensional vector space with bases B1 = {v1 , v2 , · · · , vn } and B2 = {w1 , w2 , · · · , wn }. Then there exists a nonsingular matrix
P = [pij ]n×n = [P1 |P2 | · · · |Pn ]
such that wj = p1j v1 + p2j v2 + · · · + pnj vn , j = 1, 2, · · · , n. The nonsingular matrix P is called a (coordinate) transformation matrix or transition matrix.
Proof. Since B1 = {v1 , v2 , · · · , vn } is a basis of the n-dimensional vector space V , each vector wj ∈ B2 is uniquely expressed as a linear combination of the vectors in B1 as
wj = p1j v1 + p2j v2 + · · · + pnj vn , j = 1, 2, · · · , n.
(3.12)
where pij are scalars.
Define the matrix P as
P = \begin{pmatrix} p_{11} & p_{12} & \cdots & p_{1n} \\ p_{21} & p_{22} & \cdots & p_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ p_{n1} & p_{n2} & \cdots & p_{nn} \end{pmatrix} = [P_1 |P_2 | · · · |P_n ],
where the j th column of P is
P_j = \begin{pmatrix} p_{1j} \\ p_{2j} \\ \vdots \\ p_{nj} \end{pmatrix} .   (3.13)
To show that P is nonsingular, use equation (3.12): for scalars α1 , α2 , · · · , αn , we have
α1 w1 + α2 w2 + · · · + αn wn = α1 (p11 v1 + p21 v2 + · · · + pn1 vn )
+ α2 (p12 v1 + p22 v2 + · · · + pn2 vn )
+ · · · + αn (p1n v1 + p2n v2 + · · · + pnn vn )   (3.14)
Now, if
α1 P1 + α2 P2 + · · · + αn Pn = 0̄,
then from equation (3.13) we get
α1 pj1 + α2 pj2 + · · · + αn pjn = 0, j = 1, 2, · · · , n.   (3.15)
From equations (3.14) and (3.15), we get
α1 w1 + α2 w2 + · · · + αn wn = 0̄
⇒ α1 = α2 = · · · = αn = 0,
since B2 = {w1 , w2 , · · · , wn } is a basis and hence w1 , w2 , · · · , wn are linearly independent.
Thus
α1 P1 + α2 P2 + · · · + αn Pn = 0̄,
that is, the matrix equation
P \begin{pmatrix} \alpha_1 \\ \alpha_2 \\ \vdots \\ \alpha_n \end{pmatrix} = 0̄,
has only the trivial solution
α1 = α2 = · · · = αn = 0.
∴ P is an invertible matrix.
∴ P is a nonsingular matrix.
Hence, proof is completed.
Theorem 3.13. Let B1 = {v1 , v2 , · · · , vn } and B2 = {w1 , w2 , · · · , wn } be two bases of an n-dimensional vector space V such that
wj = p1j v1 + p2j v2 + · · · + pnj vn , j = 1, 2, · · · , n,   (3.16)
where P = [pij ] is a nonsingular matrix of order n. If (u)B1 = (α1 , α2 , ..., αn )t and (u)B2 = (β1 , β2 , ..., βn )t are the coordinate vectors of u ∈ V with respect to B1 and B2 respectively, then (u)B2 = P −1 (u)B1 .
Proof. Let (u)B1 and (u)B2 be the coordinate vectors of u ∈ V with respect to B1 and B2 . Then we have
u = α1 v1 + α2 v2 + · · · + αn vn = \sum_{j=1}^{n} \alpha_j v_j   (3.17)
and
u = β1 w1 + β2 w2 + · · · + βn wn = \sum_{i=1}^{n} \beta_i w_i .   (3.18)
Therefore, from equations (3.16), (3.17) and (3.18), we get
\sum_{i=1}^{n} \alpha_i v_i = \sum_{j=1}^{n} \beta_j \left( \sum_{i=1}^{n} p_{ij} v_i \right) = \sum_{i=1}^{n} \left( \sum_{j=1}^{n} p_{ij} \beta_j \right) v_i
⇒ \alpha_i = \sum_{j=1}^{n} p_{ij} \beta_j , i = 1, 2, · · · , n
⇒ \begin{pmatrix} \alpha_1 \\ \alpha_2 \\ \vdots \\ \alpha_n \end{pmatrix} = \begin{pmatrix} p_{11} & p_{12} & \cdots & p_{1n} \\ p_{21} & p_{22} & \cdots & p_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ p_{n1} & p_{n2} & \cdots & p_{nn} \end{pmatrix} \begin{pmatrix} \beta_1 \\ \beta_2 \\ \vdots \\ \beta_n \end{pmatrix}
∴ (u)B1 = P (u)B2
∴ (u)B2 = P −1 (u)B1 (Since P is invertible).
Therefore, proof is completed.
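A small numerical illustration of the change-of-coordinates formula (u)B2 = P −1 (u)B1 , with bases of R2 chosen by us for the sketch:

import numpy as np

# B1 = standard basis of R^2; B2 = {(1,1), (1,-1)}.
# Column j of P holds the B1-coordinates of the j-th B2 vector, as in (3.16).
P = np.array([[1, 1],
              [1, -1]], dtype=float)

u_B1 = np.array([3, 1], dtype=float)   # u = (3, 1) in standard coordinates
u_B2 = np.linalg.solve(P, u_B1)        # (u)_{B2} = P^{-1} (u)_{B1}
print(u_B2)                            # [2. 1.]: indeed u = 2(1,1) + 1(1,-1)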
Theorem 3.14. Let V and W be vector spaces with dim V = n and dim W = m, and let B1 = {v1 , v2 , · · · , vn } and B2 = {w1 , w2 , · · · , wm } be bases of V and W respectively. Let A = [T ]_{B_1}^{B_2} be the m × n matrix of a linear transformation T : V → W with respect to the bases B1 and B2 . If Â is the m × n matrix of T with respect to different bases B̂ = {v̂1 , v̂2 , ..., v̂n } and B̂1 = {ŵ1 , ŵ2 , ..., ŵm }, then there exist invertible matrices B and C of orders n and m respectively, such that
Â = C^{-1}_{m×m} A_{m×n} B_{n×n} .
Proof. Suppose A = [aij ]m×n is the matrix of the linear transformation T with respect to the bases B1 and B2 .
∴ T (vj ) = a1j w1 + a2j w2 + · · · + amj wm = \sum_{i=1}^{m} a_{ij} w_i (1 ≤ j ≤ n).   (3.19)
Similarly, for Â = [âij ], we have
T (v̂j ) = â1j ŵ1 + â2j ŵ2 + · · · + âmj ŵm = \sum_{i=1}^{m} \hat{a}_{ij} ŵ_i (1 ≤ j ≤ n).   (3.20)
By Theorem 3.12, there exist nonsingular coordinate transformation matrices B = [bij ]n×n and C = [cij ]m×m such that
v̂j = \sum_{i=1}^{n} b_{ij} v_i (1 ≤ j ≤ n),   (3.21)
ŵj = \sum_{k=1}^{m} c_{kj} w_k (1 ≤ j ≤ m).   (3.22)
Hence, from the above equations, we get
T (v̂j ) = \sum_{i=1}^{m} \hat{a}_{ij} ŵ_i   (by (3.20))
= \sum_{i=1}^{m} \hat{a}_{ij} \sum_{k=1}^{m} c_{ki} w_k   (by (3.22))
= \sum_{k=1}^{m} \left( \sum_{i=1}^{m} c_{ki} \hat{a}_{ij} \right) w_k ,
so the coefficient of w_k in T (v̂j ) is the (k, j) entry of C Â.   (3.23)
Alternatively, since T is linear, we have
T (v̂j ) = T \left( \sum_{i=1}^{n} b_{ij} v_i \right)   (by (3.21))
= \sum_{i=1}^{n} b_{ij} T (v_i )
= \sum_{i=1}^{n} b_{ij} \left( \sum_{k=1}^{m} a_{ki} w_k \right)   (by (3.19))
= \sum_{k=1}^{m} \left( \sum_{i=1}^{n} a_{ki} b_{ij} \right) w_k ,
so the coefficient of w_k in T (v̂j ) is the (k, j) entry of AB.   (3.24)
From equations (3.23) and (3.24), we get
C Â = AB
∴ Â = C −1 AB.
Hence, proof is completed.
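The relation Â = C −1 AB is easy to sanity-check numerically. In the sketch below (our construction), the old bases are the standard bases of R3 and R2 , so the columns of B and C hold the new basis vectors in old coordinates:

import numpy as np

rng = np.random.default_rng(0)

A = rng.standard_normal((2, 3))   # matrix of T : R^3 -> R^2 w.r.t. standard bases
B = rng.standard_normal((3, 3))   # columns: a new basis of R^3 (almost surely invertible)
C = rng.standard_normal((2, 2))   # columns: a new basis of R^2

A_hat = np.linalg.inv(C) @ A @ B  # matrix of T w.r.t. the new bases

# Check: A_hat sends new coordinates of v to new coordinates of T(v).
v_old = rng.standard_normal(3)
v_new = np.linalg.solve(B, v_old)
Tv_new = np.linalg.solve(C, A @ v_old)
print(np.allclose(A_hat @ v_new, Tv_new))   # True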
Corollary 3.2. If a linear operator T : V → V has matrices A and Â with respect to two different bases of V (the same basis being used for the domain and the codomain in each case), then there exists a nonsingular matrix B such that Â = B −1 AB.
Definition 3.13. Similar Matrices: Let A and  be two matrices of order n.
Then A is said to be similar to  if there exists a nonsingular matrix B such that
B −1 AB = Â.
Theorem 3.15. The relation of similarity is an equivalence relation in the set of
all n × n matrices over the field F .
Theorem 3.16. Similar matrices have the same determinants.
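Theorem 3.16 can be checked numerically; a sketch with matrices of our choosing:

import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])
B = np.array([[1.0, 2.0],
              [1.0, 1.0]])        # nonsingular: det(B) = -1

A_sim = np.linalg.inv(B) @ A @ B  # a matrix similar to A
print(np.linalg.det(A), np.linalg.det(A_sim))   # both equal 6 (up to rounding)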
Example 3.26. If two matrices A and B are similar, then show that A2 and B 2 are
similar. Moreover, if A and B are invertible, then A−1 and B −1 are also similar.
Solution: Since the matrices A and B are similar, by the definition of similarity there exists an invertible matrix C such that
A = C −1 BC   (1)
Obviously,
A2 = A.A
= (C −1 BC)(C −1 BC)
(from (1))
= C −1 B(CC −1 )BC
= C −1 BIBC
= C −1 B 2 C
∴ A2 is similar to B 2 .
If A and B are invertible, then from (1), we have
A−1 = (C −1 BC)−1
= C −1 B −1 (C −1 )−1
(from reversal law of inverse of product of matrices)
= C −1 B −1 C
∴ A−1 is similar to B −1 .
Example 3.27. If two matrices A and B are similar and at least one of them is invertible, then show that AB and BA are also similar.
Solution: Suppose the matrix A is invertible. Then
A−1 (AB)A = (A−1 A)BA = IBA = BA (∵ A−1 A = I),
i.e. BA = A−1 (AB)A.
∴ BA is similar to AB.
Similarly, we can show that AB is similar to BA when B is invertible.
Thus, AB and BA are similar if at least one of them is invertible.
Exercise: 3.6
1. Let T : R3 → R2 be a linear transformation defined by
T (x, y, z) = (x + y + z, y + z).
Find the matrix of T with respect to the bases B = {(−1, 0, 2), (0, 1, 1), (3, −1, 0)}
and B ′ = {(−1, 1), (1, 0)} of R3 and R2 respectively.
2. Let D : P3 → P3 be the linear operator given by D(p) = dp/dx . Find the matrix of D relative to the bases B1 = {1, x, x2 , x3 } and B2 = {1, 1 + x, (1 + x)2 , (1 + x)3 } of P3 .
3. Let T be linear operator on R3 defined by
T (x, y, z) = (3x + z, −2x + y, −x + 2y + z)
Prove that T is invertible and find T −1 .
(Hint: A linear transformation is invertible iff it is one-one and onto.)
4. Let T : R3 → R2 be a linear transformation defined by
T (x, y, z) = (3x + 2y − 4z, x − 5y + 2z).
Find the matrix of T with respect to the bases B = {(1, 1, 1), (1, 1, 0), (1, 0, 0)}
and B ′ = {(1, 3), (2, 5)} of R3 and R2 respectively.
Miscellaneous
(1) Find the standard matrix for the linear transformation T : R3 −→ R4 defined
by T (x, y, z) = (3x − 4y + z, x + y − z, y + z, x + 2y + 3z).
(2) Find the standard matrix for the linear transformation T : R3 −→ R3 defined
by T (x, y, z) = (5x − 3y + z, 2z + 4y, 5x + 3y).
(3) Find the standard matrix for the linear transformation T : R3 −→ R2 defined
by T (x, y, z) = (2x + y, 3y − z) and use it to find T (0, 1, −1).
(4) Let T be the reflection in the line y = x in R2 , so that T (x, y) = (y, x).
(a) Write down the standard matrix of T .
(b) Use the standard matrix to compute T (3, 4).
(5) Find the standard matrix for the linear transformation T : R3 −→ R3 defined
by T (x, y, z) = (3x − 2y + z, 2x − 3y, y − 4z) and use it to find T (2, −1, −1).
(6) Let T1 : R2 −→ R2 defined by T1 (x, y) = (x − 2y, 2x + 3y) and T2 : R2 −→ R2
defined by T2 (x, y) = (y, 0). Compute the standard matrix of T2 ◦ T1 .