
Linear Spaces - Linear Algebra

LINEAR SPACES
ASSOC.PROF.DR. TRAN TUAN NAM
HCMC University of Education
April 4, 2024
Assoc.Prof.Dr. Tran Tuan Nam
Linear spaces
Contents

1. VECTOR SPACES
   Definition of vector spaces and examples
   Linear Independence in Vector Spaces
   Subspaces
   The bases of a vector space - Coordinates
   The solution subspace of a homogeneous system of linear equations
2. LINEAR TRANSFORMATION
   Functions
   Definition of Linear transformation
   Image and kernel of linear mapping
   Matrix of a Linear Mapping
   Eigenvalues - Eigenvectors
   Diagonalizable matrices
3. QUADRATIC FORMS
   Bilinear forms and Quadratic Forms
   Matrices of bilinear forms and quadratic forms
   Canonical forms of quadratic forms
   Indices of inertia & Definite quadratic forms
4. INNER PRODUCT SPACES
   Definitions
   Orthogonal vectors
   Orthogonal basis and orthonormal basis
5. Contents of the Final Exam
Definition
Let V (V ≠ ∅) be a set, whose elements are called vectors, and let K be a numeric field (K = R or C), whose elements are called scalars. Suppose V is equipped with two operations:
i) addition of vectors on V : for all x, y ∈ V there exists a unique vector x + y ∈ V ;
ii) scalar multiplication: for all x ∈ V and k ∈ K there exists a unique vector kx ∈ V .
Then V is called a vector space (linear space) over the field K (a K-vector space) if the following axioms are satisfied for all x, y, z ∈ V and k, l ∈ K:
1) x + y = y + x (commutative law);
2) (x + y) + z = x + (y + z) (associative law);
3) There is a vector 0, called the zero vector, such that
0 + x = x + 0 = x;
4) Each vector x has a negative, that is, a vector −x such that
x + (−x) = 0;
5) (k + l)x = kx + lx (distributive law);
6) k(x + y) = kx + ky (distributive law);
7) k(lx) = (kl)x;
8) 1x = x.
When K = R (the field of real numbers), V is called a real
vector space. When K = C (the field of complex numbers), V is
called a complex vector space. If it is not necessary to specify
K, we call it a vector space for short.
Examples
Example
a) The n-tuple space K n (n ≥ 1). Let K n be the set of all
n-tuples x = (x1 , x2 , . . . , xn ) of scalars x1 , x2 , . . . , xn in K.
K n together with two operations:
(xi ) + (yi ) = (xi + yi ),
k(xi ) = (kxi )
is a vector space over the field K.
Special cases: Rn is a real vector space over R (the n-dimensional Euclidean space); C n is a complex vector space over C.
c) The set R[x] of polynomials with real coefficients, with the usual addition of polynomials and multiplication of a polynomial by a real number, is a vector space over the field of real numbers R. The set Rn [x] of polynomials of degree ≤ n is also a vector space over R.
d) The set of continuous real-valued functions on the closed
interval [a; b] with function addition and function
multiplication by a real number is a real vector space.
e) The set Mm×n (R) of m × n matrices over the field of real
numbers R with matrix addition and matrix multiplication
by a real number is a real vector space.
Remark
The zero vector 0 ∈ V is unique.
For all x ∈ V , the negative of x(−x) is unique.
The equation x + a = b in the unknown x has the unique solution x = b + (−a). We write x = b + (−a) = b − a and call it the difference of the vectors b and a.
The equality a + c = b + c implies a = b.
For all x ∈ V , 0 · x = 0.
For all k ∈ K, k · 0 = 0.
Linear combinations of vectors
Definition
Let v1 , v2 , ..., vk be vectors in a K-vector space V . If
c1 , c2 , . . . , ck ∈ K are any scalars, the vector
c1 v1 + c2 v2 + · · · + ck vk is called a linear combination of
v1 , v2 , . . . , vk .
Example
a) The zero vector 0 is always in a trivial way a linear
combination of any collection v1 , v2 , . . . , vk , namely
0 = 0v1 + 0v2 + · · · + 0vk .
b) Rn , e1 = (1, 0, . . . , 0), e2 = (0, 1, 0, . . . , 0), . . . , en =
(0, . . . , 0, 1). Then for each vector x = (x1 , . . . , xn ) ∈ Rn we
have
x = x1 e1 + · · · + xn en .
c) In R3 , let u = (−1, 0, 2), v = (1, 1, 3), w = (0, 2, m). Find m so that the vector w is a linear combination of u and v.
Solution. w is a linear combination of the two vectors u and v
⇐⇒ w = k1 u + k2 v
⇐⇒ (0, 2, m) = (−k1 , 0, 2k1 ) + (k2 , k2 , 3k2 )
⇐⇒ (0, 2, m) = (−k1 + k2 , k2 , 2k1 + 3k2 )
⇐⇒ the following system of linear equations is consistent:
   −k1 + k2 = 0
         k2 = 2
  2k1 + 3k2 = m
Row reduction of the augmented matrix:
[−1 1 | 0; 0 1 | 2; 2 3 | m]  — 2r1 + r3 →  [−1 1 | 0; 0 1 | 2; 0 5 | m]  — −5r2 + r3 →  [−1 1 | 0; 0 1 | 2; 0 0 | m − 10]
The system of linear equations is consistent
⇐⇒ m − 10 = 0 ⇐⇒ m = 10.
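The computation above can be cross-checked numerically; a minimal sketch with NumPy (variable names are mine, not from the text), taking m = 10:

```python
import numpy as np

u = np.array([-1.0, 0.0, 2.0])
v = np.array([1.0, 1.0, 3.0])
w = np.array([0.0, 2.0, 10.0])  # m = 10

# w lies in span{u, v} exactly when appending w does not raise the rank.
rank_uv = np.linalg.matrix_rank(np.vstack([u, v]))
rank_uvw = np.linalg.matrix_rank(np.vstack([u, v, w]))
in_span = rank_uv == rank_uvw

# Recover the coefficients k1, k2 by least squares on the 3x2 system.
coeffs, *_ = np.linalg.lstsq(np.column_stack([u, v]), w, rcond=None)
```

With m = 10 the rank check succeeds and the coefficients agree with the system above (k1 = k2 = 2).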
Definition
Two sets of vectors {x1 , . . . , xn } and {y1 , . . . , ym } are said to be equivalent if each vector of either set is a linear combination of the vectors of the other set.
Remark
Equivalence of sets of vectors is indeed an equivalence relation.
Linear Independence
Definition
Let V be a K-vector space and let X be a non-empty subset of
V . Then X is said to be linearly dependent if there are distinct
vectors {v1 , v2 , . . . , vk } in X, and scalars {c1 , c2 , . . . , ck }, not all
of them zero, such that c1 v1 + c2 v2 + · · · + ck vk = 0. A subset
which is not linearly dependent is said to be linearly
independent. Thus a set of distinct vectors {v1 , v2 , . . . , vk } is
linearly independent if and only if an equation of the form
c1 v1 + c2 v2 + · · · + ck vk = 0 always implies that
c1 = c2 = · · · = ck = 0.
Example
a) In R2 with vectors
x1 = (0, −1), x2 = (1, 4), x3 = (2, 3).
{x1 , x2 , x3 } is linearly dependent, since
5x1 + 2x2 − x3 = (0, 0) = 0.
{x1 , x2 } is linearly independent, because
k1 x1 + k2 x2 = 0
⇐⇒ (0, −k1 ) + (k2 , 4k2 ) = 0
⇐⇒ (k2 , −k1 + 4k2 ) = 0
⇐⇒ k2 = 0 and −k1 + 4k2 = 0
⇐⇒ k1 = k2 = 0.
b) Determine linear independence or linear dependence in M2 (R):
u1 = [1 −1; 2 0], u2 = [−1 0; 0 2], u3 = [0 1; −2 5].
Solution:
Assume that k1 u1 + k2 u2 + k3 u3 = 0.
⇐⇒ [k1 −k1 ; 2k1 0] + [−k2 0; 0 2k2 ] + [0 k3 ; −2k3 5k3 ] = 0
⇐⇒ [k1 − k2 , −k1 + k3 ; 2k1 − 2k3 , 2k2 + 5k3 ] = 0
⇐⇒ k1 − k2 = 0, −k1 + k3 = 0, 2k1 − 2k3 = 0, 2k2 + 5k3 = 0
⇐⇒ k1 = k2 = k3 = 0
⇒ {u1 , u2 , u3 } is linearly independent.
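The same conclusion can be reached mechanically by flattening each matrix into a vector of R4 and computing a rank; a sketch with NumPy (names are illustrative):

```python
import numpy as np

u1 = np.array([[1, -1], [2, 0]])
u2 = np.array([[-1, 0], [0, 2]])
u3 = np.array([[0, 1], [-2, 5]])

# Stack the flattened matrices as rows; independence in M2(R)
# is equivalent to independence of these rows in R^4.
M = np.vstack([u1.ravel(), u2.ravel(), u3.ravel()])
rank = int(np.linalg.matrix_rank(M))
independent = rank == 3
```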
Property
(1) Let V be a vector space and X, Y non-empty subsets of V
such that X ⊂ Y . The following statements are true: If Y
is linearly independent, then X is also linearly
independent. If X is linearly dependent, then Y is also
linearly dependent.
(2) A set of vectors {α1 , α2 , . . . , αm } is linearly dependent if
and only if:
a) If m = 1, then α1 = 0,
b) If m > 1, then one of its vectors must be a linear
combination of the other vectors.
Proof: (⇒)
a) is clear.
b) Suppose that the vectors α1 , α2 , . . . , αm are linearly dependent. Then there are scalars x1 , x2 , . . . , xm , not all zero, such that x1 α1 + x2 α2 + · · · + xm αm = 0. Hence there exists i such that xi ≠ 0. We have
−xi αi = x1 α1 + x2 α2 + · · · + xi−1 αi−1 + xi+1 αi+1 + · · · + xm αm
Therefore
αi = −(x1 /xi )α1 − (x2 /xi )α2 − · · · − (xi−1 /xi )αi−1 − (xi+1 /xi )αi+1 − · · · − (xm /xi )αm
So αi is a linear combination of the remaining vectors.
(⇐):
When m = 1, it is clear.
Let m > 1 and assume that some vector αi is a linear combination of the remaining vectors, that is,
αi = x1 α1 + x2 α2 + · · · + xi−1 αi−1 + xi+1 αi+1 + · · · + xm αm
Then
x1 α1 + x2 α2 + · · · + xi−1 αi−1 − αi + xi+1 αi+1 + · · · + xm αm = 0
Thus the vectors α1 , α2 , . . . , αm are linearly dependent.
(3) If the set of vectors {α1 , α2 , . . . , αm } of the vector space V is linearly independent, then:
Every vector β ∈ V can be expressed in at most one way as a linear combination of {α1 , α2 , . . . , αm };
{α1 , α2 , . . . , αm , β} is linearly dependent if and only if β is a linear combination of {α1 , α2 , . . . , αm } (and that linear combination is unique).
In the vector space Rn :
Let {a1 , a2 , . . . , am } ⊂ Rn . Form the matrix A whose rows (or columns) are these vectors. Then
{a1 , a2 , . . . , am } is linearly independent ⇐⇒ r(A) = m;
{a1 , a2 , . . . , am } is linearly dependent ⇐⇒ r(A) < m.
Remark
A set of two vectors {a, b} is linearly dependent ⇐⇒ one vector is a scalar multiple of the other.
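The rank criterion can be applied directly in code; a small NumPy sketch using the four vectors of the example that follows:

```python
import numpy as np

# Rows of A are the vectors a1, ..., a4.
A = np.array([
    [1, 1, 2, 2],
    [2, 3, 6, 6],
    [3, 4, 8, 8],
    [5, 7, 14, 14],
])
m = A.shape[0]
r = int(np.linalg.matrix_rank(A))
linearly_dependent = r < m
```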
Example
In R4 , {a1 = (1, 1, 2, 2), a2 = (2, 3, 6, 6), a3 = (3, 4, 8, 8), a4 = (5, 7, 14, 14)}.
We have:
A = [1 1 2 2; 2 3 6 6; 3 4 8 8; 5 7 14 14]
 — −2r1 + r2 , −3r1 + r3 , −5r1 + r4 → [1 1 2 2; 0 1 2 2; 0 1 2 2; 0 2 4 4]
 — −r2 + r3 , −2r2 + r4 → [1 1 2 2; 0 1 2 2; 0 0 0 0; 0 0 0 0]
⇒ r(A) = 2 < 4.
⇒ {a1 , a2 , a3 , a4 } is linearly dependent. It is clear that {a1 , a2 } is linearly independent.
Example
In R3 , u = (−1, 0, 2), v = (1, 1, 3), w = (0, 2, m). Find m so that the vector w is a linear combination of the two vectors u and v.
Solution: It is easy to see that {u, v} is linearly independent, so w is a linear combination of u and v ⇐⇒ {u, v, w} is linearly dependent.
We form the matrix
A = [−1 0 2; 1 1 3; 0 2 m]
{u, v, w} is linearly dependent
⇐⇒ r(A) < 3 ⇐⇒ det(A) = 0 ⇐⇒ m − 10 = 0 ⇐⇒ m = 10.
Rank of a set of vectors
Definition
Let {xi }i∈I be a set of vectors in a vector space V . A subset {xj }j∈J (J ⊂ I) is called a maximal linearly independent subset of {xi }i∈I if it is linearly independent and adding any vector xi (i ∈ I \ J) to it produces a linearly dependent subset.
Property
If a subset {xj }j∈J (J ⊂ I) of the set {xi }i∈I is maximal
linearly independent, then every vector xi (i ∈ I) is a linear
combination of the subset {xj }j∈J and that linear
combination is unique.
Let {xi }i∈I (I a finite set) be a finite set of vectors in the vector space V , and let {xj }j∈J (J ⊂ I) be a linearly independent subset. Then we can construct a maximal linearly independent subset of {xi }i∈I containing {xj }j∈J .
We have the following lemma (Basic lemma of linear
independence (or linear dependence)).
Lemma
Let {x1 , x2 , . . . , xm } and {y1 , y2 , . . . , yn } be sets of vectors in
the vector space V .
If {x1 , x2 , . . . , xm } is linearly independent and every
xi (i = 1, . . . , m) is a linear combination of {y1 , y2 , . . . , yn }, then
m ≤ n.
Proof: We prove the lemma by contradiction.
Assume that m > n. By the hypothesis we have
0 ≠ x1 = λ1 y1 + λ2 y2 + · · · + λn yn ,
where λ1 , λ2 , . . . , λn are not simultaneously zero. Without loss of generality, assume λ1 ≠ 0; then
y1 = (1/λ1 )x1 − (λ2 /λ1 )y2 − · · · − (λn /λ1 )yn .
Since every xi (i = 2, . . . , m) is a linear combination of {y1 , y2 , . . . , yn }, it follows that xi is a linear combination of {x1 , y2 , . . . , yn }. In particular x2 = µ1 x1 + µ2 y2 + · · · + µn yn .
Since {x1 , x2 } is linearly independent, the scalars µ2 , . . . , µn are not simultaneously zero, say µ2 ≠ 0; then
y2 = −(µ1 /µ2 )x1 + (1/µ2 )x2 − (µ3 /µ2 )y3 − · · · − (µn /µ2 )yn .
Thus xi is a linear combination of {x1 , x2 , y3 , . . . , yn }. Continuing in this way, we conclude that each xi (n < i ≤ m) is a linear combination of {x1 , x2 , . . . , xn }. This contradicts the assumption that {x1 , x2 , . . . , xm } is linearly independent. Therefore m ≤ n.
Theorem and Definition
Let (xi )i∈I (I a finite set) be a finite set of vectors in a vector space V . Then all of its maximal linearly independent subsets have the same number of elements; this number is called the rank of the given set of vectors.
We write: rank (xi )i∈I or r(xi )i∈I .
In the space Rn : let {a1 , a2 , . . . , am } be a set of vectors in Rn . Form the matrix A whose rows (or columns) are these vectors. Then
r(A) = r ⇐⇒ r{a1 , a2 , . . . , am } = r
Example
R4 , {a1 = (1, 1, 2, 2), a2 = (2, 3, 6, 6), a3 = (3, 4, 8, 8), a4 = (5, 7, 14, 14)}.
r(A) = 2 ⇒ r{a1 , a2 , a3 , a4 } = 2 ⇒ {a1 , a2 } is a maximal linearly independent subset.
Definition of subspaces
Definition
Let V be a vector space. A subspace of V is a subset W which
is itself a vector space with respect to the same rules of addition
and scalar multiplication as V .
Theorem
Let W be a non-empty subset of the vector space V (over the
field K). The following statements are equivalent:
i) W is a subspace of V ;
ii) For all x, y ∈ W and k, l ∈ K, kx + ly ∈ W ;
iii) For every x, y ∈ W and every k ∈ K, x + y ∈ W and
kx ∈ W .
Example
a) V itself is a subspace, for trivial reasons. It is often called
the improper subspace.
b) Zero space 0, written 0 or 0V , is a subspace of V .
c) If C is considered a vector space over R, then R is a
sub-vector space of C.
d) The set Rn [x] of polynomials with real coefficients of degree ≤ n is a subspace of the vector space R[x] of polynomials with real coefficients (over the field of real numbers R).
e) The set of 2 × 2 matrices over the field of real numbers R of the form
[a 0; 0 b], a, b ∈ R
is a subspace of the vector space of 2 × 2 matrices.
Definition
Let X be any non-empty subset of a vector space V . The set of all linear combinations of finite sets of vectors of X is called the linear hull or span of X, denoted by ⟨X⟩ or span X.
If X = ∅, we define span ∅ = ⟨∅⟩ = 0 (the trivial zero vector space).
When X = {x1 , x2 , . . . , xn }, we write span X = ⟨X⟩ = ⟨x1 , x2 , . . . , xn ⟩.
Sum and intersection of subspaces
Theorem and Definition
If X is a non-empty subset of a vector space V , then ⟨X⟩ (span
X) is a subspace of V . We refer to ⟨X⟩ (span X) as the
subspace of V generated (or spanned) by X and X generates
⟨X⟩.
Let {Wi }i∈I be subspaces of the vector space V ; then their intersection ∩i∈I Wi is a subspace of V .
If X is a non-empty subset of vectors of V , then the intersection
of all subspaces of V containing X is the smallest subspace of V
containing X.
If V = ⟨X⟩, we say the vector space V is generated by X, and if
X is a finite set, we say V is a finitely generated vector space.
If, on the other hand, no finite subset generates V , then V is
said to be infinitely generated.
Remark
There are infinitely generated vector spaces, for example:
V = R[x] is the vector space of polynomials with real
coefficients, then V is infinitely generated.
Definition
Let {Wi }i∈I be subspaces of the vector space V . The linear hull ⟨∪i∈I Wi ⟩ of the set ∪i∈I Wi is called the sum of the subspaces {Wi }i∈I , denoted by Σi∈I Wi .
Remark
If W = W1 + W2 + · · · + Wn then W is the set of all vectors
x ∈ V that can be written in the form x = x1 + x2 + · · · + xn
with xi ∈ Wi (i = 1, 2, . . . , n). However, this way of writing
may not be unique.
Definition
Let W1 , W2 , . . . , Wn be subspaces of a vector space V , and let W = W1 + W2 + · · · + Wn . Then W is said to be the direct sum of W1 , W2 , . . . , Wn if every vector x ∈ W can be written uniquely in the form
x = x1 + x2 + · · · + xn , where xi ∈ Wi (i = 1, 2, . . . , n).
We write
W = W1 ⊕ W2 ⊕ · · · ⊕ Wn
Remark
The second condition in the definition is equivalent to the following condition:
Wi ∩ (Σj≠i Wj ) = 0, for all i = 1, 2, . . . , n
Example
V = Rn , e1 = (1, 0, . . . , 0), e2 = (0, 1, 0, . . . , 0), . . . , en =
(0, . . . , 0, 1), then V = Rn =< e1 > ⊕ < e2 > ⊕ · · · ⊕ < en >.
Theorem
Let W1 and W2 be subspaces of the vector space V . The sum
W = W1 + W2 is a direct sum if and only if
W1 ∩ W2 = {0}
Proof:
(⇒) Assume that W = W1 + W2 is a direct sum.
Take x ∈ W1 ∩ W2 ⇒ 0 = (−x) + x, −x ∈ W1 , x ∈ W2 .
On the other hand, 0 = 0 + 0, 0 ∈ W1 , 0 ∈ W2 ⇒ x = 0
⇒ W1 ∩ W2 = {0}.
(⇐) Assume that W1 ∩ W2 = {0}.
Let x ∈ W , suppose there are two representations of x:
x = x1 + x2 and x = x′1 + x′2 where x1 , x′1 ∈ W1 and x2 , x′2 ∈ W2 .
Then 0 = (x1 − x′1 ) + (x2 − x′2 ) ⇒ x1 − x′1 = x′2 − x2 ∈ W1 ∩ W2
⇒ x1 = x′1 , x2 = x′2 ⇒ W = W1 + W2 is a direct sum.
Remark
If V = W1 ⊕ W2 , then we say that W2 is a linear
complement of W1 in V .
Let U be a given subspace of a finitely generated vector
space V , then there exists a linear complement W of U (in
V ).
Definition
Definition
Let B be a non-empty subset of a vector space V . Then B is
called a basis of V if both of the following are true:
i) B is linearly independent;
ii) B generates V.
Remark
A basis of V is a maximal linearly independent set of vectors of
V.
Every finitely generated vector space has a finite basis.
Theorem and Definition
Theorem and Definition
Every basis of a (non-zero) finitely generated vector space V
has a finite and equal number of elements, this number is called
the dimension of V , denoted dim(V ) (or dimK (V )). When
dim(V ) = n, we say that V is an n-dimensional vector space.
Proof: Suppose there are two bases
B = {e1 , e2 , e3 , . . . , en }, B ′ = {e′1 , e′2 , e′3 , . . . , e′m }
Since B is linearly independent and each vector of B is a linear
combination of B ′ , by the Basic Lemma we have n ≤ m.
Similarly, m ≤ n. Therefore m = n.
Remark
a) Of course, a zero space does not have a basis; however it is
convenient to define the dimension of a zero space to be 0.
b) If the set of vectors {e1 , e2 , e3 , . . . , en } is a basis of V , then every vector in V can be written in a unique way as a linear combination of {e1 , e2 , e3 , . . . , en }. Conversely, if every vector in V can be written in a unique way as a linear combination of {e1 , e2 , e3 , . . . , en }, then {e1 , e2 , e3 , . . . , en } is a basis of V .
c) Let V be a finitely generated vector space with positive
dimension n. Then
i) Any set of n linearly independent vectors of V is a basis;
ii) Any set of n vectors that generates V is a basis.
Theorem
Let W1 , W2 be finite dimensional subspaces of the vector space
V, then:
dim(W1 + W2 ) = dimW1 + dimW2 − dim(W1 ∩ W2 ).
Corollary
dim(W1 ⊕ W2 ) = dimW1 + dimW2 .
Theorem
In the space Rn :
The set of n vectors
e1 = (1, 0, . . . , 0), e2 = (0, 1, 0, . . . , 0), . . . , en = (0, . . . , 0, 1)
makes a basis of Rn and is called the standard basis.
Hence dimRn = n.
Let {a1 , a2 , . . . , am } be a set of vectors in Rn and let W be the subspace generated by {a1 , a2 , . . . , am }. Form the matrix A whose rows are these vectors. Then
dim W = r(A) = r{a1 , a2 , . . . , am }, and a maximal linearly independent subset of {a1 , a2 , . . . , am } is a basis of W .
Example
In R4 , let W be the subspace generated by the set of vectors
{a1 = (1, 1, 2, 2), a2 = (2, 3, 6, 6), a3 = (3, 4, 8, 8), a4 = (5, 7, 14, 14)}.
Find dim W and a basis of W .
A = [1 1 2 2; 2 3 6 6; 3 4 8 8; 5 7 14 14]
 — −2r1 + r2 , −3r1 + r3 , −5r1 + r4 → [1 1 2 2; 0 1 2 2; 0 1 2 2; 0 2 4 4]
 — −r2 + r3 , −2r2 + r4 → [1 1 2 2; 0 1 2 2; 0 0 0 0; 0 0 0 0]
⇒ dim W = r(A) = 2.
⇒ {a1 , a2 } is a basis. It is also possible to take {(1, 1, 2, 2), (0, 1, 2, 2)} as a basis.
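The row reduction and the resulting basis can be reproduced exactly with SymPy's rref; a sketch (the nonzero rows of the reduced form span the same row space W and are independent):

```python
from sympy import Matrix

A = Matrix([
    [1, 1, 2, 2],
    [2, 3, 6, 6],
    [3, 4, 8, 8],
    [5, 7, 14, 14],
])
# rref() returns the reduced row echelon form and the pivot columns;
# the number of pivots is r(A) = dim W.
R, pivots = A.rref()
dim_W = len(pivots)
basis = [list(R.row(i)) for i in range(dim_W)]
```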
Example
R4 , let W be the subspace generated by the set of vectors
{a1 = (1, 3, 1, 2), a2 = (−2, 3, 0, 6), a3 = (3, 4, 1, 2), a4 = (4, 7, 4, 1)}
Find dimW and a basis of W .
Coordinates of a vector relative to a basis
Definition
Let V be a finite dimensional K-vector space and
B = {e1 , e2 , e3 . . . , en } is an (ordered) basis. Then every vector
x ∈ V can be expressed in a unique way in the form
x = x1 e1 + x2 e2 + · · · + xn en
We shall call x1 , x2 , . . . , xn the coordinates of x relative to (with respect to) the ordered basis B = {e1 , e2 , e3 , . . . , en }; xi (i = 1, 2, . . . , n) is called the ith coordinate of x relative to the (ordered) basis B.
We write (x)B = (x1 , x2 , . . . , xn ), or [x]B = (x1 , x2 , . . . , xn )T (the coordinate matrix of x, a column vector).
Example
R3 , U = {u1 = (1, 1, 1), u2 = (−1, 0, 1), u3 = (1, 1, 0)}.
a) Prove that U is a basis of R3 .
b) Find the coordinates of the vector v = (1, 2, 3) relative to
the basis U .
Solution:
a) det [1 1 1; −1 0 1; 1 1 0] = (0 + 1 − 1) − (0 + 1 + 0) = −1 ≠ 0.
⇒ U is linearly independent ⇒ U is a basis of R3 .
b) Assume that (v)U = (v1 , v2 , v3 )
⇒ v = v1 u1 + v2 u2 + v3 u3
⇒ (1, 2, 3) = (v1 , v1 , v1 ) + (−v2 , 0, v2 ) + (v3 , v3 , 0)
⇒ (1, 2, 3) = (v1 − v2 + v3 , v1 + v3 , v1 + v2 )
⇒ the system
  v1 − v2 + v3 = 1
  v1 + v3 = 2
  v1 + v2 = 3
⇒ v1 = 2, v2 = 1, v3 = 0 ⇒ (v)U = (2, 1, 0).
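Finding coordinates amounts to solving one linear system whose columns are the basis vectors; a NumPy sketch for v = (1, 2, 3) (names are mine):

```python
import numpy as np

# Columns of U_mat are u1, u2, u3; solving U_mat @ c = v gives (v)_U.
U_mat = np.column_stack([
    [1.0, 1.0, 1.0],   # u1
    [-1.0, 0.0, 1.0],  # u2
    [1.0, 1.0, 0.0],   # u3
])
v = np.array([1.0, 2.0, 3.0])
coords = np.linalg.solve(U_mat, v)
```

This reproduces (v)U = (2, 1, 0).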
Example
R3 , U = {u1 = (1, 1, −1), u2 = (0, −1, 1), u3 = (1, 0, 1)}.
a) Prove that U is a basis of R3 .
b) Find the coordinates of the vector v = (3, 1, 0) relative to
the basis U .
Exercise: Work out similar examples yourself.
Matrix of the change of basis and formula of the change
of coordinates
Let V be a finite dimensional K-vector space and bases
B = {e1 , e2 , e3 . . . , en }, B ′ = {e′1 , e′2 , e′3 . . . , e′n }. Take the vector
x ∈ V , we have
(x)B = (x1 , x2 , . . . , xn ), (x)B ′ = (x′1 , x′2 , . . . , x′n ).
Assume that
e′1 = c11 e1 + c21 e2 + · · · + cn1 en
e′2 = c12 e1 + c22 e2 + · · · + cn2 en
.................................
e′n = c1n e1 + c2n e2 + · · · + cnn en
The matrix
C = [c11 c12 . . . c1n ; c21 c22 . . . c2n ; . . . ; cn1 cn2 . . . cnn ]
(whose jth column contains the coordinates of e′j relative to B) is called the matrix of the change of basis from B to B ′ .
Now, we have
x = Σj=1..n x′j e′j = Σj=1..n x′j (Σi=1..n cij ei ) = Σi=1..n (Σj=1..n cij x′j ) ei
⇒ xi = Σj=1..n cij x′j (i = 1, 2, . . . , n)   (1)
(1) is called the formula of the change of coordinates.
The matrix form of formula (1):
[x]B = C[x]B ′
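The matrix form is just a matrix-vector product; a tiny 2-dimensional sketch with NumPy (the basis B′ and the coordinates below are invented for illustration):

```python
import numpy as np

# Column j of C holds the B-coordinates of the new basis vector e'_j.
# Hypothetical example: e'_1 = e1, e'_2 = e1 + 2*e2.
C = np.array([[1.0, 1.0],
              [0.0, 2.0]])

x_Bprime = np.array([3.0, 1.0])  # coordinates of x relative to B'
x_B = C @ x_Bprime               # formula (1): [x]_B = C [x]_{B'}

# The inverse matrix changes coordinates back from B to B'.
x_back = np.linalg.solve(C, x_B)
```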
Theorem
If C is the matrix of the change of basis from B to B ′ then:
a) C is non-singular (i.e. detC ̸= 0).
b) The inverse matrix C −1 is a matrix of the change of basis
from B’ to B.
Example
In R3 with the standard basis
ε = {e1 = (1, 0, 0), e2 = (0, 1, 0), e3 = (0, 0, 1)}
and the set of vectors
U = {u1 = (1, 1, 1), u2 = (−1, 0, 1), u3 = (1, 1, 0)}.
a) Prove that U is a basis of R3 .
b) Make the matrix of the change of basis and the formula of
the change of coordinates from ε to U .
c) Make the matrix of the change of basis and the formula of
the change of coordinates from U to ε.
d) Let v = (1, 2, 3) ∈ R3 , find the coordinates of v relative to
the basis U .
e) Prove that ω = {ω1 = (1, 2, 0), ω2 = (0, 1, 2), ω3 = (0, 0, 2)}
is also a basis of R3 , make the matrix of the change of basis
and the formula of the change of coordinates from U to ω.
a) As in the earlier example, the determinant of the matrix with rows u1 , u2 , u3 equals −1 ≠ 0, so U is a basis of R3 .
b) Express each ui in the standard basis ε = {e1 = (1, 0, 0), e2 = (0, 1, 0), e3 = (0, 0, 1)}:
(∗) u1 = e1 + e2 + e3 , u2 = −e1 + e3 , u3 = e1 + e2
⇒ C = [1 −1 1; 1 0 1; 1 1 0]
The formula of the change of coordinates from ε to U :
[x]ε = C[x]U
c) (∗∗) e1 = u1 − u2 − u3 , e2 = −u1 + u2 + 2u3 , e3 = u1 − u3
⇒ C −1 = [1 −1 1; −1 1 0; −1 2 −1] ⇒ [x]U = C −1 [x]ε
d) [v]U = C −1 [v]ε = [1 −1 1; −1 1 0; −1 2 −1] (1, 2, 3)T = (2, 1, 0)T
e) We have:
ω1 = e1 + 2e2 , ω2 = e2 + 2e3 , ω3 = 2e3
and, by (∗∗), e1 = u1 − u2 − u3 , e2 = −u1 + u2 + 2u3 , e3 = u1 − u3 .
⇒ ω1 = −u1 + u2 + 3u3 , ω2 = u1 + u2 , ω3 = 2u1 − 2u3
⇒ D = [−1 1 2; 1 1 0; 3 0 −2] ⇒ [x]U = D[x]ω .
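The matrix D can also be obtained by composing two changes of basis: if C_U and C_w hold the standard coordinates of the U-vectors and the ω-vectors in their columns, then [x]ε = C_U [x]U = C_w [x]ω, hence D = C_U⁻¹ C_w. A NumPy sketch (names are mine):

```python
import numpy as np

# Columns: standard coordinates of u1, u2, u3 and of w1, w2, w3.
C_U = np.column_stack([[1.0, 1.0, 1.0], [-1.0, 0.0, 1.0], [1.0, 1.0, 0.0]])
C_w = np.column_stack([[1.0, 2.0, 0.0], [0.0, 1.0, 2.0], [0.0, 0.0, 2.0]])

# [x]_U = C_U^{-1} C_w [x]_w, so D = C_U^{-1} C_w;
# solve avoids forming the inverse explicitly.
D = np.linalg.solve(C_U, C_w)
```

This reproduces D = [−1 1 2; 1 1 0; 3 0 −2] from the computation above.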
f) Make the matrix of the change of basis and the formula of
the change of coordinates from ω to U .
g) Let y = (2, −1, 4) ∈ R3 , find the coordinates of y relative to
the basis ω.
Exercise: Find similar examples yourself.
A homogeneous system of linear equations has the form:
a11 x1 + a12 x2 + · · · + a1n xn = 0
a21 x1 + a22 x2 + · · · + a2n xn = 0
. . . . . . . . . . . . . . . . .
am1 x1 + am2 x2 + · · · + amn xn = 0
Each solution c = (c1 , c2 , . . . , cn ) ∈ Rn is regarded as a vector. Let W be the set of all solutions of the homogeneous system of linear equations. Then
∀c = (c1 , c2 , . . . , cn ), c′ = (c′1 , c′2 , . . . , c′n ) ∈ W, k ∈ R
⇒ c + c′ ∈ W, kc ∈ W
It follows that W is a subspace of Rn , called the solution space of the homogeneous system of linear equations.
Basis and dimension of the solution subspace W .
Let A be the coefficient matrix; then:
If r(A) = n, then W = 0 and dim W = 0.
If r(A) = r < n, then dim W = n − r. The solution of the system depends on n − r parameters; taking the n − r parameter tuples (1, 0, . . . , 0), (0, 1, 0, . . . , 0), . . . , (0, . . . , 0, 1) yields a basis of W consisting of n − r vectors (also called the basis set of solutions).
Example
Find a basis and the dimension of the solution subspace W :
x1 + 2x2 − x3 + 3x4 − 4x5 = 0
2x1 + 4x2 − 2x3 + 7x4 + 5x5 = 0
2x1 + 4x2 − 2x3 + 4x4 − 2x5 = 0
Solution:
A = [1 2 −1 3 −4; 2 4 −2 7 5; 2 4 −2 4 −2]
 — −2r1 + r2 , −2r1 + r3 → [1 2 −1 3 −4; 0 0 0 1 13; 0 0 0 −2 6]
 — 2r2 + r3 → [1 2 −1 3 −4; 0 0 0 1 13; 0 0 0 0 32]
 — (1/32)r3 → [1 2 −1 3 −4; 0 0 0 1 13; 0 0 0 0 1]
⇒ r(A) = 3 ⇒ dim W = 5 − 3 = 2.
The system is equivalent to
x1 + 2x2 − x3 + 3x4 − 4x5 = 0
x4 + 13x5 = 0
x5 = 0
⇔ x1 = −2t + l, x2 = t ∈ R, x3 = l ∈ R, x4 = 0, x5 = 0.
Setting t = 1, l = 0 and t = 0, l = 1 we obtain a basis:
{(−2, 1, 0, 0, 0), (1, 0, 1, 0, 0)}.
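A basis of the solution subspace can be computed exactly with SymPy's nullspace; a sketch for the system above:

```python
from sympy import Matrix

A = Matrix([
    [1, 2, -1, 3, -4],
    [2, 4, -2, 7, 5],
    [2, 4, -2, 4, -2],
])
null_basis = A.nullspace()  # basis vectors of W, as column matrices
dim_W = len(null_basis)     # equals n - r(A) = 5 - 3
```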
Exercise
Let X, Y be subspaces of R4 :
X = {(x1 , x2 , x3 , x4 ) ∈ R4 | x1 + x2 − 2x3 + 3x4 = 0}
Y = {(x1 , x2 , x3 , x4 ) ∈ R4 | −x1 − x2 + x3 + x4 = 0, x1 + 3x2 − 4x3 + x4 = 0}.
Find:
a) dim(X ∩ Y ).
b) dim(X + Y ).
Exercise
Let X, Y be subspaces of R4 :
X = {(x1 , x2 , x3 , x4 ) ∈ R4 | 2x1 + x2 + 2x3 + x4 = 0, 3x1 + x2 − x3 + x4 = 0}
Y = {(x1 , x2 , x3 , x4 ) ∈ R4 | 3x1 + x2 + 2x3 + x4 = 0, 4x1 + x2 − x3 + 2x4 = 0, 2x1 + 3x2 + x3 + x4 = 0}.
Find:
a) dim X, dim Y and a basis of X, Y .
b) dim(X ∩ Y ), dim(X + Y ).
Exercise
Let U and V be subspaces of the vector space R4 defined as follows:
U = ⟨(3, 1, 2, 1), (2, −1, 1, 1), (5, 0, 3, 2)⟩
V = {(x1 , x2 , x3 , x4 ) ∈ R4 | 3x1 − 3x2 + x3 + x4 = 0, 2x1 + x2 − 3x3 + x4 = 0, 5x1 − 4x2 + 2x3 − 6x4 = 0}.
Find:
a) the dimension and a basis of each subspace U and V ;
b) the dimensions of the subspaces U + V and U ∩ V .
Definition and Examples
Definition
Let X and Y be two sets. A function (mapping or map) f from
X to Y is a rule which associates to each x ∈ X a unique
element f (x) ∈ Y . We write:
f : X → Y, x 7→ f (x)
X is called the domain of f , and Y is called the codomain of f .
If f (x) = y we say that y is the image of x under (by) f (the
value of f at x) and x is a pre-image of y.
The set
f (X) = {y ∈ Y |y = f (x) for some x ∈ X}
is called the image (range) of f .
Example
a) Given that X = {1, 2, 3, 4}, Y = {a, b, c, d, e}.
The following rule: 1 7→ a, 2 7→ a, 3 7→ c, 4 7→ d defines a
function f : X → Y .
The following rule: 1 7→ a, 2 7→ b, 3 7→ c does not define a
function, because 4 has no associated element.
The following rule: 1 7→ a, 2 7→ b, 3 7→ c, 3 7→ d, 4 7→ e does
not define a function, because 3 has two associated elements.
b) The rule that associates each element x ∈ R with x2 ∈ R
defines a function f : R → R, x 7→ x2 .
For any set X, the function 1X : X → X, x 7→ x is called the
identity function (We sometimes use the notation idX instead of
1X ).
Two functions f : X → Y and g : X → Y are said to be equal,
denoted by f = g, if f (x) = g(x) for all x ∈ X.
Images and inverse images
Let f : X → Y be a function.
If A ⊂ X, the image of A (by f ) is a subset of Y ,
f (A) = {f (x) ∈ Y |x ∈ A} = {y ∈ Y |∃x ∈ A, f (x) = y}
If B ⊂ Y , the inverse image of B (by f ) is a subset of X,
f −1 (B) = {x ∈ X|f (x) ∈ B}
When B = {b}, we write f −1 ({b}) = f −1 (b) and it is called the
inverse image of b by f .
Example
In the example a) above X = {1, 2, 3, 4}, Y = {a, b, c, d, e}.
The function f : X → Y, 1 7→ a, 2 7→ a, 3 7→ c, 4 7→ d
Let A = {1, 2, 4} ⊂ X, B = {a, b, c, d} ⊂ Y . Then
f (A) = {a, d}, f −1 (B) = {1, 2, 3, 4}.
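For finite sets, images and inverse images can be computed directly; a small Python sketch of the example above:

```python
# f from the example: X = {1, 2, 3, 4}, Y = {a, b, c, d, e}
f = {1: 'a', 2: 'a', 3: 'c', 4: 'd'}

def image(f, A):
    """f(A) = {f(x) | x in A}"""
    return {f[x] for x in A}

def preimage(f, B):
    """f^{-1}(B) = {x | f(x) in B}"""
    return {x for x in f if f[x] in B}

A = {1, 2, 4}
B = {'a', 'b', 'c', 'd'}
print(sorted(image(f, A)))     # ['a', 'd']
print(sorted(preimage(f, B)))  # [1, 2, 3, 4]
```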
Some laws: Let f : X → Y be a function. Let A, A1 , A2 be
subsets of X and B, B1 , B2 subsets of Y . Then
A ⊂ f −1 (f (A)), f (f −1 (B)) ⊂ B, (See the example above)
f (A1 ∪ A2 ) = f (A1 ) ∪ f (A2 ),
f (A1 ∩ A2 ) ⊂ f (A1 ) ∩ f (A2 ),
f −1 (B1 ∪ B2 ) = f −1 (B1 ) ∪ f −1 (B2 ),
f −1 (B1 ∩ B2 ) = f −1 (B1 ) ∩ f −1 (B2 ).
Injective, Surjective, Bijective
Let f : X → Y be a function.
The function f is said to be injective or one-one (1-1) if
f (x1 ) = f (x2 ) implies that x1 = x2 (That is, f sends
distinct elements of X to distinct elements of Y ).
The function f is said to be surjective or onto if for every
y ∈ Y there is an x ∈ X such that f (x) = y (That is, every
element of Y is the image under f of some element of X, it
means that f (X) = Y ).
The function f is said to be bijective (a one-one
correspondence) if it is both injective and surjective.
In the example a) above X = {1, 2, 3, 4}, Y = {a, b, c, d, e}.
The function f : X → Y, 1 7→ a, 2 7→ a, 3 7→ c, 4 7→ d
f is neither injective nor surjective.
Example
f : R → R, x 7→ x3
f is injective, since for all x, x′ ∈ R,
f (x) = f (x′ ) ⇒ x3 = (x′ )3
⇒ x = x′ .
f is surjective, since for all y ∈ R,
f (x) = y ⇒ x3 = y ⇒ x = ∛y.
Conclusion: f is bijective.
Example
f : R → R, x 7→ x2
f is not injective, since 2 ̸= −2, f (2) = 4 = f (−2).
f is not surjective, since there is no x ∈ R with x2 = −1.
Composition
Let X, Y, Z be sets and the functions f : X → Y , g : Y → Z.
The function h : X → Z, x 7→ g(f (x)) is said to be the
composition (composite) of f and g, we write h = g ◦ f (or
h = gf ).
Example
f : R → R, x 7→ x3 , g : R → R, y 7→ y + 1
Then
g ◦ f : R → R, x 7→ x3 + 1
f ◦ g : R → R, x 7→ (x + 1)3
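The example can be checked in code; a minimal sketch showing that composition is not commutative:

```python
def f(x):
    return x ** 3

def g(y):
    return y + 1

def compose(g, f):
    """Return the composition g ∘ f, i.e. x ↦ g(f(x))."""
    return lambda x: g(f(x))

gf = compose(g, f)   # x ↦ x³ + 1
fg = compose(f, g)   # x ↦ (x + 1)³

print(gf(2), fg(2))  # 9 27
```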
Remark
In general, g ◦ f ̸= f ◦ g.
Exercise
f : R → R, x 7→ 2x + 1, g : R → R, x 7→ x2 − 1
Find g ◦ f and f ◦ g.
Some laws. Given the mappings f : X → Y ,
g : Y → Z, h : Z → T . We have:
h ◦ (g ◦ f ) = (h ◦ g) ◦ f .
f ◦ 1X = f, 1Y ◦ f = f .
(g ◦ f )(A) = g(f (A)), for all A ⊂ X.
(g ◦ f )−1 (B) = f −1 (g −1 (B)), for all B ⊂ Z.
If f, g are both injective (surjective), then so, too, is g ◦ f .
If g ◦ f is injective, then so, too, is f .
If g ◦ f is surjective, then so, too, is g.
Restriction and extension
Let f : X → Y be a function and A ⊂ X.
The function g : A → Y, x 7→ g(x) = f (x) is said to be the
restriction of f to A, denoted by f |A , and f is said to be an
extension of f |A to X.
Remark
Let A ⊂ X; the function IA : A → X, x 7→ IA (x) = x is an
injection, called the canonical injection. We have
f ◦ IA = f |A .
Inverse of a function
Let f : X → Y and g : Y → X be functions.
If g ◦ f = 1X , then g is said to be a left inverse of f and f is said
to be a right inverse of g.
If g ◦ f = 1X and f ◦ g = 1Y , then g is said to be a two-sided
inverse of f (and f is a two-sided inverse of g).
Theorem
A function (with non-empty domain) is an injection if and only if
it has a left inverse. A function is a surjection if and only if it has
a right inverse.
Definition
Definition
Let V and W be vector spaces over the same field K. A
mapping (function) φ : V → W is said to be a linear
transformation (linear mapping) if the following two conditions
are satisfied for all x, y ∈ V, k ∈ K:
i) φ(x + y) = φ(x) + φ(y)
ii) φ(kx) = kφ(x)
In the case where φ is a linear transformation from V to V , we
say that φ is a linear operator on V .
Remark
The mapping φ : V → W is a linear transformation (linear
mapping) if and only if
φ(kx + ly) = kφ(x) + lφ(y), for all x, y ∈ V, k, l ∈ K.
Corollary
If φ : V → W is a linear transformation (linear mapping), then:
i) φ(0) = 0,
ii) φ(−x) = −φ(x), for all x ∈ V ,
iii) φ(k1 x1 + · · · + kn xn ) = k1 φ(x1 ) + · · · + kn φ(xn ),
for all x1 , . . . , xn ∈ V ,
k1 , . . . , kn ∈ K.
Example
a) The mapping 0 : V → W, x 7→ 0 is a linear transformation
(linear mapping), called the zero linear mapping.
b) The mapping idV : V → V, x 7→ x is a linear transformation
(linear mapping), called the identity mapping.
In general, for k ∈ K, the mapping k : V → V, x 7→ kx is a
linear transformation (linear mapping).
c) Let a, b ∈ R, the mapping f : R2 → R, (x, y) 7→ ax + by is a
linear transformation (linear mapping).
d) The mapping φ : R2 → R3 , (x, y) → (x, x + y, x − y) is a
linear transformation (linear mapping).
e) Let V = C[a; b] be the vector space of continuous functions
on the interval [a; b]. The following derivative and integral
mappings are linear transformations (linear mappings):
C[a; b] → C[a; b], f 7→ f ′
C[a; b] → R, f 7→ ∫_a^b f (x) dx
f) The mappings φ : R2 → R3 , (x, y) → (x + 1, x + y, x − y),
ψ : R2 → R3 , (x, y) → (x, x + y, 2),
h : R2 → R3 , (x, y) → (x, x + y, xy) are not linear
transformations.
Proof.
d) ∀(x, y), (x′ , y ′ ) ∈ R2 , k ∈ R:
φ((x, y) + (x′ , y ′ )) = φ((x + x′ , y + y ′ ))
= (x + x′ , x + x′ + y + y ′ , x + x′ − y − y ′ )
= (x, x + y, x − y) + (x′ , x′ + y ′ , x′ − y ′ )
= φ((x, y)) + φ((x′ , y ′ ))
φ(k(x, y)) = φ((kx, ky))
= (kx, kx + ky, kx − ky)
= k(x, x + y, x − y)
= kφ((x, y))
f) Let (1, 2) and (3, −4) ∈ R2 ,
φ((1, 2) + (3, −4)) = φ(4, −2) = (5, 2, 6)
φ(1, 2) = (2, 3, −1)
φ(3, −4) = (4, −1, 7)
φ(1, 2) + φ(3, −4) = (6, 2, 6) ̸= φ((1, 2) + (3, −4)).
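The two checks above can be run numerically; a minimal sketch (random spot-checks illustrate the definition, they do not replace the proof):

```python
import numpy as np

def phi(v):
    """The linear map of d): (x, y) ↦ (x, x + y, x − y)."""
    x, y = v
    return np.array([x, x + y, x - y])

def phi_bad(v):
    """The map of f): (x, y) ↦ (x + 1, x + y, x − y), which is not linear."""
    x, y = v
    return np.array([x + 1, x + y, x - y])

u, v, k = np.array([1.0, 2.0]), np.array([3.0, -4.0]), 5.0

# Additivity and homogeneity hold for phi ...
assert np.allclose(phi(u + v), phi(u) + phi(v))
assert np.allclose(phi(k * u), k * phi(u))
# ... but additivity fails for phi_bad, as in the counterexample above
assert not np.allclose(phi_bad(u + v), phi_bad(u) + phi_bad(v))
```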
Some properties of linear transformation (linear
mapping)
a) Composition (composite) of linear mappings:
Let V , V ′ , V ′′ be vector spaces over the field K and let
f : V → V ′ , g : V ′ → V ′′ be linear mappings. The mapping
h : V → V ′′ , x 7→ g(f (x))
is a linear mapping and is called the product of the linear
mappings f and g.
We write h = g ◦ f .
b) The image of a linearly dependent set of vectors (by a
linear mapping) is linearly dependent.
Another equivalent statement: If the image of a set of
vectors is linearly independent, then the set of vectors is
linearly independent.
Proof b). Let {u1 , u2 , . . . , um } be a set of vectors in V such
that {f (u1 ), f (u2 ), . . . , f (um )} is linearly independent. Assume
that
k1 u1 + k2 u2 + · · · + km um = 0
⇒ f (k1 u1 + k2 u2 + · · · + km um ) = 0
⇒ k1 f (u1 ) + k2 f (u2 ) + · · · + km f (um ) = 0
⇒ k1 = k2 = · · · = km = 0 ⇒ {u1 , u2 , . . . , um } is linearly
independent.
c) A linear mapping does not increase the rank of a set of
vectors.
Basic Theorem
Theorem
Let E = {e1 , e2 , . . . , en } be a basis of the K-vector space V and
{a1 , a2 , . . . , an } a set of n vectors of K-vector space W . There
is one and only one linear mapping φ : V → W such that
φ(ei ) = ai for all i = 1, 2, . . . , n.
In short: the linear mapping is completely defined by the
image of a basis.
Proof.
With a vector x = x1 e1 + x2 e2 + · · · + xn en ∈ V , we set
φ(x) = x1 a1 + x2 a2 + · · · + xn an ∈ W . Then we have the
mapping φ : V → W . It is easy to check that φ is a linear
mapping.
Suppose there is another linear mapping Ψ : V → W such that
Ψ(ei ) = ai for all i = 1, 2, . . . , n. Then, for any vector
x = x1 e1 + x2 e2 + · · · + xn en ∈ V,
Ψ(x) = Ψ(x1 e1 + x2 e2 + · · · + xn en )
= x1 Ψ(e1 ) + x2 Ψ(e2 ) + · · · + xn Ψ(en )
= x1 a1 + x2 a2 + · · · + xn an = φ(x)
⇒ Ψ = φ.
Example
Let f : R3 → R3 be a linear mapping defined as follows:
f (2, 1, 1) = (2, 2, 6), f (1, 2, 1) = (0, 2, 5), f (0, 1, 2) = (1, −1, 3).
Determine the image of the vector x = (x1 , x2 , x3 ) under f .
Solution.
f (2, 1, 1) = (2, 2, 6), f (1, 2, 1) = (0, 2, 5), f (0, 1, 2) = (1, −1, 3)
⇔
2f (e1 ) + f (e2 ) + f (e3 ) = (2, 2, 6)
f (e1 ) + 2f (e2 ) + f (e3 ) = (0, 2, 5)
f (e2 ) + 2f (e3 ) = (1, −1, 3)
⇔ f (e1 ) = (1, 1, 2), f (e2 ) = (−1, 1, 1), f (e3 ) = (1, −1, 1)
Hence
f (x1 , x2 , x3 ) = x1 f (e1 ) + x2 f (e2 ) + x3 f (e3 ) =
(x1 − x2 + x3 , x1 + x2 − x3 , 2x1 + x2 + x3 )
Another solution: write f (x1 , x2 , x3 ) =
(a11 x1 + a12 x2 + a13 x3 , a21 x1 + a22 x2 + a23 x3 , a31 x1 + a32 x2 + a33 x3 ).
From the hypothesis we deduce a system of linear equations;
solving it we get:
a11 = 1, a12 = −1, a13 = 1, a21 = 1, a22 = 1, a23 = −1,
a31 = 2, a32 = 1, a33 = 1.
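The second approach amounts to solving a matrix equation M C = B, where the columns of C are the given input vectors and the columns of B their given images; a NumPy sketch:

```python
import numpy as np

# Columns: the vectors on which f is prescribed, and their images
C = np.array([[2, 1, 0],
              [1, 2, 1],
              [1, 1, 2]], dtype=float)      # inputs as columns
B = np.array([[2, 0, 1],
              [2, 2, -1],
              [6, 5, 3]], dtype=float)      # images as columns

# M C = B  ⇒  M = B C⁻¹  (C is invertible since the inputs form a basis)
M = B @ np.linalg.inv(C)

expected = np.array([[1, -1, 1],
                     [1, 1, -1],
                     [2, 1, 1]], dtype=float)
assert np.allclose(M, expected)   # matches the coefficients a_ij above
```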
Images and inverse images of subspaces
Let φ : V → W be a linear mapping, X a subspace of V , Y a
subspace of W .
φ(X) = {φ(x) ∈ W | x ∈ X} is called the image of the subspace X.
φ−1 (Y ) = {u ∈ V | φ(u) ∈ Y } is called the inverse image of the
subspace Y .
Theorem
φ(X) is a subspace of W and φ−1 (Y ) is a subspace of V .
(Exercise)
Kernel and Image
Let φ : V → W be a linear mapping.
Ker φ = φ−1 (0) = {x ∈ V |φ(x) = 0} ⊂ V is called the
kernel of the linear mapping φ.
Im φ = φ(V ) = {φ(x)|x ∈ V } ⊂ W is called the image of
the linear mapping φ.
dim Im φ is called the rank of the linear mapping φ.
We write: rank(φ).
dim Ker φ is called the nullity of φ. We write: def(φ).
Remark
0 ≤ rank(φ) ≤ dim W , 0 ≤ def(φ) ≤ dim V .
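Both dimensions can be computed from the matrix of the mapping; a SymPy sketch using the earlier example φ(x, y) = (x, x + y, x − y), together with the standard rank–nullity relation rank(φ) + def(φ) = dim V:

```python
from sympy import Matrix

# Matrix of φ : R² → R³, (x, y) ↦ (x, x + y, x − y) in the standard bases
A = Matrix([[1, 0],
            [1, 1],
            [1, -1]])

rank = len(A.columnspace())   # dim Im φ = rank(φ)
nullity = len(A.nullspace())  # dim Ker φ = def(φ)

print(rank, nullity)          # 2 0
assert rank + nullity == A.cols   # rank–nullity: rank(φ) + def(φ) = dim V
```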
Monomorphism, Epimorphism, Isomorphism
Let φ : V → W be a linear mapping.
The linear mapping φ is said to be monomorphism
(epimorphism, isomorphism) if the mapping φ is injective
(surjective, bijective).
Property
i) The composition of monomorphisms (epimorphisms,
isomorphisms) is a monomorphism (epimorphism,
isomorphism).
ii) If φ is an isomorphism, then φ−1 : W → V is also an
isomorphism and
φ−1 ◦ φ = idV , φ ◦ φ−1 = idW
Theorem
Let φ : V → W be a linear mapping. The following statements
are equivalent:
i. φ is a monomorphism;
ii. Ker φ = 0;
iii. The image of a linearly independent set of vectors in V (by
φ) is linearly independent in W .
iv. The rank of any finite set of vectors does not change by φ;
v. The dimension of any finitely generated subspace X of V
does not change by φ;
And if V is a finitely generated vector space (dim V < ∞),
then the above statements are equivalent to
vi. rank φ = dim V .
Proof.
Diagram: i. ⇒ ii. ⇒ iii. ⇒ iv. ⇒ v. ⇒ i.
In the case V is a finitely generated vector space:
v. ⇒ vi. ⇒ i.
i. ⇒ ii. Let x ∈ Ker φ. Then φ(x) = 0 = φ(0) ⇒ x = 0 ⇒ Ker φ = 0.
ii. ⇒ iii. Assume that {u1 , u2 , . . . , um } is a linearly independent set
of vectors in V . Let
k1 φ(u1 ) + k2 φ(u2 ) + · · · + km φ(um ) = 0
⇒ φ(k1 u1 + k2 u2 + · · · + km um ) = 0
⇒ k1 u1 + k2 u2 + · · · + km um = 0
⇒ k1 = k2 = · · · = km = 0 ⇒ {φ(u1 ), φ(u2 ), . . . , φ(um )} is
a linearly independent set of vectors in W .
iii. ⇒ iv. Let {u1 , u2 , . . . , um } be a set of vectors in V .
By property c) above, rank{φ(u1 ), φ(u2 ), . . . , φ(um )} ≤
rank{u1 , u2 , . . . , um }.
We only need to prove that rank{u1 , u2 , . . . , um } ≤
rank{φ(u1 ), φ(u2 ), . . . , φ(um )}. Indeed, suppose that
{ui1 , ui2 , . . . , uik } is a maximal linearly independent subset
of {u1 , u2 , . . . , um }. From iii.,
φ(ui1 ), φ(ui2 ), . . . , φ(uik )
is a linearly independent set of vectors in W .
Therefore
rank{u1 , u2 , . . . , um } ≤ rank{φ(u1 ), φ(u2 ), . . . , φ(um )}.
iv. ⇒ v. Let U be a basis of X; then X = ⟨U ⟩. It follows that
φ(X) = ⟨φ(U )⟩. Therefore
dim X = rank(U ) = rank(φ(U )) = dim φ(X).
v. ⇒ i. Let u, v ∈ V such that φ(u) = φ(v).
We need to prove u = v.
Indeed, we can assume that u ̸= 0 (since if u = v = 0 it is
obvious) ⇒ φ(u) ̸= 0.
We have
rank{u, v} =rank{φ(u), φ(v)} = 1 ⇒ v = ku
φ(u) = φ(ku) ⇒ φ(u − ku) = 0 ⇒ φ((1 − k)u) = 0
⇒ (1 − k)φ(u) = 0 ⇒ 1 − k = 0 ⇒ k = 1 ⇒ u = v.
In the case V is a finitely generated vector space (dim V < ∞):
v. ⇒ vi. rank φ = dim Im φ = dim φ(V ) = dim V .
vi. ⇒ i. (Exercise.)
Theorem
Let φ : V → W be a linear mapping. The following statements
are equivalent:
i) φ is an epimorphism;
ii) Im φ = W ;
iii) rank φ = dim W (dim W < ∞);
iv) φ turns any set of generators of V into a set of generators of
W.
Proof. (Exercise.)
Corollary
Let φ : V → W be a linear mapping such that
dim V = dim W = n. The following statements are equivalent:
i) φ is a monomorphism;
ii) φ is an epimorphism;
iii) φ is an isomorphism.
Isomorphisms of vector spaces
Two vector spaces U and V over the same field K are said to be
isomorphic if there is an isomorphism f : U → V . If V and W
are isomorphic vector spaces, then we denote this fact in
symbols by V ≅ W .
Theorem
Let V and W be two finite dimensional vector spaces over the
same field K. Then
V ≅ W ⇔ dim V = dim W .
Proof. (Exercise)
Matrix of a Linear Mapping
Let E = {e1 , e2 , . . . , en } be a basis of the K-vector space V
and B = {u1 , u2 , . . . , um } a basis of the K-vector space W .
By the basic theorem, each linear mapping φ : V → W is
completely defined by the image of a basis: ai = φ(ei )
(i = 1, 2, . . . , n).
Assume that
a1 = φ(e1 ) = a11 u1 + a21 u2 + · · · + am1 um
a2 = φ(e2 ) = a12 u1 + a22 u2 + · · · + am2 um
.......................................
an = φ(en ) = a1n u1 + a2n u2 + · · · + amn um
The matrix
A =
[ a11 a12 . . . a1n ]
[ a21 a22 . . . a2n ]
[ . . . . . . . . . ]
[ am1 am2 . . . amn ]
is called the matrix of φ with respect to the bases E and B.
Remark
a) When V = W , the linear mapping φ : V → V is said to be
a linear operator on V . Now we only need one basis
E = {e1 , e2 , . . . , en } of V ; the matrix A of the linear
operator φ is then an n × n matrix, called the matrix of
the linear operator φ with respect to the basis E.
Let End V be the set of all linear operators on V ; then the
mapping End V → Mat(n; K) is bijective.
b) Let A = [aij ] be an m × n matrix over K, it can be
considered as the matrix of the linear mapping K n → K m
with respect to the standard bases of these vector spaces.
Any n × n matrix over K can be considered as the matrix of
a linear operator on K n with respect to its standard
basis.
Example
Find the matrix with respect to the standard bases of the linear
mapping φ : R3 → R4 defined by:
φ((x1 , x2 , x3 )) = (x1 + x2 , x1 − x2 , x3 , x1 ).
We have
φ(e1 ) = φ((1, 0, 0)) = (1, 1, 0, 1) = e′1 + e′2 + e′4
φ(e2 ) = φ((0, 1, 0)) = (1, −1, 0, 0) = e′1 − e′2
φ(e3 ) = φ((0, 0, 1)) = (0, 0, 1, 0) = e′3
Then
A =
[ 1  1 0 ]
[ 1 −1 0 ]
[ 0  0 1 ]
[ 1  0 0 ]
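The same matrix can be obtained mechanically by applying φ to the standard basis vectors and stacking the images as columns; a NumPy sketch:

```python
import numpy as np

def phi(v):
    """φ : R³ → R⁴, (x1, x2, x3) ↦ (x1 + x2, x1 − x2, x3, x1)."""
    x1, x2, x3 = v
    return np.array([x1 + x2, x1 - x2, x3, x1])

# Columns of A are the images of the standard basis vectors e1, e2, e3
A = np.column_stack([phi(e) for e in np.eye(3)])

x = np.array([2.0, 3.0, 4.0])
assert np.allclose(A @ x, phi(x))   # A[x] = [φ(x)] in the standard bases
```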
Coordinate Formula
Let φ : V → W be a linear mapping, E a basis of V and B
a basis of W . Let A be the matrix of φ with respect to the
bases E, B. For x ∈ V we have
A[x]E = [φ(x)]B
Let E ′ be another basis of V and B ′ another basis of W . If C is
the matrix of the change of basis from E to E ′ , D is the matrix
of the change of basis from B to B ′ , and A′ is the matrix of φ
with respect to the bases E ′ , B ′ , then:
A′ = D−1 AC.
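The formula A′ = D⁻¹AC can be sanity-checked numerically, using [x]E = C[x]E′ and [y]B = D[y]B′ ; a sketch with made-up invertible matrices (the specific A, C, D here are illustrative, not from the slides):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [0.0, 3.0]])     # matrix of φ in the bases E, B (illustrative)

C = np.array([[1.0, 1.0],
              [0.0, 1.0]])     # change of basis E → E′ (columns: E′ in E-coordinates)
D = np.array([[2.0, 0.0],
              [1.0, 1.0]])     # change of basis B → B′

A_prime = np.linalg.inv(D) @ A @ C

# Check on a coordinate vector: A′ [x]_{E′} should equal [φ(x)]_{B′}
x_Eprime = np.array([3.0, -1.0])
x_E = C @ x_Eprime                       # [x]_E = C [x]_{E′}
phi_x_Bprime = np.linalg.inv(D) @ (A @ x_E)
assert np.allclose(A_prime @ x_Eprime, phi_x_Bprime)
```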
Exercise
1. Which of the following mappings is a linear mapping?
a) f : R3 → R2 , (x1 , x2 , x3 ) 7→ (2x1 + x2 , x2 − x3 )
b) g : R3 → R3 , (x1 , x2 , x3 ) 7→ (x1 + x2 , x2 + x3 , x3 + x1 )
c) h : R3 → R3 , (x1 , x2 , x3 ) 7→ (x1 − x2 , x2 + 2, x3 − x1 )
d) k : R3 → R3 , (x1 , x2 , x3 ) 7→ (x1 + x2 , x2 − x3 )
e) φ : Rn [x] → Rn [x], p(x) 7→ p′ (x)
f) ψ : Rn [x] → Rn [x], p(x) 7→ ∫_0^1 p(x) dx + 2p(x)
g) ψ : Rn [x] → Rn+1 [x], p(x) 7→ xp(x)
h) ω : M (n, K) → M (n, K), N 7→ AN B (A, B ∈ M (n, K))
2. Find matrices of linear mappings with respect to standard
bases in the following exercises:
1. a, b, e, f, g
Hint: Rn [x]: The vector space of polynomials of degree ≤ n,
the standard basis: 1, x, x2 , . . . , xn (dim Rn [x] = n + 1).
3. Let f : R2 → R2 be a linear mapping defined by
f (1, 2) = (0, 1), f (1, 1) = (1, 0).
a) Determine f (x1 , x2 ).
b) Find the matrix of f with respect to the standard bases.
Eigenvalues – Eigenvectors of the matrix
Definition
Let A be an n × n matrix over the field K. λ ∈ K is said to be
an eigenvalue of A if there is a vector x ̸= 0 in K n such that
A[x] = λ[x]. Then x is called the eigenvector corresponding to
the eigenvalue λ.
Note that if x = (x1 , x2 , . . . , xn ), then [x] denotes the
corresponding column vector with entries x1 , x2 , . . . , xn .
Method to find eigenvalues, eigenvectors of a matrix
From the definition: A[x] = λ[x] ⇔ (A − λI)[x] = 0.
This is a homogeneous system of linear equations.
Remark
λ is an eigenvalue of A ⇔ λ is a solution of the equation:
det(A − λI) = 0 (the characteristic equation of A).
The corresponding polynomial PA (λ) = det(A − λI) is called
the characteristic polynomial of A.
Method to find eigenvalues, eigenvectors of a matrix:
Step 1. Solve the characteristic equation of A
det(A − λI) = 0 ⇒ eigenvalues λ1 , λ2 , . . .
Step 2. Find the eigenvectors corresponding to the found
eigenvalues
For λ1 : Solve the system of linear equations:
(A − λ1 I)[x] = 0
The non-zero solutions of the system of linear equations are
eigenvectors of A.
• Eigenspaces. Let E(λ) be the set of all eigenvectors
corresponding to the eigenvalue λ, together with the zero vector.
Then E(λ) is a vector space (the solution space of the
homogeneous system of linear equations (A − λI)[x] = 0).
E(λ) is called the eigenspace corresponding to the eigenvalue λ.
A basis of E(λ) is a basic set of solutions of the homogeneous
system of linear equations.
Theorem
If x1 , x2 , . . . , xn are eigenvectors corresponding to distinct
eigenvalues λ1 , λ2 , . . . , λn , then the set of vectors x1 , x2 , . . . , xn
is linearly independent.
Example
Find the eigenvalues and eigenvectors of the matrix
A =
[ 2 1  0 ]
[ 0 1 −1 ]
[ 0 2  4 ]
Solution:
det(A − λI) = 0 ⇔
| 2−λ   1     0  |
|  0   1−λ   −1  |
|  0    2   4−λ  |
= 0
⇔ (2 − λ)(λ2 − 5λ + 6) = 0
⇔ λ1 = 2 (a double root), λ2 = 3
• The case λ1 = 2: Let’s solve the system of linear equations
(A − 2I)[x] = 0:
[ 0  1  0 ] [x1]        x2 = 0
[ 0 −1 −1 ] [x2] = 0 ⇔ −x2 − x3 = 0
[ 0  2  2 ] [x3]        2x2 + 2x3 = 0
⇔ x1 = a ∈ R, x2 = 0, x3 = 0
The eigenvectors corresponding to the eigenvalue λ1 = 2 are:
(a, 0, 0) = a(1, 0, 0), a ∈ R, a ̸= 0
The eigenspace E(2) has a basis {(1, 0, 0)}.
• The case λ2 = 3: Let’s solve the system of linear equations
(A − 3I)[x] = 0:
[ −1  1  0 ] [x1]        −x1 + x2 = 0
[  0 −2 −1 ] [x2] = 0 ⇔ −2x2 − x3 = 0
[  0  2  1 ] [x3]        2x2 + x3 = 0
⇔ x1 = x2 = b ∈ R, x3 = −2b
The eigenvectors corresponding to the eigenvalue λ2 = 3 are:
(b, b, −2b) = b(1, 1, −2), b ∈ R, b ̸= 0
The eigenspace E(3) has a basis {(1, 1, −2)}.
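The eigenpairs above can be cross-checked with NumPy; a minimal sketch verifying the hand-computed eigenvectors via A v = λ v:

```python
import numpy as np

A = np.array([[2.0, 1.0, 0.0],
              [0.0, 1.0, -1.0],
              [0.0, 2.0, 4.0]])

eigvals = np.linalg.eigvals(A)
assert np.allclose(sorted(eigvals.real), [2.0, 2.0, 3.0])  # λ = 2 (double), 3

# Verify the hand-computed eigenvectors directly
v2 = np.array([1.0, 0.0, 0.0])    # basis of E(2)
v3 = np.array([1.0, 1.0, -2.0])   # basis of E(3)
assert np.allclose(A @ v2, 2 * v2)
assert np.allclose(A @ v3, 3 * v3)
```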
Definition
Two n × n matrices A and B are said to be similar if there is a
non-singular n × n matrix P such that A = P −1 BP .
We write A ∼ B.
Remark
a) A ∼ B ⇒ det(A) = det(B), rank(A) = rank(B).
Indeed,
det(A) = det(P −1 BP ) = det(P −1 )det(B)det(P ) = det(B).
rank(A) = rank(P −1 BP ) ≤ rank(BP ) ≤ rank(B). By a
similar argument we have
rank(B) ≤ rank(A) ⇒ rank(A) = rank(B).
b) A ∼ B ⇒ PA (λ) = PB (λ) (the converse does not hold in general).
Eigenvalues – Eigenvectors of Linear Mapping
Definition
Let V be a K-vector space and the linear mapping f : V → V .
λ ∈ K is said to be an eigenvalue of f if there is a non-zero
vector x in V such that f (x) = λx. Then x is called the
eigenvector corresponding to the eigenvalue λ.
Remark
a) λ and x are the eigenvalue and the eigenvector of the linear
transformation f : V → V if and only if λ and x are the
corresponding eigenvalue and eigenvector of a matrix A of
f with respect to some basis.
b) The matrices of f with respect to different bases are
similar.
Definition
We wish now to consider the question: when is a square matrix
similar to a diagonal matrix?
Definition
Let A be a square matrix over a field K. Then A is said to be
diagonalizable over K if it is similar to a diagonal matrix D
over K, that is, there is a non-singular matrix P over K such
that P −1 AP = D. One also says that P diagonalizes A.
The criterion for diagonalizability
Theorem
Let A be an n × n matrix over a field K. Then A is
diagonalizable if and only if A has n linearly independent
eigenvectors in K n . In that case, the elements on the principal
diagonal of D are the eigenvalues of A and the columns of P
are the corresponding eigenvectors, in the same order.
Corollary
If n × n matrix A has n distinct eigenvalues, then A is
diagonalizable.
Example
Diagonalize the following matrix (if possible):
A =
[  1  3  3 ]
[ −3 −5 −3 ]
[  3  3  1 ]
Solution.
det(A − λI) = 0 ⇔
| 1−λ    3     3  |
| −3   −5−λ   −3  |
|  3     3   1−λ  |
= 0
⇔ −(λ − 1)(λ + 2)2 = 0 ⇔ λ = 1 or λ = −2
• When λ = 1:
[  0  3  3 ] [x1]        x2 + x3 = 0
[ −3 −6 −3 ] [x2] = 0 ⇔ x1 + 2x2 + x3 = 0
[  3  3  0 ] [x3]        x1 + x2 = 0
⇔ x1 = −x2 = x3
⇔ x1 = a ∈ R, x2 = −a, x3 = a
The eigenvectors corresponding to the eigenvalue λ = 1 are:
(a, −a, a) = a(1, −1, 1), a ∈ R, a ̸= 0
The eigenspace E(1) has a basis {(1, −1, 1)}.
• When λ = −2:
[  3  3  3 ] [x1]
[ −3 −3 −3 ] [x2] = 0 ⇔ x1 + x2 + x3 = 0
[  3  3  3 ] [x3]
⇔ x1 = a ∈ R, x2 = b ∈ R, x3 = −a − b
The eigenvectors corresponding to the eigenvalue λ = −2 are:
(a, b, −a − b), a, b ∈ R, a2 + b2 ̸= 0
The eigenspace E(−2) has a basis {(1, 0, −1), (0, 1, −1)}.
It is easy to check that the vectors
(1, −1, 1), (1, 0, −1), (0, 1, −1) are linearly independent
(their determinant is ̸= 0).
We have:
D =
[ 1  0  0 ]
[ 0 −2  0 ]
[ 0  0 −2 ]
The corresponding matrix P which diagonalizes A:
P =
[  1  1  0 ]
[ −1  0  1 ]
[  1 −1 −1 ]
More results:
D =
[ 1  0  0 ]
[ 0 −2  0 ]
[ 0  0 −2 ]
with P =
[  1  1  0 ]
[ −1  0  1 ]
[  1 −1 −1 ]
D =
[ −2  0  0 ]
[  0 −2  0 ]
[  0  0  1 ]
with P =
[  0  1  1 ]
[  1  0 −1 ]
[ −1 −1  1 ]
D =
[ −2  0  0 ]
[  0  1  0 ]
[  0  0 −2 ]
with P =
[  1  1  0 ]
[  0 −1  1 ]
[ −1  1 −1 ]
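Any of these (D, P) pairs can be verified by checking A P = P D (equivalently P⁻¹AP = D); a NumPy sketch with the first pair:

```python
import numpy as np

A = np.array([[1.0, 3.0, 3.0],
              [-3.0, -5.0, -3.0],
              [3.0, 3.0, 1.0]])
P = np.array([[1.0, 1.0, 0.0],
              [-1.0, 0.0, 1.0],
              [1.0, -1.0, -1.0]])
D = np.diag([1.0, -2.0, -2.0])

# P diagonalizes A: P⁻¹ A P = D, checked as A P = P D to avoid the inverse
assert np.allclose(A @ P, P @ D)
assert np.allclose(np.linalg.inv(P) @ A @ P, D)
```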
Definitions
Definition
Let V be a K-vector space. A mapping φ : V × V → K is said
to be a bilinear form on V if the following conditions are
satisfied:
∀x, x1 , x2 , y, y1 , y2 ∈ V , ∀k ∈ K
i) φ(x1 + x2 , y) = φ(x1 , y) + φ(x2 , y);
ii) φ(kx, y) = kφ(x, y);
iii) φ(x, y1 + y2 ) = φ(x, y1 ) + φ(x, y2 );
iv) φ(x, ky) = kφ(x, y).
The bilinear form φ is said to be symmetric if:
φ(x, y) = φ(y, x) for all x, y ∈ V .
Remark
$$\varphi\left(\sum_{i=1}^{m} k_i x_i,\ \sum_{j=1}^{n} l_j y_j\right) = \sum_{i=1}^{m}\sum_{j=1}^{n} k_i l_j\, \varphi(x_i, y_j)$$
Definition
Let V be a K-vector space. A mapping q : V → K is said to be a
quadratic form on V if there is a symmetric bilinear form φ on
V such that:
q(x) = φ(x, x), ∀x ∈ V .
We say that the quadratic form q is defined by the bilinear form
φ (q associated with φ) and φ is the polar form of the quadratic
form q.
Remark
a) For a given quadratic form q, its polar bilinear form is
unique.
Proof.
q(x + y) = φ(x + y, x + y)
= φ(x, x) + 2φ(x, y) + φ(y, y)
= q(x) + 2φ(x, y) + q(y)
$$\Rightarrow \varphi(x, y) = \frac{1}{2}\big(q(x + y) - q(x) - q(y)\big)$$
b) ∀x ∈ V, ∀k ∈ K, q(kx) = k²q(x).
c) Let V = Rn; then the quadratic form q is determined by the formula:
$$q(x) = q(x_1, x_2, \dots, x_n) = \sum_{i,j=1}^{n} a_{ij} x_i x_j$$
for all x = (x1, x2, . . . , xn) ∈ Rn.
Example
V = R3 , the mapping:
φ : R3 × R3 → R, φ(x, y) = x1 y1 + x2 y2 − 2x2 y3 − 2x3 y2
∀x = (x1 , x2 , x3 ), ∀y = (y1 , y2 , y3 ) ∈ R3
is a symmetric bilinear form on R3 . The corresponding
quadratic form is
q : R3 → R, q(x) = q(x1, x2, x3) = x1² + x2² − 4x2x3
V = C[a, b] is the space of continuous real-valued functions on
the closed interval [a, b]. The formula
$$\varphi(x(t), y(t)) = \int_a^b x(t)y(t)\,dt$$
defines a symmetric bilinear form φ on C[a, b].
The corresponding quadratic form q is determined by the formula:
$$q(x(t)) = \int_a^b x(t)^2\,dt$$
Matrices of bilinear forms and quadratic forms.
Let V be a K-vector space, E = {e1 , e2 , . . . , en } a basis.
For all x, y ∈ V ,
(x)E = (x1 , x2 , . . . , xn ) and (y)E = (y1 , y2 , . . . , yn ).
• Let φ : V × V → K be a bilinear form on V . We have
$$\varphi(x, y) = \varphi\left(\sum_{i=1}^{n} x_i e_i,\ \sum_{j=1}^{n} y_j e_j\right) = \sum_{i,j=1}^{n} x_i y_j\, \varphi(e_i, e_j)$$
Let aij = φ(ei, ej); the matrix A = [aij]ₙ is called the matrix of φ with respect to the basis E.
Then
$$\varphi(x, y) = \sum_{i,j=1}^{n} a_{ij} x_i y_j = (x)_E\, A\, [y]_E$$
and this is called the coordinate formula of φ with respect to the basis E.
Remark
φ is symmetric if and only if A is symmetric.
• Let q : V → K be a quadratic form whose polar form is the symmetric bilinear form φ : V × V → K. Let A be the matrix of φ with respect to the basis E (A is symmetric). Then we also call A the matrix of the quadratic form q with respect to the basis E.
The rank of A is called the rank of φ (and q).
We write: rank(φ), rank(q).
φ (and q) are said to be singular or non-singular depending
on whether the matrix A is singular or non-singular.
We have the coordinate formula of q with respect to the basis E:
$$q(x) = \varphi(x, x) = (x)_E\, A\, [x]_E = \sum_{i,j=1}^{n} a_{ij} x_i x_j = a_{11}x_1^2 + a_{22}x_2^2 + \cdots + a_{nn}x_n^2 + \sum_{1 \le i < j \le n} 2a_{ij} x_i x_j$$
The converse: a homogeneous polynomial of the second degree in n variables
$$q(x) = b_{11}x_1^2 + b_{22}x_2^2 + \cdots + b_{nn}x_n^2 + \sum_{1 \le i < j \le n} b_{ij} x_i x_j$$
defines a quadratic form q on V (q : V → K) whose matrix in the basis E is A = [aij]ₙ, where:
$$a_{ii} = b_{ii}, \quad a_{ij} = a_{ji} = \frac{b_{ij}}{2} \quad (1 \le i < j \le n)$$
Remark
When V = K n , the quadratic form q : K n → K is also called
the quadratic form in n variables on K.
Example
The mapping:
q : R3 → R
q(x) = q(x1, x2, x3) = 2x1² + 3x2² − x3² − 2x1x2 + 6x1x3 + x2x3.
defines a quadratic form in three real variables.
Here the matrix of q with respect to the standard basis is:
$$A = \begin{pmatrix} 2 & -1 & 3 \\ -1 & 3 & \frac{1}{2} \\ 3 & \frac{1}{2} & -1 \end{pmatrix}$$
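The correspondence between the polynomial and its symmetric matrix can be made concrete in code; below is a minimal numpy sketch (numpy and the helper names `q`, `q_poly` are illustrative assumptions, not part of the course material):

```python
import numpy as np

# Symmetric matrix of q(x) = 2x1^2 + 3x2^2 - x3^2 - 2x1x2 + 6x1x3 + x2x3:
# diagonal entries are the square coefficients, off-diagonal entries are
# half the mixed coefficients (a_ij = b_ij / 2).
A = np.array([[ 2.0, -1.0,  3.0],
              [-1.0,  3.0,  0.5],
              [ 3.0,  0.5, -1.0]])

def q(x, A):
    """Evaluate the quadratic form q(x) = x^T A x."""
    x = np.asarray(x, dtype=float)
    return float(x @ A @ x)

def q_poly(x1, x2, x3):
    """The same form written out as a polynomial."""
    return 2*x1**2 + 3*x2**2 - x3**2 - 2*x1*x2 + 6*x1*x3 + x2*x3

x = np.array([1.0, 2.0, -1.0])
print(q(x, A), q_poly(*x))  # the two values agree
```

Evaluating at any point gives the same number both ways, which is exactly the statement q(x) = (x)ₑA[x]ₑ.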
Matrix of the change of basis
Let V be a K-vector space, E = {e1 , e2 , . . . , en } and
E ′ = {e′1 , e′2 , . . . , e′n } two bases.
Let φ : V × V → K be a symmetric bilinear form and
q : V → K be the quadratic form defined by φ.
Let A be the matrix of φ (and q) with respect to the basis E,
A′ the matrix of φ (and q) with respect to the basis E ′ , C the
matrix of the change of basis from E to E ′ .
For all vectors x, y ∈ V , we have:
[x]ₑ = C[x]ₑ′ and [y]ₑ = C[y]ₑ′ ⇒ (x)ₑ = (x)ₑ′ Cᵗ.
φ(x, y) = (x)ₑ A[y]ₑ = (x)ₑ′ CᵗAC [y]ₑ′ and q(x) = (x)ₑ A[x]ₑ = (x)ₑ′ CᵗAC [x]ₑ′.
⇒ A′ = CᵗAC.
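The rule A′ = CᵗAC can be checked numerically; the sketch below (numpy assumed) builds a random symmetric A and a random C, which is almost surely invertible, and confirms that the value of φ does not depend on the basis:

```python
import numpy as np

rng = np.random.default_rng(0)

# A: matrix of a symmetric bilinear form in basis E (any symmetric matrix)
M = rng.standard_normal((3, 3))
A = (M + M.T) / 2

# C: change-of-basis matrix from E to E' (a random matrix is almost
# surely invertible; we use it as an example)
C = rng.standard_normal((3, 3))

A_prime = C.T @ A @ C  # matrix of the same form in the basis E'

# phi(x, y) computed in either basis gives the same number:
x_E2 = rng.standard_normal(3)   # coordinates of x in E'
y_E2 = rng.standard_normal(3)   # coordinates of y in E'
x_E, y_E = C @ x_E2, C @ y_E2   # the same vectors in E coordinates
print(x_E @ A @ y_E, x_E2 @ A_prime @ y_E2)  # equal up to rounding
```

Note that A′ is automatically symmetric again, as the matrix of a symmetric bilinear form must be.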
Canonical forms
Definition
Let V be an n-dimensional K-vector space and the quadratic
form q : V → K. If the coordinate formula of q in the basis E
has the form:
$$q(x) = a_1 x_1^2 + a_2 x_2^2 + \cdots + a_n x_n^2 \quad \text{(possibly with zero terms)}$$
we say q has a canonical form and E is called a canonical basis for q, or a q-canonical basis.
Then the matrix of q with respect to the basis E has the diagonal form:
$$A = \begin{pmatrix} a_1 & 0 & \dots & 0 \\ 0 & a_2 & \dots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \dots & a_n \end{pmatrix}$$
Theorem
Let V be an n-dimensional vector space and a quadratic form
q : V → K. There is always a q-canonical basis.
Proof.
We use induction on n. n = 1: It’s clear.
Assuming that it holds for n − 1, we prove it true for n.
Let E be the basis of V and the coordinate formula of q with
respect to the basis E has the form:
$$q(x) = q(x_1, \dots, x_n) = a_1 x_1^2 + a_2 x_2^2 + \cdots + a_n x_n^2 + \sum_{1 \le i < j \le n} 2a_{ij} x_i x_j$$
We consider 2 cases:
• Case 1: There exists ai ≠ 0, say a1 ≠ 0. We have
$$q(x) = a_1\left(x_1^2 + 2\sum_{j \ne 1} \frac{a_{1j}}{a_1} x_1 x_j\right) + q'(x_2, \dots, x_n) = a_1\left(x_1 + \sum_{j \ne 1} \frac{a_{1j}}{a_1} x_j\right)^2 + q_1(x_2, \dots, x_n)$$
Set
$$y_1 = x_1 + \sum_{j \ne 1} \frac{a_{1j}}{a_1} x_j, \qquad y_2 = x_2,\ \dots,\ y_n = x_n$$
Let B = {u1, u2, . . . , un} be the basis of V such that (x)B = (y1, y2, . . . , yn).
Set V1 = ⟨u2 , . . . , un ⟩, dim V1 = n − 1, then
q1 (y2 , . . . , yn ) is a quadratic form on V1 , by induction, there
exists a basis {v2 , . . . , vn } of V1 such that q1 has a
canonical form:
$$q_1(y_2, \dots, y_n) = b_2 z_2^2 + \cdots + b_n z_n^2 \ \Rightarrow\ q(x) = a_1 z_1^2 + b_2 z_2^2 + \cdots + b_n z_n^2, \quad z_1 = y_1$$
The q-canonical basis is:
U = {u1 , v2 , . . . , vn }
• Case 2: a1 = a2 = · · · = an = 0, there exists aij ̸= 0
Put
xi = yi − yj
xj = yi + yj
xk = yk ∀k ̸= i, j
and return to case 1.
Example
Reduce the quadratic form to the canonical form:
q : R3 → R, q(x) = q(x1, x2, x3) = 2x1² − 4x1x2 + 2x2x3.
$$q(x) = 2(x_1^2 - 2x_1x_2 + x_2^2) - 2x_2^2 + 2x_2x_3 = 2(x_1 - x_2)^2 - 2\left(x_2^2 - x_2x_3 + \frac{x_3^2}{4}\right) + \frac{x_3^2}{2}$$
$$= 2(x_1 - x_2)^2 - 2\left(x_2 - \frac{x_3}{2}\right)^2 + \frac{x_3^2}{2}$$
We use the formula of the change of coordinates:
$$\begin{cases} y_1 = x_1 - x_2 \\[2pt] y_2 = x_2 - \dfrac{x_3}{2} \\[2pt] y_3 = x_3 \end{cases}$$
$$\begin{cases} y_1 = x_1 - x_2 \\[2pt] y_2 = x_2 - \dfrac{x_3}{2} \\[2pt] y_3 = x_3 \end{cases} \quad \Leftrightarrow \quad \begin{cases} x_1 = y_1 + y_2 + \dfrac{y_3}{2} \\[2pt] x_2 = y_2 + \dfrac{y_3}{2} \\[2pt] x_3 = y_3 \end{cases}$$
We have the canonical form:
$$q(x) = q(y_1, y_2, y_3) = 2y_1^2 - 2y_2^2 + \frac{1}{2}y_3^2$$
Find the q-canonical basis:
The original basis:
ε = {e1, e2, e3}, e1 = (1, 0, 0), e2 = (0, 1, 0), e3 = (0, 0, 1)
The q-canonical basis:
U = {u1 , u2 , u3 }
The matrix of the change of basis from ε to U is
$$C = \begin{pmatrix} 1 & 1 & \frac{1}{2} \\ 0 & 1 & \frac{1}{2} \\ 0 & 0 & 1 \end{pmatrix}$$
$$\Rightarrow u_1 = (1, 0, 0), \quad u_2 = (1, 1, 0), \quad u_3 = \left(\frac{1}{2}, \frac{1}{2}, 1\right)$$
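As a sanity check for this example, CᵗAC should be the diagonal matrix of canonical coefficients diag(2, −2, 1/2); a minimal numpy sketch (numpy assumed):

```python
import numpy as np

# q(x) = 2x1^2 - 4x1x2 + 2x2x3 has the symmetric matrix
A = np.array([[ 2.0, -2.0, 0.0],
              [-2.0,  0.0, 1.0],
              [ 0.0,  1.0, 0.0]])

# Columns of C are the canonical basis vectors u1, u2, u3 found above
C = np.array([[1.0, 1.0, 0.5],
              [0.0, 1.0, 0.5],
              [0.0, 0.0, 1.0]])

D = C.T @ A @ C
print(np.round(D, 10))  # diag(2, -2, 1/2): the canonical coefficients
```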
Example
Reduce the quadratic form to the canonical form:
q : R3 → R, q(x) = q(x1, x2, x3) = 2x1x2 + 4x2x3
We use the formula of the change of coordinates:
$$\begin{cases} x_1 = y_1 - y_2 \\ x_2 = y_1 + y_2 \\ x_3 = y_3 \end{cases}$$
$$\Rightarrow q(x) = 2(y_1^2 - y_2^2) + 4(y_1 + y_2)y_3 = 2y_1^2 - 2y_2^2 + 4y_1y_3 + 4y_2y_3 = 2(y_1^2 + 2y_1y_3 + y_3^2) - 2y_2^2 - 2y_3^2 + 4y_2y_3$$
$$q(x) = 2(y_1 + y_3)^2 - 2(y_2^2 - 2y_2y_3 + y_3^2) = 2(y_1 + y_3)^2 - 2(y_2 - y_3)^2$$
$$\begin{cases} z_1 = y_1 + y_3 \\ z_2 = y_2 - y_3 \\ z_3 = y_3 \end{cases} \ \Rightarrow\ \begin{cases} y_1 = z_1 - z_3 \\ y_2 = z_2 + z_3 \\ y_3 = z_3 \end{cases} \ \Rightarrow\ \begin{cases} x_1 = z_1 - z_2 - 2z_3 \\ x_2 = z_1 + z_2 \\ x_3 = z_3 \end{cases}$$
$$q(x) = q(z_1, z_2, z_3) = 2z_1^2 - 2z_2^2$$
Find the q-canonical basis:
The original base:
ε = {e1 , e2 , e3 }, e1 = (1, 0, 0), e2 = (0, 1, 0), e3 = (0, 0, 1)
The q-canonical basis:
U = {u1 , u2 , u3 }
The matrix of the change of basis from ε to U is
$$C = \begin{pmatrix} 1 & -1 & -2 \\ 1 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}$$
⇒ u1 = (1, 1, 0), u2 = (−1, 1, 0), u3 = (−2, 0, 1)
Exercise
Reduce the quadratic forms to canonical form and find the corresponding canonical bases:
a) q : R3 → R, q(x) = q(x1, x2, x3) = 2x1² − 4x1x3 + 4x2x3
b) q : R3 → R, q(x) = q(x1, x2, x3) = 4x1x2 + 2x2x3
Normal forms
Definition
Let V be an n-dimensional K-vector space and q : V → K a quadratic form. If the coordinate formula of q in the basis E has the form:
$$q(x) = x_1^2 + x_2^2 + \cdots + x_s^2 - x_{s+1}^2 - \cdots - x_r^2 \quad (0 \le s \le r \le n)$$
we say q has a normal form and E is called a normal basis for q
or q-normal basis.
Theorem
Let V be an n-dimensional vector space and a quadratic form
q : V → K. There is always a q-normal basis.
Indices of inertia
Theorem (Law of inertia)
The number of positive coefficients and the number of negative
coefficients in the canonical form of the quadratic form q are
invariant quantities.
Symbols
• s denotes the number of positive coefficients, called the positive index of inertia of the quadratic form q.
• t denotes the number of negative coefficients, called the negative index of inertia of the quadratic form q.
• The pair (s, t) is called the indices of inertia of the quadratic form q.
Definite quadratic forms
Definition
Let V be an n-dimensional R-vector space. The quadratic form
q : V → R is called:
- Positive definite if q(x) > 0 ∀x ∈ V, x ≠ 0;
- Positive semidefinite (never negative) if q(x) ≥ 0 ∀x ∈ V ;
- Negative definite if q(x) < 0 ∀x ∈ V, x ≠ 0;
- Negative semidefinite (never positive) if q(x) ≤ 0 ∀x ∈ V ;
- Indefinite (Isotropic) if it takes on both positive and
negative values.
Example
Determine whether the following quadratic forms are positive definite, negative definite, never negative, never positive, or indefinite:
q : R3 → R
a) q(x) = 2x1² + x2² + 3x3² — positive definite
b) q(x) = x1² + 2x2² — never negative
c) q(x) = −2x1² − 3x2² − x3² — negative definite
d) q(x) = −2x1² − 3x3² — never positive
e) q(x) = 2x1² + x2² − 3x3² — indefinite
Theorem
Let V be an n-dimensional vector space over the field of real
numbers R and the quadratic form
q:V →R
(i) q is positive definite if and only if all n coefficients in the
canonical form are positive.
(ii) q is negative definite if and only if all n coefficients in the
canonical form are negative.
Let q : V → R be a quadratic form whose matrix with respect to the basis E has the form
$$A = \begin{pmatrix} a_{11} & a_{12} & \dots & a_{1n} \\ a_{21} & a_{22} & \dots & a_{2n} \\ \dots & \dots & \dots & \dots \\ a_{n1} & a_{n2} & \dots & a_{nn} \end{pmatrix}$$
Set
$$D_1 = a_{11}, \quad D_2 = \begin{vmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{vmatrix}, \quad \dots, \quad D_n = \det(A)$$
D1, D2, . . . , Dn are called the principal subdeterminants.
Theorem (Sylvester)
Let V be an n-dimensional vector space over the field of real numbers R and let q : V → R be a quadratic form.
(i) q is positive definite if and only if all the principal
subdeterminants of the matrix of q with respect to some
basis are positive.
(ii) q is negative definite if and only if the principal
subdeterminants Dk (k = 1, 2, . . . , n) satisfy: Dk > 0 if k is
even, Dk < 0 if k is odd.
(That is, (−1)ᵏDk > 0, ∀k = 1, 2, . . . , n)
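Sylvester's criterion translates directly into code; below is a minimal numpy sketch (numpy assumed, and the function names are illustrative):

```python
import numpy as np

def is_positive_definite(A, tol=1e-12):
    """Sylvester's criterion: all leading principal minors D_k positive."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    return all(np.linalg.det(A[:k, :k]) > tol for k in range(1, n + 1))

def is_negative_definite(A, tol=1e-12):
    """(-1)^k D_k > 0 for every leading principal minor D_k."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    return all((-1) ** k * np.linalg.det(A[:k, :k]) > tol
               for k in range(1, n + 1))

# The matrix of q(x) = 4x1^2 + 3x2^2 + 3x3^2 + 6x1x2 - 4x1x3 - 2x2x3
A = np.array([[4.0, 3.0, -2.0],
              [3.0, 3.0, -1.0],
              [-2.0, -1.0, 3.0]])
print(is_positive_definite(A))  # True: D1 = 4, D2 = 3, D3 = 5
```

For badly conditioned matrices a numerically safer route is an eigenvalue or Cholesky test, but the minor-based version mirrors the theorem exactly.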
Example
Determine the sign of the quadratic form
q : R3 → R, q(x) = 4x1² + 3x2² + 3x3² + 6x1x2 − 4x1x3 − 2x2x3
We have
$$A = \begin{pmatrix} 4 & 3 & -2 \\ 3 & 3 & -1 \\ -2 & -1 & 3 \end{pmatrix}$$
$$D_1 = 4, \quad D_2 = \begin{vmatrix} 4 & 3 \\ 3 & 3 \end{vmatrix} = 3, \quad D_3 = \begin{vmatrix} 4 & 3 & -2 \\ 3 & 3 & -1 \\ -2 & -1 & 3 \end{vmatrix} = 5$$
⇒ q(x) is positive definite.
Example
Determine the sign of the quadratic form
q : R3 → R, q(x) = −x1² − 5x2² − 14x3² + 2x1x2 + 16x2x3 − 4x1x3
We have
$$A = \begin{pmatrix} -1 & 1 & -2 \\ 1 & -5 & 8 \\ -2 & 8 & -14 \end{pmatrix}$$
$$D_1 = -1, \quad D_2 = \begin{vmatrix} -1 & 1 \\ 1 & -5 \end{vmatrix} = 4, \quad D_3 = \begin{vmatrix} -1 & 1 & -2 \\ 1 & -5 & 8 \\ -2 & 8 & -14 \end{vmatrix} = -4$$
⇒ q(x) is negative definite.
Exercise
Let q be a quadratic form on R⁴ defined as follows:
$$q(x_1, x_2, x_3, x_4) = (m^2 - 2)x_1^2 + (m^2 - 1)x_2^2 + (m - 4)x_3^2 + (2 - m)x_4^2 + 2(m^2 - 2)x_1x_2 + 2(m - 4)x_3x_4$$
Find all values of m for which q is positive definite.
Definition of Inner product spaces
Definition
Let V be a vector space over the field of real numbers R (a real vector space). An inner product on V is a positive definite symmetric bilinear form
⟨, ⟩ : V × V → R, (x, y) ↦ ⟨x, y⟩
That is, it satisfies the following conditions:
∀x, x1, x2, y, y1, y2 ∈ V, ∀k ∈ R
i) ⟨x1 + x2, y⟩ = ⟨x1, y⟩ + ⟨x2, y⟩ ; ⟨kx, y⟩ = k⟨x, y⟩ ;
ii) ⟨x, y1 + y2⟩ = ⟨x, y1⟩ + ⟨x, y2⟩ ; ⟨x, ky⟩ = k⟨x, y⟩ ;
iii) ⟨x, y⟩ = ⟨y, x⟩ ;
iv) ⟨x, x⟩ ≥ 0 and ⟨x, x⟩ = 0 ⇔ x = 0.
A real inner product space is a vector space V over R together
with an inner product ⟨⟩ on V .
A finite dimensional real inner product space is called a
Euclidean space.
Remark
⟨0, x⟩ = ⟨x, 0⟩ = 0
$$\left\langle \sum_{i=1}^{n} a_i x_i,\ \sum_{j=1}^{m} b_j y_j \right\rangle = \sum_{i=1}^{n}\sum_{j=1}^{m} a_i b_j \langle x_i, y_j \rangle$$
Example
a) In the vector space Rn , we define an inner product ⟨⟩ on
Rn by the rule
∀x = (x1 , x2 , . . . , xn ), y = (y1 , y2 , . . . , yn ) ∈ Rn
⟨x, y⟩ = x1 y1 + x2 y2 + · · · + xn yn
This inner product will be referred to as the standard inner
product on Rn (It is also known as the scalar/dot product).
Rn is a real inner product space.
There are other possible inner products for this vector
space; for example, an inner product on R3 is defined by
⟨x, y⟩ = 2x1 y1 + 3x2 y2 + 4x3 y3
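Both inner products mentioned in part (a) can be sketched in numpy (numpy is an assumption; any vectors will do):

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0])
y = np.array([4.0, 5.0, 6.0])

# Standard (dot) inner product on R^3: x1y1 + x2y2 + x3y3
standard = float(x @ y)           # 1*4 + 2*5 + 3*6 = 32

# The weighted inner product <x, y> = 2x1y1 + 3x2y2 + 4x3y3
w = np.array([2.0, 3.0, 4.0])
weighted = float(np.sum(w * x * y))  # 2*4 + 3*10 + 4*18 = 110

print(standard, weighted)
```

The weighted version is still bilinear, symmetric, and positive definite because every weight is positive, which is why it qualifies as an inner product.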
b) Define an inner product ⟨⟩ on the vector space C[a, b] by the rule: ∀x(t), y(t) ∈ C[a, b],
$$\langle x(t), y(t) \rangle = \int_a^b x(t)y(t)\,dt.$$
C[a, b] is a real inner product space.
c) Define an inner product on the vector space Rn[x] of all real polynomials in x of degree less than or equal to n by the rule
$$\langle f, g \rangle = \sum_{i=1}^{n+1} f(x_i)g(x_i)$$
where x1, x2, . . . , xn+1 are distinct real numbers.
Rn[x] is a real inner product space.
Norm of a vector
Let V be a real inner product space with an inner product ⟨, ⟩.
The norm of the vector x ∈ V , denoted by ||x||, is determined by: ||x|| = √⟨x, x⟩.
A vector with norm 1 is called a unit vector.
Remark
a) ||x|| ≥ 0, ||x|| = 0 ⇔ x = 0.
b) ||kx|| = |k| · ||x||
c) For a non-zero vector x, we have a unit vector:
$$e_x = \frac{1}{||x||}\, x$$
d) The Cauchy-Schwarz Inequality
|⟨x, y⟩| ≤ ||x|| · ||y||
with equality ⇔ x and y are linearly dependent.
Proof d).
• When x = 0: It is obvious.
• Let x ≠ 0: We have ∀t ∈ R,
⟨tx + y, tx + y⟩ ≥ 0
⇔ ⟨x, x⟩t² + 2⟨x, y⟩t + ⟨y, y⟩ ≥ 0
⇔ ||x||²t² + 2⟨x, y⟩t + ||y||² ≥ 0
⇒ ∆′ ≤ 0
⇒ ⟨x, y⟩² − ||x||²||y||² ≤ 0
⇔ |⟨x, y⟩| ≤ ||x|| · ||y||.
Equality occurs ⇔ tx + y = 0 for some t ⇔ x and y are linearly dependent.
Some special cases:
• Rn:
$$|x_1y_1 + x_2y_2 + \cdots + x_ny_n| \le \sqrt{x_1^2 + x_2^2 + \cdots + x_n^2}\ \sqrt{y_1^2 + y_2^2 + \cdots + y_n^2}$$
• C[a, b]:
$$\left| \int_a^b x(t)y(t)\,dt \right| \le \left( \int_a^b x(t)^2\,dt \right)^{\frac{1}{2}} \left( \int_a^b y(t)^2\,dt \right)^{\frac{1}{2}}$$
e) The triangle inequality:
||x + y|| ≤ ||x|| + ||y|| (Exercise)
It follows: | ||x|| − ||y|| | ≤ ||x ± y|| ≤ ||x|| + ||y||.
f) The parallelogram equation:
||x + y||² + ||x − y||² = 2(||x||² + ||y||²).
Angle of two vectors
Let V be a real inner product space with an inner product ⟨, ⟩.
The angle of two non-zero vectors x, y ∈ V , denoted (x, y), is
determined by the formula
$$\cos(x, y) = \frac{\langle x, y \rangle}{||x|| \cdot ||y||}, \qquad 0 \le (x, y) \le \pi$$
Remark
$$(ax, by) = \begin{cases} (x, y), & ab > 0 \\ \pi - (x, y), & ab < 0 \end{cases}$$
Definition of orthogonal vectors
Let V be an inner product space with an inner product ⟨, ⟩ and
x, y ∈ V . x is orthogonal to y if ⟨x, y⟩ = 0.
We write x ⊥ y.
Property
i. 0 ⊥ x, x ⊥ x ⇔ x = 0;
ii. x ⊥ y ⇔ ||x + y||² = ||x||² + ||y||² (Pythagoras);
iii. x ⊥ y ⇔ (x, y) = π/2.
Orthogonal sets and orthonormal sets
Let V be an inner product space with an inner product ⟨, ⟩.
A set of vectors in V is called an orthogonal set provided all its
pairs of distinct vectors are orthogonal.
An orthonormal set is an orthogonal set with the additional
property that all its vectors have a norm of 1.
Remark
Let {x1, x2, . . . , xm} ⊂ V be an orthogonal set in which all vectors are non-zero. Then we have the orthonormal set:
$$e_1 = \frac{1}{||x_1||}x_1, \quad e_2 = \frac{1}{||x_2||}x_2, \quad \dots, \quad e_m = \frac{1}{||x_m||}x_m$$
Remark
An orthogonal set that does not contain the vector 0 is linearly independent.
Proof.
Let {x1, x2, . . . , xm} ⊂ V be an orthogonal set of non-zero vectors.
Suppose
k1 x1 + k2 x2 + · · · + km xm = 0
⇒ ⟨k1 x1 + k2 x2 + · · · + km xm , xi ⟩ = 0 ∀i = 1, . . . , m
⇒ ⟨ki xi , xi ⟩ = 0 ⇒ ki ⟨xi , xi ⟩ = 0 ⇒ ki = 0 ∀i = 1, . . . , m
The Gram-Schmidt Process
Let V be an inner product space and {u1, u2, . . . , un} ⊂ V a linearly independent subset.
• Gram-Schmidt orthogonalization
i) Set v1 = u1
ii) Set
$$v_2 = u_2 - \frac{\langle u_2, v_1 \rangle}{\langle v_1, v_1 \rangle} v_1 \ \Rightarrow\ v_2 \perp v_1$$
iii) For n > 1, set:
$$v_n = u_n - \frac{\langle u_n, v_1 \rangle}{\langle v_1, v_1 \rangle} v_1 - \frac{\langle u_n, v_2 \rangle}{\langle v_2, v_2 \rangle} v_2 - \cdots - \frac{\langle u_n, v_{n-1} \rangle}{\langle v_{n-1}, v_{n-1} \rangle} v_{n-1}$$
$$\Rightarrow v_n \perp v_1,\ v_n \perp v_2,\ \dots,\ v_n \perp v_{n-1}$$
We get the orthogonal set {v1, v2, . . . , vn}.
• Gram-Schmidt orthonormalization
i) Set
$$v_1 = u_1 \ \Rightarrow\ w_1 = \frac{1}{||v_1||} v_1.$$
ii) Set
$$v_2 = u_2 - \langle u_2, w_1 \rangle w_1 \ \Rightarrow\ w_2 = \frac{1}{||v_2||} v_2$$
iii) For n > 1, set
$$v_n = u_n - \langle u_n, w_1 \rangle w_1 - \langle u_n, w_2 \rangle w_2 - \cdots - \langle u_n, w_{n-1} \rangle w_{n-1} \ \Rightarrow\ w_n = \frac{1}{||v_n||} v_n$$
We get the orthonormal set {w1, w2, . . . , wn}.
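The orthonormalization steps above can be sketched in numpy (a minimal sketch; numpy and the function name gram_schmidt are assumptions for illustration):

```python
import numpy as np

def gram_schmidt(vectors):
    """Gram-Schmidt orthonormalization of a linearly independent list."""
    basis = []
    for u in vectors:
        v = np.asarray(u, dtype=float)
        for w in basis:                    # subtract projections <v, w> w
            v = v - np.dot(v, w) * w
        basis.append(v / np.linalg.norm(v))  # normalize to a unit vector
    return basis

u1, u2, u3 = (1, 1, 1), (0, 1, 1), (0, 0, 1)
w1, w2, w3 = gram_schmidt([u1, u2, u3])
print(np.round(w1, 6), np.round(w2, 6), np.round(w3, 6))
```

Each output vector is a unit vector orthogonal to all the previous ones, which can be confirmed by checking the pairwise dot products.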
Example
In R3 with the standard inner product, orthonormalize the set of vectors (by the Gram-Schmidt Process):
u1 = (1, 1, 1), u2 = (0, 1, 1), u3 = (0, 0, 1).
Solution:
It is easy to check that the given set of vectors is linearly independent.
v1 = u1 = (1, 1, 1)
$$\Rightarrow w_1 = \frac{1}{||v_1||} v_1 = \frac{1}{\sqrt{3}}(1, 1, 1) = \left( \frac{1}{\sqrt{3}}, \frac{1}{\sqrt{3}}, \frac{1}{\sqrt{3}} \right)$$
$$v_2 = u_2 - \langle u_2, w_1 \rangle w_1 = (0, 1, 1) - \frac{2}{\sqrt{3}} \left( \frac{1}{\sqrt{3}}, \frac{1}{\sqrt{3}}, \frac{1}{\sqrt{3}} \right) = \left( -\frac{2}{3}, \frac{1}{3}, \frac{1}{3} \right)$$
$$\Rightarrow w_2 = \frac{1}{||v_2||} v_2 = \frac{3}{\sqrt{6}} \left( -\frac{2}{3}, \frac{1}{3}, \frac{1}{3} \right) = \left( -\frac{2}{\sqrt{6}}, \frac{1}{\sqrt{6}}, \frac{1}{\sqrt{6}} \right)$$
$$v_3 = u_3 - \langle u_3, w_1 \rangle w_1 - \langle u_3, w_2 \rangle w_2 = (0, 0, 1) - \frac{1}{\sqrt{3}} \left( \frac{1}{\sqrt{3}}, \frac{1}{\sqrt{3}}, \frac{1}{\sqrt{3}} \right) - \frac{1}{\sqrt{6}} \left( -\frac{2}{\sqrt{6}}, \frac{1}{\sqrt{6}}, \frac{1}{\sqrt{6}} \right)$$
$$= (0, 0, 1) - \left( \frac{1}{3}, \frac{1}{3}, \frac{1}{3} \right) - \left( -\frac{1}{3}, \frac{1}{6}, \frac{1}{6} \right) = \left( 0, -\frac{1}{2}, \frac{1}{2} \right)$$
$$\Rightarrow w_3 = \frac{1}{||v_3||} v_3 = \sqrt{2} \left( 0, -\frac{1}{2}, \frac{1}{2} \right) = \left( 0, -\frac{1}{\sqrt{2}}, \frac{1}{\sqrt{2}} \right)$$
We get the orthonormal set {w1, w2, w3}.
Exercise
In R3 with the standard inner product, orthonormalize the sets of vectors (by the Gram-Schmidt Process):
a) {u1 = (1, 1, 0), u2 = (1, 1, 1), u3 = (1, 0, 0)}
b) {u1 = (1, 0, 1), u2 = (0, 1, 1), u3 = (1, 1, 0)}
Example
In the vector space Rn[x] of all real polynomials in x of degree less than or equal to n with the inner product:
$$\langle f, g \rangle = \int_{-1}^{1} f(x)g(x)\,dx$$
Orthonormalize the set of vectors {1, x, x²}.
Solution.
$$v_1 = 1 \ \Rightarrow\ w_1 = \frac{1}{||1||}\cdot 1 = \frac{1}{\sqrt{\int_{-1}^{1} 1^2\,dx}} = \frac{1}{\sqrt{2}}$$
$$v_2 = x - \left\langle x, \frac{1}{\sqrt{2}} \right\rangle \frac{1}{\sqrt{2}} = x - \frac{1}{2}\int_{-1}^{1} x\,dx = x$$
$$\Rightarrow w_2 = \frac{1}{||v_2||} v_2 = \frac{x}{\sqrt{\int_{-1}^{1} x^2\,dx}} = \sqrt{\frac{3}{2}}\, x$$
$$v_3 = x^2 - \left\langle x^2, \frac{1}{\sqrt{2}} \right\rangle \frac{1}{\sqrt{2}} - \left\langle x^2, \sqrt{\frac{3}{2}}\, x \right\rangle \sqrt{\frac{3}{2}}\, x = x^2 - \frac{1}{2}\int_{-1}^{1} x^2\,dx - \frac{3}{2}\, x \int_{-1}^{1} x^3\,dx = x^2 - \frac{1}{3}$$
$$\Rightarrow w_3 = \frac{1}{||v_3||} v_3 = \frac{x^2 - \frac{1}{3}}{\sqrt{\int_{-1}^{1} \left( x^2 - \frac{1}{3} \right)^2 dx}} = \frac{1}{\sqrt{\frac{8}{45}}} \left( x^2 - \frac{1}{3} \right) = \sqrt{\frac{45}{8}} \left( x^2 - \frac{1}{3} \right)$$
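The resulting set {w1, w2, w3} can be verified numerically by approximating the integral inner product on [−1, 1] with the trapezoidal rule (a sketch; numpy is assumed):

```python
import numpy as np

# Approximate <f, g> = integral_{-1}^{1} f(x)g(x) dx on a fine grid
xs = np.linspace(-1.0, 1.0, 20001)
dx = xs[1] - xs[0]

def inner(f, g):
    y = f(xs) * g(xs)
    return float(np.sum((y[:-1] + y[1:]) / 2) * dx)  # trapezoidal rule

w1 = lambda x: np.full_like(x, 1 / np.sqrt(2))
w2 = lambda x: np.sqrt(3 / 2) * x
w3 = lambda x: np.sqrt(45 / 8) * (x**2 - 1 / 3)

print(inner(w1, w1), inner(w2, w2), inner(w3, w3))  # each ~ 1
print(inner(w1, w2), inner(w1, w3), inner(w2, w3))  # each ~ 0
```

Up to a rescaling these are the first three Legendre polynomials, which are orthogonal with respect to exactly this inner product.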
Definitions
Definition
Let V be an n-dimensional (real) inner product space. The
basis B = {e1 , e2 , . . . , en } of V is said to be an orthogonal
(orthonormal) basis if B is orthogonal (orthonormal).
Property
i. For a given basis, using the Gram-Schmidt process, we can construct an orthogonal (orthonormal) basis.
ii. Any orthogonal (orthonormal) set can be extended to an orthogonal (orthonormal) basis.
iii. Let B = {e1 , e2 , . . . , en } be the orthonormal basis of V , x,
y ∈ V , (x)B = (x1 , x2 , . . . , xn ), (y)B = (y1 , y2 , . . . , yn ). We
have
a) ⟨x, ei⟩ = xi (i = 1, 2, . . . , n)
b) ⟨x, y⟩ = x1y1 + x2y2 + · · · + xnyn
c) $||x|| = \sqrt{x_1^2 + \cdots + x_n^2}$
d) $\cos(x, y) = \dfrac{x_1y_1 + \cdots + x_ny_n}{\sqrt{x_1^2 + \cdots + x_n^2}\ \sqrt{y_1^2 + \cdots + y_n^2}}$
Orthogonal matrix
A square matrix A is said to be orthogonal if
A·Aᵗ = I (i.e. Aᵗ = A⁻¹)
Remark
The matrix of the change of basis from one orthonormal basis to another orthonormal basis is an orthogonal matrix (Exercise).
Orthogonal complements
Let V be a real inner product space, X and Y subspaces of V .
Definition
The vector u ∈ V is said to be orthogonal to X if u ⊥ x,
∀x ∈ X. We write u ⊥ X.
X is said to be orthogonal to Y if ∀x ∈ X, ∀y ∈ Y we have
x ⊥ y. We write X ⊥ Y .
Theorem and Definition
Let V be a finite dimensional real inner product vector space
and L a subspace of V . Then every vector x ∈ V can be written
uniquely in the form: x = x′ + y, where x′ ∈ L, y ⊥ L.
The vector x′ is called the orthogonal projection of x on L and
y is called the orthogonal component of x with respect to L.
Proof.
• L = {0}: x = 0 + x (x′ = 0, y = x).
• L ≠ {0}:
We need to find x′ ∈ L satisfying the requirement.
Let {e1, e2, . . . , em} be an orthonormal basis of L.
Then x′ = x1e1 + x2e2 + · · · + xmem.
We have to determine (x1, . . . , xm).
Since y = (x − x′) ⊥ L, we have ⟨x − x′, ei⟩ = 0, i = 1, 2, . . . , m.
Therefore ⟨x, ei⟩ = ⟨x′, ei⟩.
⇒ ⟨x, ei⟩ = ⟨x1e1 + x2e2 + · · · + xmem, ei⟩.
⇒ ⟨x, ei⟩ = xi.
Thus the pair x′ and y = x − x′ satisfying the requirement exists and is unique.
Definition
Let V be a finite dimensional real inner product vector space
and L a subspace of V . The set of all vectors of V orthogonal
to L is called the orthogonal complement of L. We write L⊥ .
Remark
It is easy to see that L⊥ is a subspace of V .
Property
Let L, L1 , L2 be subspaces of V . We have:
i) V = L ⊕ L⊥ , and then dim V = dim L + dim L⊥ .
ii) (L⊥ )⊥ = L.
iii) L1 ⊂ L2 ⇒ L2⊥ ⊂ L1⊥.
iv) (L1 + L2)⊥ = L1⊥ ∩ L2⊥.
(L1 ∩ L2)⊥ = L1⊥ + L2⊥.
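The decomposition x = x′ + y from the theorem above can be sketched with numpy: QR factorization orthonormalizes a spanning set of L, and the projection formula x′ = Σ⟨x, ei⟩ei becomes Q(Qᵗx). The function name and the example subspace are illustrative assumptions:

```python
import numpy as np

def orthogonal_decomposition(x, L):
    """Split x = x_proj + y with x_proj in span(columns of L) and y ⊥ L."""
    L = np.asarray(L, dtype=float)
    # Orthonormalize the columns of L (reduced QR does Gram-Schmidt for us)
    Q, _ = np.linalg.qr(L)
    x_proj = Q @ (Q.T @ x)   # x' = sum of <x, e_i> e_i over the columns e_i
    return x_proj, x - x_proj

x = np.array([1.0, 2.0, 3.0])
L = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.0, 0.0]])   # L = the xy-plane in R^3
x_proj, y = orthogonal_decomposition(x, L)
print(x_proj, y)  # [1, 2, 0] and [0, 0, 3]
```

The orthogonal component y is perpendicular to every vector of L, consistent with V = L ⊕ L⊥.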
Orthogonal diagonalization of real symmetric matrices
Definition
Let A be a real symmetric matrix. If there exists an orthogonal
matrix P such that P t AP is a diagonal matrix, then A is said
to be orthogonally diagonalizable and P orthogonally
diagonalizes A.
Remark
a) Every real symmetric matrix can be orthogonally
diagonalized.
b) If A is a real symmetric matrix, then the eigenvectors with
different eigenvalues are orthogonal to each other.
• The orthogonal diagonalization Process:
Step 1. Solve the characteristic equation of A
det(A − λI)= 0 ⇒ eigenvalues λ1 , λ2 , . . .
Step 2. Find the eigenvectors corresponding to the found
eigenvalues
For λ1: Solve the system of linear equations:
(A − λ1 I)[x] = 0
Find the basis set of solutions, use the Gram-Schmidt
orthogonal process to get the orthonormal set.
Step 3. Form the matrix P whose columns are the vectors of the orthonormal set, and deduce the corresponding diagonal form of A (on the main diagonal are the eigenvalues; their order corresponds to the order of the column vectors).
Example
Orthogonally diagonalize the real symmetric matrix (Find the
orthogonal matrix P such that P t AP is a diagonal matrix)
$$A = \begin{pmatrix} 2 & 1 & 1 \\ 1 & 2 & 1 \\ 1 & 1 & 2 \end{pmatrix}$$
Solution.
Step 1:
$$\begin{vmatrix} 2-\lambda & 1 & 1 \\ 1 & 2-\lambda & 1 \\ 1 & 1 & 2-\lambda \end{vmatrix} = 0 \ \Leftrightarrow\ (\lambda - 1)^2(4 - \lambda) = 0 \ \Leftrightarrow\ \lambda = 1,\ \lambda = 4$$
Step 2:
• When λ = 1: Solve the system of linear equations:
$$(A - I)X = 0 \ \Leftrightarrow\ \begin{pmatrix} 1 & 1 & 1 \\ 1 & 1 & 1 \\ 1 & 1 & 1 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} = 0 \ \Leftrightarrow\ x_1 + x_2 + x_3 = 0 \ \Leftrightarrow\ \begin{cases} x_1 = -a - b \\ x_2 = a \\ x_3 = b \end{cases}$$
We have the basis set of solutions:
$$u_1 = \begin{pmatrix} -1 \\ 1 \\ 0 \end{pmatrix}, \quad u_2 = \begin{pmatrix} -1 \\ 0 \\ 1 \end{pmatrix}.$$
Use the Gram-Schmidt orthogonal process to get the orthonormal set:
$$w_1 = \begin{pmatrix} -\frac{1}{\sqrt{2}} \\[2pt] \frac{1}{\sqrt{2}} \\[2pt] 0 \end{pmatrix}, \quad w_2 = \begin{pmatrix} -\frac{1}{\sqrt{6}} \\[2pt] -\frac{1}{\sqrt{6}} \\[2pt] \frac{2}{\sqrt{6}} \end{pmatrix}$$
• When λ = 4: Solve the system of linear equations:
$$(A - 4I)X = 0 \ \Leftrightarrow\ \begin{pmatrix} -2 & 1 & 1 \\ 1 & -2 & 1 \\ 1 & 1 & -2 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} = 0 \ \Leftrightarrow\ \begin{cases} -2x_1 + x_2 + x_3 = 0 \\ x_1 - 2x_2 + x_3 = 0 \\ x_1 + x_2 - 2x_3 = 0 \end{cases} \ \Leftrightarrow\ \begin{cases} x_1 = c \\ x_2 = c \\ x_3 = c \end{cases}$$
We have the basis set of solutions:
$$u_3 = \begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix}.$$
Orthonormalize {u3}:
$$w_3 = \begin{pmatrix} \frac{1}{\sqrt{3}} \\[2pt] \frac{1}{\sqrt{3}} \\[2pt] \frac{1}{\sqrt{3}} \end{pmatrix}.$$
Step 3: Make the matrix P and PᵗAP:
$$P = \begin{pmatrix} -\frac{1}{\sqrt{2}} & -\frac{1}{\sqrt{6}} & \frac{1}{\sqrt{3}} \\[2pt] \frac{1}{\sqrt{2}} & -\frac{1}{\sqrt{6}} & \frac{1}{\sqrt{3}} \\[2pt] 0 & \frac{2}{\sqrt{6}} & \frac{1}{\sqrt{3}} \end{pmatrix}, \quad P^t A P = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 4 \end{pmatrix}$$
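In practice the whole process above collapses to one call to a symmetric eigensolver; a minimal numpy sketch for the matrix A of this example (numpy assumed):

```python
import numpy as np

A = np.array([[2.0, 1.0, 1.0],
              [1.0, 2.0, 1.0],
              [1.0, 1.0, 2.0]])

# eigh is the solver for real symmetric matrices: it returns the
# eigenvalues in ascending order and an orthogonal matrix P whose
# columns are orthonormal eigenvectors.
eigvals, P = np.linalg.eigh(A)
print(np.round(eigvals, 10))       # [1, 1, 4]
D = P.T @ A @ P
print(np.round(D, 10))             # diag(1, 1, 4)
print(np.round(P.T @ P, 10))       # identity: P is orthogonal
```

The eigenvector columns may differ from the hand-computed w1, w2, w3 by sign or by a rotation inside the λ = 1 eigenspace, but PᵗAP is the same diagonal matrix.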
Example
Orthogonally diagonalize the real symmetric matrix (find the orthogonal matrix P such that P^t AP is a diagonal matrix)
A = \begin{pmatrix} 6 & -2 & -1 \\ -2 & 6 & -1 \\ -1 & -1 & 5 \end{pmatrix}
Solution. (Exercise)
Step 1:
\begin{vmatrix} 6-λ & -2 & -1 \\ -2 & 6-λ & -1 \\ -1 & -1 & 5-λ \end{vmatrix} = 0 ⇔ (3 − λ)(λ − 8)(λ − 6) = 0 ⇔ λ = 3, λ = 8, λ = 6
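The eigenvalues found in Step 1 can be double-checked with NumPy (an optional sanity check, not part of the slides):

```python
import numpy as np

A = np.array([[ 6, -2, -1],
              [-2,  6, -1],
              [-1, -1,  5]], dtype=float)

# For a real symmetric matrix, eigvalsh returns the eigenvalues in
# ascending order; here they should be 3, 6, 8, matching the roots
# of (3 - lambda)(lambda - 8)(lambda - 6).
eigenvalues = np.linalg.eigvalsh(A)
```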
Exercise
Orthogonally diagonalize the real symmetric matrices (find the orthogonal matrix P such that P^t AP is a diagonal matrix):
A = \begin{pmatrix} 4 & 1 & 1 \\ 1 & 4 & 1 \\ 1 & 1 & 4 \end{pmatrix}, \quad B = \begin{pmatrix} 0 & 0 & 1 \\ 0 & 1 & 0 \\ 1 & 0 & 0 \end{pmatrix}, \quad C = \begin{pmatrix} 0 & 0 & 0 \\ 0 & 2 & 2 \\ 0 & 2 & 2 \end{pmatrix},
D = \begin{pmatrix} 0 & 0 & 1 \\ 0 & 1 & 2 \\ 1 & 2 & 2 \end{pmatrix}, \quad E = \begin{pmatrix} 0 & 0 & 1 \\ 0 & -1 & 1 \\ 1 & 1 & -1 \end{pmatrix}, \quad F = \begin{pmatrix} 0 & 0 & 1 \\ 0 & 1 & -1 \\ 1 & -1 & 1 \end{pmatrix}
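Answers to these exercises can be checked against NumPy, since `np.linalg.eigh` already returns an orthonormal set of eigenvector columns for a real symmetric matrix. A sketch (the helper name is mine):

```python
import numpy as np

def orthogonally_diagonalize(A):
    """Return (P, D) with P orthogonal and P^t A P = D diagonal.
    For real symmetric A, np.linalg.eigh returns orthonormal eigenvector
    columns, so the Gram-Schmidt step is already done inside each
    eigenspace."""
    eigenvalues, P = np.linalg.eigh(A)
    return P, np.diag(eigenvalues)

# Matrix A of the exercise; its eigenvalues are 3 (double) and 6.
A = np.array([[4, 1, 1],
              [1, 4, 1],
              [1, 1, 4]], dtype=float)
P, D = orthogonally_diagonalize(A)
```

Note that the hand computation may produce a different (equally valid) P, since the orthonormal basis of a repeated eigenspace is not unique; only P^t AP = D must agree up to the ordering of eigenvalues.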
Orthogonal operators and symmetric operators
a) Orthogonal operators
Definition
Let V be a real inner product vector space. The linear operator
f : V → V is said to be an orthogonal operator
(transformation) on V if for all x, y ∈ V we have:
⟨f(x), f(y)⟩ = ⟨x, y⟩.
That is, f preserves the inner product.
Theorem
Let V be a finite dimensional real inner product vector space
and f : V → V a linear operator. The following statements are
equivalent:
i. f is an orthogonal operator;
ii. f preserves the norm of a vector, that is
∀x ∈ V, ||f(x)|| = ||x||;
iii. The image of an orthonormal basis (by f) is an orthonormal basis;
iv. The matrix of f with respect to an orthonormal basis is an
orthogonal matrix.
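These equivalences can be illustrated numerically for a concrete orthogonal operator; a plane rotation is a standard example (my choice, not from the slides):

```python
import numpy as np

theta = 0.7
# A plane rotation: the standard example of an orthogonal operator on R^2.
A = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

x = np.array([3.0, -1.0])
y = np.array([2.0,  5.0])

# iv. A^t A = I (A is an orthogonal matrix)
# ii. ||Ax|| = ||x|| (norms are preserved)
# i.  <Ax, Ay> = <x, y> (inner products are preserved)
```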
Proof.
i. ⇒ ii. ∀x ∈ V,
||f(x)|| = \sqrt{⟨f(x), f(x)⟩} = \sqrt{⟨x, x⟩} = ||x||
ii. ⇒ i. ∀x, y ∈ V,
⟨f(x), f(y)⟩ = (1/2)(||f(x) + f(y)||² − ||f(x)||² − ||f(y)||²)
= (1/2)(||f(x + y)||² − ||f(x)||² − ||f(y)||²)
= (1/2)(||x + y||² − ||x||² − ||y||²)
= ⟨x, y⟩
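The identity used in ii. ⇒ i. recovers the inner product from norms alone, which is easy to verify numerically (a quick sketch; the helper name is mine):

```python
import numpy as np

def inner_from_norms(x, y):
    """The identity from the proof of ii. => i.:
    <x, y> = (||x + y||^2 - ||x||^2 - ||y||^2) / 2."""
    n = np.linalg.norm
    return 0.5 * (n(x + y) ** 2 - n(x) ** 2 - n(y) ** 2)

x = np.array([1.0, 2.0, -1.0])
y = np.array([0.0, 3.0,  4.0])
# inner_from_norms(x, y) agrees with the dot product x . y
```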
i. ⇒ iii. Suppose that {e1, e2, ..., en} is an orthonormal basis of V.
We have:
⟨f(ei), f(ej)⟩ = ⟨ei, ej⟩ = 1 if i = j, 0 if i ≠ j.
Then {f(e1), f(e2), ..., f(en)} is also an orthonormal basis
of V.
iii. ⇒ iv. Assume that {e1, e2, ..., en} is an orthonormal basis of V.
By the hypothesis, {f(e1), f(e2), ..., f(en)} is also an
orthonormal basis of V.
It is easy to see that the matrix A of f is the matrix of the change of basis
from the orthonormal basis {e1, e2, ..., en} to the
orthonormal basis {f(e1), f(e2), ..., f(en)}, so A is an
orthogonal matrix.
iv. ⇒ i. Suppose that {e1, e2, ..., en} is an orthonormal basis of V
and A is the matrix of f with respect to the basis
{e1, e2, ..., en}. By the hypothesis, A is an orthogonal
matrix.
Let x, y ∈ V, set
[x]_E = (x_1, x_2, ..., x_n)^t, [y]_E = (y_1, y_2, ..., y_n)^t
⇒ ⟨x, y⟩ = x1y1 + x2y2 + · · · + xnyn = [x]^t_E [y]_E
Note that:
[f(x)]_E = A[x]_E
[f(y)]_E = A[y]_E.
On the other hand, one has:
⟨f(x), f(y)⟩ = [f(x)]^t_E [f(y)]_E = [x]^t_E A^t A [y]_E
= [x]^t_E I [y]_E = [x]^t_E [y]_E = ⟨x, y⟩
b) Symmetric operators
Definition
Let V be a real inner product vector space. The linear operator
f : V → V is said to be a symmetric operator (transformation)
on V if for all x, y ∈ V we have:
⟨f(x), y⟩ = ⟨x, f(y)⟩
Theorem
Let V be a finite dimensional real inner product vector space
and f : V → V a linear operator. Then f is a symmetric
operator if and only if the matrix of f with respect to an
orthonormal basis is a symmetric matrix.
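The defining property is easy to see in coordinates: if S = S^t is the matrix of f in an orthonormal basis, then ⟨f(x), y⟩ = (Sx)^t y = x^t S^t y = x^t (Sy) = ⟨x, f(y)⟩. A small numerical sketch (the matrix S is my example):

```python
import numpy as np

# Matrix of f in the standard (orthonormal) basis of R^2; symmetric.
S = np.array([[2.0, 1.0],
              [1.0, 3.0]])

x = np.array([1.0, -2.0])
y = np.array([4.0,  0.5])

lhs = (S @ x) @ y   # <f(x), y>
rhs = x @ (S @ y)   # <x, f(y)>
```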
Contents of the Final Exam
1) Find the dimensions and bases of subspaces generated by a
set of vectors and of the solution space of a homogeneous
system of linear equations.
2) Dimensions of the sum and intersection of subspaces.
3) Determine whether a linear mapping f : V → W is a
monomorphism (Ker f = {0}), an epimorphism, or an isomorphism.
4) Matrices of linear mappings; dim Im f, dim Ker f.
5) Definite quadratic forms (positive, negative).
6) Orthogonally diagonalize a real symmetric matrix.
7) One problem on vector spaces or inner product spaces (1 - 1.5
points)