EE611
Deterministic Systems
Vector Spaces and Basis Changes
Kevin D. Donohue
Electrical and Computer Engineering
University of Kentucky
Matrix Vector Multiplication
Let x be an n×1 (column) vector and y be a 1×n (row) vector:
Dot (inner) product: yx = c = |x||y|cos(θ), where c is a scalar (1×1) and θ is the angle between y and x.
Projection: the projection of y onto x has magnitude |y|cos(θ) = yx/|x|.
Outer product: xy = A, where A is an n×n matrix.
Matrix-Vector Multiplication: Let x be an n×1 vector and A be an n×n matrix:

$$A x = \begin{bmatrix} a_1 \\ a_2 \\ \vdots \\ a_n \end{bmatrix} x = \begin{bmatrix} a_1 x \\ a_2 x \\ \vdots \\ a_n x \end{bmatrix}$$

$$x' A = x' \begin{bmatrix} a_1 & a_2 & \cdots & a_n \end{bmatrix} = \begin{bmatrix} x' a_1 & x' a_2 & \cdots & x' a_n \end{bmatrix}$$

where ' denotes transpose, and the vectors aᵢ denote a row-vector partition in the first expression and a column-vector partition in the second expression.
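As a numerical check of the two partition views (a NumPy sketch; the matrix and vector values are arbitrary):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
x = np.array([5.0, 6.0])

# Row partition: the i-th entry of Ax is the row a_i times x
Ax_rows = np.array([A[i, :] @ x for i in range(A.shape[0])])

# Column partition: x'A is the row vector of products x'a_i with each column
xA_cols = np.array([x @ A[:, j] for j in range(A.shape[1])])

print(Ax_rows)   # matches A @ x
print(xA_cols)   # matches x @ A
```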
Linear Independence
Consider an n-dimensional real linear space ℝⁿ. A set of vectors {x₁, x₂, …, xₘ} ⊂ ℝⁿ is linearly dependent (l.d.) iff there exists a set of real numbers {α₁, α₂, …, αₘ}, not all zero, such that

$$\alpha_1 x_1 + \alpha_2 x_2 + \cdots + \alpha_m x_m = 0$$

Otherwise the vectors are linearly independent (l.i.)
Show that if a set of vectors are l.d., then at least one of the
vectors can be expressed as a linear combination of the others.
The dimension of the linear space is the maximal number of l.i.
vectors in the space.
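The definition can be tested numerically: stack the vectors as columns and compare the matrix rank to the number of vectors (a sketch with hypothetical vectors):

```python
import numpy as np

# Candidate vectors; note x3 = x1 + x2
x1 = np.array([1.0, 0.0, 1.0])
x2 = np.array([0.0, 1.0, 1.0])
x3 = np.array([1.0, 1.0, 2.0])

# The set is l.i. iff the rank of the stacked matrix equals the number of vectors
X = np.column_stack((x1, x2, x3))
rank = np.linalg.matrix_rank(X)
print(rank)   # 2 < 3, so the set is l.d. (alpha = [1, 1, -1] gives 0)
```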
Basis and Representation
A set of l.i. vectors in ℜn is a basis iff every vector in ℜn can be
expressed as a unique linear combination of these vectors.
Given a basis {q₁, q₂, …, qₙ} for ℝⁿ, every vector x in ℝⁿ can be expressed as:

$$x = \alpha_1 q_1 + \alpha_2 q_2 + \cdots + \alpha_n q_n = \begin{bmatrix} q_1 & q_2 & \cdots & q_n \end{bmatrix} \begin{bmatrix} \alpha_1 \\ \alpha_2 \\ \vdots \\ \alpha_n \end{bmatrix} = Q \alpha$$

where α = [α₁ α₂ … αₙ]' is called the representation of x with respect to the basis.
Example
Find the representation, in terms of basis P, of the point whose representation wrt the orthonormal basis Q is $[-0.5 \;\; 1.5]'$, where

$$q_1 = \begin{bmatrix} 1 \\ 0 \end{bmatrix}, \quad q_2 = \begin{bmatrix} 0 \\ 1 \end{bmatrix}, \quad p_1 = \begin{bmatrix} 1 \\ 0 \end{bmatrix}, \quad p_2 = \begin{bmatrix} 1/2 \\ 1/2 \end{bmatrix}$$
Norms
The generalization of magnitude or length is given by a function referred to as a norm. Any real-valued function ∥·∥ qualifies as a norm provided it satisfies:

$$\|x\| \ge 0 \;\; \forall x, \quad \text{and} \quad \|x\| = 0 \text{ iff } x = 0 \qquad \text{(non-negativity)}$$

$$\|\alpha x\| = |\alpha| \, \|x\| \text{ for any real scalar } \alpha \qquad \text{(scalability)}$$

$$\|x_1 + x_2\| \le \|x_1\| + \|x_2\| \;\; \forall x_1, x_2 \qquad \text{(triangle inequality)}$$
Popular Norms
Given $x = [x_1 \; x_2 \; \cdots \; x_n]'$:

The 1-norm is defined by
$$\|x\|_1 := \sum_{i=1}^{n} |x_i|$$

The 2-norm (Euclidean norm):
$$\|x\|_2 := \left( \sum_{i=1}^{n} x_i^2 \right)^{1/2} = (x' x)^{1/2}$$

The infinity-norm:
$$\|x\|_\infty := \max_i |x_i|$$
Why do you think this is called the infinity norm? Hint: What
would a 3-norm, … 100-norm look like?
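A quick illustration of the three norms, including the hint about p-norms for growing p (the vector is arbitrary):

```python
import numpy as np

x = np.array([3.0, -4.0, 0.0])

norm1   = np.abs(x).sum()    # 1-norm: 7.0
norm2   = np.sqrt(x @ x)     # 2-norm: 5.0
norminf = np.abs(x).max()    # infinity-norm: 4.0
print(norm1, norm2, norminf)

# As p grows, the p-norm approaches max|x_i| -- hence "infinity norm"
for p in (3, 10, 100):
    print(p, np.sum(np.abs(x) ** p) ** (1.0 / p))
```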
Orthonormal
Vector x is normalized iff its Euclidean norm is 1 (its self dot product is 1).
Vectors xᵢ and xⱼ are orthogonal iff their dot product is 0.
A set of (column) vectors {x₁, x₂, …, xₘ} is orthonormal iff

$$x_i' x_j = \begin{cases} 0 & \text{if } i \ne j \\ 1 & \text{if } i = j \end{cases} \qquad \forall i, j$$
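Stacking the vectors as columns of X, the condition above is equivalent to X'X = I (a sketch with an arbitrary rotated basis of ℝ²):

```python
import numpy as np

# Two orthonormal vectors in R^2 (a rotation of the standard basis)
x1 = np.array([0.6, 0.8])
x2 = np.array([-0.8, 0.6])

X = np.column_stack((x1, x2))
# Entry (i, j) of X'X is x_i' x_j: ones on the diagonal, zeros elsewhere
print(np.allclose(X.T @ X, np.eye(2)))   # True
```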
Orthonormalization
Given a set of l.i. vectors {e₁, e₂, …, eₙ}, the Schmidt orthonormalization procedure can be used to derive an orthonormal set of vectors {q₁, q₂, …, qₙ} forming a basis for the same linear space. At each step, project and subtract, then normalize:

$$u_1 := e_1, \qquad q_1 := \frac{u_1}{\|u_1\|}$$

$$u_2 := e_2 - (q_1' e_2) q_1, \qquad q_2 := \frac{u_2}{\|u_2\|}$$

$$u_3 := e_3 - (q_1' e_3) q_1 - (q_2' e_3) q_2, \qquad q_3 := \frac{u_3}{\|u_3\|}$$

$$\vdots$$

$$u_n := e_n - \sum_{k=1}^{n-1} (q_k' e_n) q_k, \qquad q_n := \frac{u_n}{\|u_n\|}$$
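The procedure translates nearly line-for-line into code. A sketch (the function name is mine, not from the slides):

```python
import numpy as np

def schmidt_orthonormalize(E):
    """Columns of E are l.i. vectors e_1..e_n; returns Q with orthonormal columns."""
    Q = np.zeros_like(E, dtype=float)
    for i in range(E.shape[1]):
        u = E[:, i].astype(float).copy()
        for k in range(i):                    # project and subtract
            u -= (Q[:, k] @ E[:, i]) * Q[:, k]
        Q[:, i] = u / np.linalg.norm(u)       # normalize
    return Q

E = np.array([[1.0, 1.0],
              [0.0, 1.0]])
Q = schmidt_orthonormalize(E)
print(np.allclose(Q.T @ Q, np.eye(2)))   # True: the q_i are orthonormal
```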
Linear Algebraic Equations
Consider a set of m linear equations with n unknowns:

$$y_1 = a_{11} x_1 + a_{12} x_2 + \cdots + a_{1n} x_n$$
$$y_2 = a_{21} x_1 + a_{22} x_2 + \cdots + a_{2n} x_n$$
$$\vdots$$
$$y_m = a_{m1} x_1 + a_{m2} x_2 + \cdots + a_{mn} x_n$$

or compactly y = Ax.

The range space of A is the set of all vectors resulting from all possible linear combinations of the columns of A:

$$y = a_1 x_1 + a_2 x_2 + \cdots + a_n x_n, \qquad a_i = [a_{1i} \; a_{2i} \; \cdots \; a_{mi}]'$$
The rank of the coefficient matrix A, rank(A), equals its number of l.i. columns (or rows); rank(A) is also denoted ρ(A).
➢ If ρ(A) = n = m, a unique solution x exists for any y
➢ If ρ(A) ≤ m < n, many solutions x exist for a given y (underdetermined)
➢ If ρ(A) ≤ n < m, no solution x may exist for some y (overdetermined)
Nullity
The vector x is a null vector of A iff Ax=0
The null space of A is the set of all null vectors.
The nullity of A is the maximum number of l.i. vectors in its null
space (i.e. dimension of null space).
nullity (A) = n - ρ(A)
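Both rank and a null-space basis can be computed from the SVD, since the right singular vectors for zero singular values span the null space (a sketch with an arbitrary rank-1 matrix):

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0]])    # rank 1, n = 3

rank = np.linalg.matrix_rank(A)
nullity = A.shape[1] - rank         # nullity(A) = n - rho(A)

# Null-space basis: right singular vectors beyond the rank
_, _, Vt = np.linalg.svd(A)
N = Vt[rank:].T                     # columns of N span the null space

print(nullity)                # 2
print(np.allclose(A @ N, 0))  # True: every column of N is a null vector
```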
Conditions for Solution Existence
Given an m×n matrix A and an m×1 vector y, an n×1 solution vector x exists for y = Ax iff y is in the range space of A:

$$\rho(A) = \rho([A \;\vdots\; y]) \iff \exists x \text{ such that } Ax = y$$

Given matrix A, a solution vector x exists for y = Ax for every y iff A has full row rank (ρ(A) = m):

$$\exists x \text{ such that } Ax = y \;\; \forall y \iff \rho(A) = m$$
Conditions for Unique Solution
Given an m×n matrix A and an m×1 vector y, let xₚ be a solution of y = Ax. If ρ(A) = n (nullity k = 0), then xₚ is unique; if nullity k > 0, then for any set of real αᵢ's,

$$x = x_p + \alpha_1 n_1 + \alpha_2 n_2 + \cdots + \alpha_k n_k$$

is a solution, where the vector set {n₁, n₂, …, nₖ} is a basis for the null space.
The above solution is also referred to as a parameterization of all
solutions.
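A sketch of this parameterization on a small underdetermined system (the numbers are hypothetical):

```python
import numpy as np

# One equation, two unknowns: x1 + x2 = 2, so rank 1 and nullity 1
A = np.array([[1.0, 1.0]])
y = np.array([2.0])

xp = np.linalg.lstsq(A, y, rcond=None)[0]   # a particular solution
n1 = np.array([1.0, -1.0])                  # null-space basis: A @ n1 = 0

# x = xp + alpha * n1 solves y = Ax for every real alpha
for alpha in (-1.0, 0.0, 2.5):
    x = xp + alpha * n1
    print(np.allclose(A @ x, y))            # True each time
```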
Singular Matrix
A square matrix is singular iff its determinant is 0.
Given an n×n non-singular matrix A, for every y a unique solution of y = Ax exists and is given by x = A⁻¹y.
The homogeneous equation Ax = 0 has a nonzero solution iff A is singular; otherwise x = 0 is the only solution.
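These statements can be checked on small examples (the matrices are arbitrary):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [2.0, 4.0]])          # det = 0: A is singular
print(np.linalg.det(A))
# For singular A, the homogeneous equation Ax = 0 has nonzero solutions,
# e.g. x = [2, -1]'
print(A @ np.array([2.0, -1.0]))    # [0. 0.]

B = np.array([[1.0, 2.0],
              [0.0, 4.0]])          # det = 4: non-singular
y = np.array([1.0, 2.0])
xB = np.linalg.solve(B, y)          # unique solution x = B^{-1} y
print(np.allclose(B @ xB, y))       # True
```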
Change of Basis
Denote the representation of x with respect to (wrt) basis {e₁, e₂, …, eₙ} as α, and the representation wrt basis {ē₁, ē₂, …, ēₙ} as ᾱ. Note that the basis vectors eᵢ and ēᵢ are themselves expressed wrt the orthonormal basis.
Find a change of basis transformation such that

$$\bar{\alpha} = P \alpha \qquad \text{and} \qquad \alpha = Q \bar{\alpha}$$

Show that

$$P = Q^{-1} = \bar{E}^{-1} E$$

where

$$E = [e_1, e_2, \ldots, e_n] \quad \text{and} \quad \bar{E} = [\bar{e}_1, \bar{e}_2, \ldots, \bar{e}_n]$$
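A numerical sketch of the change of basis (the bases are hypothetical; E is taken as the standard basis):

```python
import numpy as np

# Columns of E and Ebar hold the two bases, expressed wrt the orthonormal basis
E    = np.array([[1.0, 0.0],
                 [0.0, 1.0]])
Ebar = np.array([[1.0, 1.0],
                 [0.0, 1.0]])

P = np.linalg.solve(Ebar, E)          # P = Ebar^{-1} E

x = np.array([2.0, 3.0])
alpha    = np.linalg.solve(E, x)      # representation of x wrt E
alphabar = P @ alpha                  # representation of x wrt Ebar

# Both representations reconstruct the same x
print(np.allclose(Ebar @ alphabar, x))   # True
```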
Similarity Transformation
Consider an n×n matrix A as a linear operator that maps ℝⁿ into itself, with vector representations wrt {e₁, e₂, …, eₙ}. Determine the new representation Ā of the linear operator wrt basis {ē₁, ē₂, …, ēₙ}.
Show that:

$$\bar{A} = P A Q$$

where

$$P = Q^{-1} = \bar{E}^{-1} E$$

with

$$E = [e_1, e_2, \ldots, e_n] \quad \text{and} \quad \bar{E} = [\bar{e}_1, \bar{e}_2, \ldots, \bar{e}_n]$$

The operation that changes the basis of the linear operator using pre- and post-multiplication by a matrix and its inverse is referred to as a similarity transform.
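A sketch of the similarity transform Ā = PAQ; the columns of Ē are chosen here as eigenvectors of A, so Ā comes out diagonal (all values hypothetical):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])           # operator wrt basis E
E    = np.eye(2)                     # standard basis
Ebar = np.array([[1.0, 1.0],
                 [0.0, 1.0]])        # new basis: eigenvectors of A

Q = np.linalg.solve(E, Ebar)         # Q = E^{-1} Ebar
P = np.linalg.inv(Q)                 # P = Q^{-1} = Ebar^{-1} E
Abar = P @ A @ Q                     # similarity transform

print(Abar)                          # diag(2, 3): same operator, new coordinates
```

Because a similarity transform only changes coordinates, quantities intrinsic to the operator (such as its eigenvalues) are unchanged.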