Chapter 1 Introduction

Empirical methods vs. analytic methods.
1. Empirical methods:
Adjust some parameters, add a new compensator, ... Success depends strongly on experience: an art, for masters.
 Trial and error (low efficiency).
 When the complexity is high, it does not work, or is too dangerous or too expensive (airplane, aircraft).
2. Analytic methods (for ordinary people)
Basic knowledge (common sense) + mature theoretical/practical results → a good job.
There are 5 steps
 Modelling. We are only interested in certain properties, and ignore others.
We should model properly.
 Mathematical description.
Physical laws → mathematical equations. Tradeoff: precision vs. complexity.
This mathematical model is frequently referred to as the "model" or the "system".
 Analysis based on the mathematical model (equations): quantitative and qualitative
(1) Model verification: Under the same inputs, does the model yield the same outputs as
the real system? If not, the model is wrong: redo it. This should be the first thing to do.
(2) Under a “trustable” model, “predict” the system’s properties, including the
qualitative ones, like stability, and the quantitative ones, such as the rising time. If
the predicted properties are not acceptable, we have to do something (synthesis).
 Synthesis based on the mathematical model.
Modify some parameters of the (mathematical) "system". Insert some compensators. We
want the "system" to satisfy some requirements (stability, robustness) and to optimize certain
performance measures.
 Implement the obtained results into the real system.
(1) Good: The qualitative properties may still be preserved. Quantitative properties may
keep the trend (e.g., the new design could yield some performance improvement).
The exact improvement value may not be preserved. But this is already good
enough. Don’t forget that the mathematical model is only an approximation of the
real system.
(2) Bad: The expected qualitative properties are broken (e.g., loss of stability). No
quantitative improvement due to the new design is seen. Some possible reasons for
this failure:
1) Computation errors. Redo the synthesis.
2) Model mismatch. Search for more precise parameters of the system or find
a more general model.
In our class, we focus on the analysis and synthesis of the (mathematical) "system" (model).
Moreover, we consider only linear models. By "system", we mean the set of relevant equations.
Chapter 2. Mathematical Descriptions of Systems
2.1 Introduction
A system is a mapping from the input to the output:
input u(t)/u[k] → [System] → output y(t)/y[k].
1. It is a deterministic mapping: existence and uniqueness.
2. SISO, MIMO, SIMO: dim(u) = 1? dim(y) = 1?
2.1.1 Causality and lumpedness
1. Memory vs. memoryless (systems)
(1) A memoryless system (e.g., a 1 \Omega resistor driven by a voltage source):
y(t) = u(t); the current output depends only on the current input.
(2) A system with memory (e.g., a 1 F capacitor driven by a current source):
y(t) = \int_{-\infty}^{t} u(\tau)\,d\tau.
Both the current input and the past inputs determine the current output.
The history makes a difference.
2. Causal / non-anticipatory
A causal system: the current output y(t) is determined only by the current input u(t) and the
past inputs u(\tau), \tau < t. The future inputs u(\tau), \tau > t, have nothing to do with the
current output y(t), i.e., we cannot anticipate the future inputs from the current output.
Physical systems are all causal.
Some artificial systems can be non-causal (when the time is represented by another variable,
e.g., a spatial coordinate, with the output selected as the electric field E(t)).
3. State (of a causal system)
y(t), t \ge t_0, is determined by \{u(\tau), t_0 \le \tau \le t\} together with \{u(\tau), \tau < t_0\}.
Define a state x(t_0) summarizing \{u(\tau), \tau < t_0\}:
x(t_0) and \{u(\tau), t_0 \le \tau \le t\} determine y(t), \forall t \ge t_0. The state plus the future
inputs determines all future outputs.
Taking x(t_0) = \{u(\tau), \tau < t_0\} itself gives \dim x(t_0) = \infty: it does not help merely to
define a new set variable. Have a look at an example (the 1 F capacitor):
y(t) = \int_{-\infty}^{t} u(\tau)\,d\tau = \int_{-\infty}^{t_0} u(\tau)\,d\tau + \int_{t_0}^{t} u(\tau)\,d\tau.
Let x(t_0) = \int_{-\infty}^{t_0} u(\tau)\,d\tau. Then x(t_0) and \{u(\tau), t_0 \le \tau \le t\} determine
y(t), \forall t \ge t_0, so x(t_0) is a state, with \dim x(t_0) = 1.
Try your best to find as low a state dimension as possible.
4. Lumpedness
A state x(t_0): x(t_0) and \{u(\tau), t_0 \le \tau \le t\} determine y(t), \forall t \ge t_0.
If \dim x(t_0) is finite, the system is lumped.
Example: Is the system y(t) = u(t-1) lumped?
Suppose it were lumped, with some state x(t_0). For \tau \in [0,1), y(t_0+\tau) = u(t_0+\tau-1).
It is obvious that u(t_0+\tau-1) \notin \{u(\sigma), t_0 \le \sigma \le t_0+\tau\}, so u(t_0+\tau-1)
must be recoverable from x(t_0). Since \{u(t_0+\tau-1), \tau \in [0,1)\} is a whole segment of
the input, \dim x(t_0) = \infty.
It is not lumped. Not lumped = distributed.
2.2 Linear systems
1. Linearity
Feed u_1(t) into the system to get y_1(t), and u_2(t) to get y_2(t).
(1) Additivity: the input u_3(t) = u_1(t) + u_2(t) yields y_3(t) = y_1(t) + y_2(t).
(2) Homogeneity: the input u_4(t) = \alpha u_1(t) yields y_4(t) = \alpha y_1(t).
Linearity = additivity + homogeneity.
(3) Linearity defined via "state + partial inputs"
The response y(t), t \ge t_0, is determined by the pair \{x(t_0); u(t), t \ge t_0\}, where
x(t_0) = f(\{u(t), t < t_0\}) (f is actually a linear function).
Linearity then means: the pair \{\alpha_1 x_1(t_0) + \alpha_2 x_2(t_0); \alpha_1 u_1(t) + \alpha_2 u_2(t), t \ge t_0\}
yields the response
y(t) = \alpha_1 y_1(t) + \alpha_2 y_2(t).
Example: Is y(t) = \int_{-\infty}^{t} u(\tau - 1)\,d\tau linear? Lumped?
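As a quick numerical sanity check of linearity (the grid, the particular test inputs, and the discrete approximation below are our own illustrative choices, not from the notes), one can test additivity and homogeneity of the example system directly:

```python
import numpy as np

# Numerical check of additivity and homogeneity for
# y(t) = integral_{-inf}^{t} u(tau - 1) dtau, approximated on a grid,
# for inputs supported on t >= 0.
dt = 0.01
t = np.arange(0.0, 10.0, dt)

def response(u):
    """Approximate y(t) = int_{-inf}^t u(tau - 1) dtau."""
    shifted = np.concatenate([np.zeros(int(round(1.0 / dt))), u])[: len(u)]  # u(t - 1)
    return np.cumsum(shifted) * dt  # Riemann-sum integral

u1 = np.sin(t)
u2 = np.exp(-t)

# Additivity: S(u1 + u2) = S(u1) + S(u2); homogeneity: S(a*u1) = a*S(u1).
assert np.allclose(response(u1 + u2), response(u1) + response(u2))
assert np.allclose(response(3.0 * u1), 3.0 * response(u1))
```

Such a check can of course only support, never prove, linearity; the proof uses the definition above.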
2. Input-output description
Approximate the input by a staircase of narrow pulses. Let \delta_\Delta(t - t_i) denote a pulse of
width \Delta and height 1/\Delta located at t_i, and let g_\Delta(t, t_i) be the response of the system
to \delta_\Delta(t - t_i).
(1) Homogeneity: the input u(t_i)\Delta\,\delta_\Delta(t - t_i) yields the output u(t_i)\Delta\,g_\Delta(t, t_i),
i = 1, 2, 3, \dots
(2) Additivity: the input \sum_i u(t_i)\,\delta_\Delta(t - t_i)\,\Delta yields the output
\sum_i u(t_i)\,g_\Delta(t, t_i)\,\Delta, \quad i = \dots, -3, -2, -1, 0, 1, 2, 3, \dots
(3) Limit: as \Delta \to 0,
\lim_{\Delta \to 0} \delta_\Delta(t - t_i) = \delta(t - t_i), \quad \lim_{\Delta \to 0} \sum_i u(t_i)\delta_\Delta(t - t_i)\Delta = u(t), \quad \lim_{\Delta \to 0} g_\Delta(t, t_i) = g(t, t_i),
\lim_{\Delta \to 0} \sum_i u(t_i)\,g_\Delta(t, t_i)\,\Delta = y(t) = \int_{-\infty}^{\infty} u(\tau)\,g(t, \tau)\,d\tau.
For a causal system, y(t) does not depend on the future inputs. So
g(t, \tau) = 0, \ \tau > t, \qquad y(t) = \int_{-\infty}^{t} u(\tau)\,g(t, \tau)\,d\tau.
3. MIMO systems (a two-input two-output (DIDO) example)
Inputs u_1(t), u_2(t); outputs y_1(t), y_2(t).
(1) Consider only y_1(t), with both inputs u_1(t), u_2(t) applied.
(2) y_1(t) is the superposition of two fundamental single-input experiments:
apply \delta(t - t_i) at input 1 with input 2 zero to get g_{11}(t, t_i), so
y_{11}(t) = \int g_{11}(t, \tau)\,u_1(\tau)\,d\tau;
apply \delta(t - t_i) at input 2 with input 1 zero to get g_{12}(t, t_i), so
y_{12}(t) = \int g_{12}(t, \tau)\,u_2(\tau)\,d\tau;
y_1(t) = y_{11}(t) + y_{12}(t).
Then we get the impulse response matrix (eq. 2.5).
4. State space model
Linear + lumped
There exists a state space model.
2.3 Linear Time-Invariant (LTI) Systems
1. (i) Two examples
Apply an input u_1(t) to get y_1(t); apply a second input u_2(t) to get y_2(t).
We observe that u_2(t) = u_1(t - 2) and y_2(t) = y_1(t - 2).
LTI: both u(t) and y(t) are shifted by the same amount of time (2).
(ii) In the general case, a system is LTI if the shifted input u'(t) = u(t - T) always produces the
shifted output y'(t) = y(t - T):
both u(t) and y(t) are shifted by the same amount of time T.
When the parameters of a system change slowly enough, it can be treated as LTI.
2. Impulse response of an LTI system
y(t) = \int_{-\infty}^{\infty} u(\tau)\,g(t, \tau)\,d\tau, \qquad y'(t) = \int_{-\infty}^{\infty} u'(\tau)\,g(t, \tau)\,d\tau.
We know that u'(t) = u(t - T) and y'(t) = y(t - T).
(1) y'(t) = \int u'(\tau)\,g(t, \tau)\,d\tau = \int u(\tau - T)\,g(t, \tau)\,d\tau = \int u(\sigma)\,g(t, \sigma + T)\,d\sigma.
(2) y'(t) = y(t - T) = \int u(\sigma)\,g(t - T, \sigma)\,d\sigma.
So g(t, \sigma + T) = g(t - T, \sigma), \ \forall t, \sigma, T.
Set \sigma = 0 and T = \tau. Then
g(t, \tau) = g(t - \tau, 0) \triangleq g(t - \tau),
y(t) = \int_{-\infty}^{\infty} u(\tau)\,g(t - \tau)\,d\tau.
3. Laplace transform (this description applies only to LTI systems)
\hat{y}(s) = \hat{g}(s)\hat{u}(s), \qquad y(t) = \mathcal{L}^{-1}[\hat{y}(s)].
pp. 39, Problem 2.9.
Definitions: proper, strictly proper, biproper, improper.
4. State-space equation of a lumped LTI system
Op-amp circuit implementation.
2.4 Linearization
A nonlinear system → first-order approximation.
2.5 Examples.
Chapter 3 Linear Algebra
3.1 Introduction
Block operations of matrices
3.2 Linear space
1. Vector x \in \mathbb{R}^n.
2. Some definitions
(i) Linear independence. Given a set of vectors \{x_1, x_2, \dots, x_m\}:
if c_1 x_1 + c_2 x_2 + \cdots + c_m x_m = 0 \Rightarrow c_1 = c_2 = \cdots = c_m = 0,
then \{x_1, x_2, \dots, x_m\} is linearly independent.
Equivalently, no x_i can be represented as a linear combination of the other vectors, i.e., every
x_i provides new information.
(ii) Linear dependence: some
x_1 = \alpha_2 x_2 + \cdots + \alpha_m x_m;
no innovation.
(iii) Linear space: a set S \subseteq \mathbb{R}^n. S is a linear space if
x, y \in S \Rightarrow \alpha x + \beta y \in S, \ \forall \alpha, \beta \in \mathbb{R}.
3. Search for a maximal subset of linearly independent vectors among \{v_1, v_2, \dots, v_m\}:
v_1 \ne 0 \Rightarrow \{v_1\}; \{v_1, v_2\} linearly independent \Rightarrow \cdots \Rightarrow \{v_1, v_2, \dots, v_r\} linearly independent; no more can be added:
\{v_1, v_2, \dots, v_r, v_{r+1}\} is NOT linearly independent, so
\alpha_1 v_1 + \alpha_2 v_2 + \cdots + \alpha_r v_r + \beta v_{r+1} = 0, \quad [\alpha_1, \dots, \alpha_r, \beta] \ne 0 \Rightarrow \beta \ne 0,
(otherwise all \alpha_i would vanish by independence), hence
v_{r+1} = [v_1, v_2, \dots, v_r]\begin{bmatrix} \gamma_1 \\ \gamma_2 \\ \vdots \\ \gamma_r \end{bmatrix}.
\{v_1, v_2, \dots, v_r\} is a basis; the coefficients \gamma_i are the coordinates (representation) of v_{r+1}.
We can verify that \{v_1, v_2 + v_1, \dots, v_r\} is also a basis.
There exists more than one basis.
Should all bases have the same number of vectors?
Suppose NOT: we have two bases \{v_1, v_2, \dots, v_r\} and \{z_1, z_2, \dots, z_m\} with r > m.
(1) By independence,
[v_1, v_2, \dots, v_r]\begin{bmatrix} \alpha_1 \\ \alpha_2 \\ \vdots \\ \alpha_r \end{bmatrix} = 0 \Rightarrow \alpha_1 = \cdots = \alpha_r = 0.
(2) Each v_j is a combination of the z_i:
v_j = [z_1, z_2, \dots, z_m]\begin{bmatrix} q_{1j} \\ q_{2j} \\ \vdots \\ q_{mj} \end{bmatrix}, \quad j = 1, \dots, r,
so
[v_1, v_2, \dots, v_r] = [z_1, z_2, \dots, z_m]\,Q, \qquad Q = \begin{bmatrix} q_{11} & \cdots & q_{1r} \\ \vdots & & \vdots \\ q_{m1} & \cdots & q_{mr} \end{bmatrix} \in \mathbb{R}^{m \times r}.
Since \operatorname{rank}(Q) \le m < r, there exists \alpha = [\alpha_1, \dots, \alpha_r]^T \ne 0 with Q\alpha = 0, and then
[v_1, v_2, \dots, v_r]\alpha = [z_1, z_2, \dots, z_m]\,Q\alpha = 0, \quad \alpha \ne 0.
This contradicts (1). So r \le m. Similarly, we can get m \le r. Then r = m.
4. Norm of vectors. Page 46.
5. Inner product (correlation) of vectors:
\langle x, y \rangle = \sum_{i=1}^{n} x_i y_i.
x \perp y \Leftrightarrow \langle x, y \rangle = 0.
A group of orthogonal vectors v_1, v_2, \dots, v_r (v_i \perp v_j, i \ne j) is linearly independent.
If furthermore \langle v_i, v_i \rangle = 1, then \{v_1, v_2, \dots, v_r\} is an orthonormal set.
Any basis can be transformed into an orthonormal basis by the Schmidt orthonormalization
procedure.
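A minimal sketch of the Schmidt procedure (the particular basis below is an illustrative choice): subtract from each vector its components along the already-orthonormalized ones, then normalize.

```python
import numpy as np

# Classical Schmidt (Gram-Schmidt) orthonormalization of the columns of V.
def schmidt(V):
    """Columns of V: a linearly independent set. Returns an orthonormal set Q
    spanning the same space, column by column."""
    Q = []
    for v in V.T:                      # iterate over columns of V
        w = v.astype(float)
        for q in Q:
            w = w - (q @ v) * q        # remove the component along q
        Q.append(w / np.linalg.norm(w))
    return np.column_stack(Q)

V = np.array([[1.0, 1.0, 0.0],
              [1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])       # a linearly independent set
Q = schmidt(V)

assert np.allclose(Q.T @ Q, np.eye(3))   # orthonormal, same span as V
```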
3.3 Linear Algebraic Equations
1. Equation Ax = y
A_{m \times n} = [a_1, a_2, \dots, a_n], \qquad Ax = [a_1, a_2, \dots, a_n]\begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix} = x_1 a_1 + x_2 a_2 + \cdots + x_n a_n.
So Ax is a linear combination of a_1, a_2, \dots, a_n.
(1) Range space of A: \operatorname{Range}(A) = \{x_1 a_1 + x_2 a_2 + \cdots + x_n a_n,\ x_i \in \mathbb{R}\} \subseteq \mathbb{R}^m.
Existence of a solution: y \in \operatorname{Range}(A), equivalently \operatorname{rank}(A) = \operatorname{rank}([A\ \ y]).
(2) Null space of A: \operatorname{Null}(A) = \{x \mid Ax = 0\} \subseteq \mathbb{R}^n.
(3) Solution set of Ax = y: x_p + \operatorname{Null}(A), where x_p is a particular solution of Ax = y.
Unique solution condition: \{a_1, a_2, \dots, a_n\} is linearly independent, i.e., \operatorname{Null}(A) = \{0\}.
(4) Square matrices:
A^{-1} = \frac{\operatorname{Adj}(A)}{\det(A)} = \frac{[c_{ij}]'}{\det(A)};
it exists iff A is non-singular (full rank, \det(A) \ne 0).
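The rank test for existence is straightforward to run numerically (the matrices below are illustrative): y is in Range(A) exactly when appending y as a column does not increase the rank.

```python
import numpy as np

# Existence test for Ax = y: y in Range(A), i.e. rank(A) == rank([A | y]).
A = np.array([[1.0, 2.0],
              [2.0, 4.0],
              [0.0, 1.0]])          # rank 2
y_in = A @ np.array([1.0, -1.0])    # in Range(A) by construction
y_out = np.array([1.0, 0.0, 0.0])   # not in Range(A)

def solvable(A, y):
    return np.linalg.matrix_rank(A) == np.linalg.matrix_rank(np.column_stack([A, y]))

assert solvable(A, y_in)
assert not solvable(A, y_out)
```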
3.4 Similarity transformation
Suppose there are two bases for \mathbb{R}^n: \{q_1, q_2, \dots, q_n\} and \{\bar{q}_1, \bar{q}_2, \dots, \bar{q}_n\}.
For any vector v \in \mathbb{R}^n, we have two "different" representations:
v = x_1 q_1 + x_2 q_2 + \cdots + x_n q_n = Qx, \qquad Q = [q_1, q_2, \dots, q_n],
v = \bar{x}_1 \bar{q}_1 + \bar{x}_2 \bar{q}_2 + \cdots + \bar{x}_n \bar{q}_n = \bar{Q}\bar{x}, \qquad \bar{Q} = [\bar{q}_1, \bar{q}_2, \dots, \bar{q}_n];
Q, \bar{Q} are non-singular.
Qx = \bar{Q}\bar{x} \Rightarrow x = Q^{-1}\bar{Q}\bar{x} = P\bar{x}, \qquad P = Q^{-1}\bar{Q} \ \text{non-singular}.
Original system: \dot{x} = Ax + Bu. With x = P\bar{x}: P\dot{\bar{x}} = AP\bar{x} + Bu, so
\dot{\bar{x}} = P^{-1}AP\bar{x} + P^{-1}Bu = \bar{A}\bar{x} + \bar{B}u.
\bar{A} = P^{-1}AP: a similarity transformation. It is actually a coordinate transformation.
3.5 Diagonal Form and Jordan Form
1. Eigenvalue \lambda and eigenvector v \ne 0:
Av = \lambda v \Leftrightarrow (A - \lambda I)v = 0,
which has a solution v \ne 0 iff \det(A - \lambda I) = 0; this yields the pairs (\lambda, v).
2. Algebraic multiplicity
\Delta(\lambda) = \det(\lambda I - A) = (\lambda - \lambda_1)^{m_1} \cdots (\lambda - \lambda_r)^{m_r},
m_i: the algebraic multiplicity of \lambda_i.
3. m_i = 1, i = 1, \dots, n: all distinct eigenvalues.
We can get n eigenvectors, which are all linearly independent of each other. These eigenvectors
form a transformation matrix P = [v_1, v_2, \dots, v_n]:
AP = [\lambda_1 v_1, \lambda_2 v_2, \dots, \lambda_n v_n] = P\,\operatorname{diag}(\lambda_1, \dots, \lambda_n), \qquad J = P^{-1}AP = \begin{bmatrix} \lambda_1 & & 0 \\ & \ddots & \\ 0 & & \lambda_n \end{bmatrix}.
4. m_i > 1:
(1) If for each \lambda_i we can find m_i linearly independent eigenvectors, then we can still form
P from all the eigenvectors, and J = P^{-1}AP is still a diagonal matrix.
(2) If for some \lambda_i we can find only n_i < m_i linearly independent eigenvectors, then we
have to derive (m_i - n_i) generalized eigenvectors. All the eigenvectors and generalized
eigenvectors together form P, and
J = P^{-1}AP = \begin{bmatrix} J_1 & & \\ & \ddots & \\ & & J_r \end{bmatrix}, \quad J_i = \begin{bmatrix} J_{i,1} & & \\ & \ddots & \\ & & J_{i,n_i} \end{bmatrix}, \quad J_{i,p} = \begin{bmatrix} \lambda_i & 1 & & \\ & \lambda_i & \ddots & \\ & & \ddots & 1 \\ & & & \lambda_i \end{bmatrix},
J_i: m_i \times m_i.
3.6 Functions of a Square Matrix
1. A = PJP^{-1}, J: Jordan form.
A^k = (PJP^{-1})(PJP^{-1}) \cdots (PJP^{-1}) = PJ^kP^{-1}.
J^k?
Example:
J = \begin{bmatrix} \lambda & 1 & 0 \\ 0 & \lambda & 1 \\ 0 & 0 & \lambda \end{bmatrix} = \lambda I + T, \qquad T = \begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{bmatrix}.
We see that
T^2 = \begin{bmatrix} 0 & 0 & 1 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}, \quad T^3 = 0, \quad T^c = 0, \ c \ge 3; \qquad T(\lambda I) = (\lambda I)T,
so the binomial expansion applies:
J^k = (\lambda I + T)^k = \lambda^k I + C_k^1 \lambda^{k-1} T + C_k^2 \lambda^{k-2} T^2 = \begin{bmatrix} \lambda^k & k\lambda^{k-1} & \frac{k(k-1)}{2}\lambda^{k-2} \\ 0 & \lambda^k & k\lambda^{k-1} \\ 0 & 0 & \lambda^k \end{bmatrix}.
2. Polynomials of a square matrix
(1) A simple example:
f(x) = a_0 + a_1 x + a_2 x^2 + a_3 x^3, \qquad J = \begin{bmatrix} \lambda & 1 & 0 \\ 0 & \lambda & 1 \\ 0 & 0 & \lambda \end{bmatrix}.
f(A) = a_0 I + a_1 A + a_2 A^2 + a_3 A^3 = P(a_0 I + a_1 J + a_2 J^2 + a_3 J^3)P^{-1}
= P\begin{bmatrix} a_0 + a_1\lambda + a_2\lambda^2 + a_3\lambda^3 & a_1 + 2a_2\lambda + 3a_3\lambda^2 & a_2 + 3a_3\lambda \\ 0 & a_0 + a_1\lambda + a_2\lambda^2 + a_3\lambda^3 & a_1 + 2a_2\lambda + 3a_3\lambda^2 \\ 0 & 0 & a_0 + a_1\lambda + a_2\lambda^2 + a_3\lambda^3 \end{bmatrix}P^{-1}
= P\begin{bmatrix} f(\lambda) & \frac{d}{d\lambda}f(\lambda) & \frac{1}{2!}\frac{d^2}{d\lambda^2}f(\lambda) \\ 0 & f(\lambda) & \frac{d}{d\lambda}f(\lambda) \\ 0 & 0 & f(\lambda) \end{bmatrix}P^{-1}.
(2) A general case
For a general polynomial g(x) = a_0 + a_1 x + a_2 x^2 + a_3 x^3 + a_4 x^4 + \cdots, with the same J,
g(A) = a_0 I + a_1 A + a_2 A^2 + a_3 A^3 + a_4 A^4 + \cdots = P(a_0 I + a_1 J + a_2 J^2 + a_3 J^3 + a_4 J^4 + \cdots)P^{-1}
= P\begin{bmatrix} g(\lambda) & \frac{1}{1!}\frac{d}{d\lambda}g(\lambda) & \frac{1}{2!}\frac{d^2}{d\lambda^2}g(\lambda) \\ 0 & g(\lambda) & \frac{1}{1!}\frac{d}{d\lambda}g(\lambda) \\ 0 & 0 & g(\lambda) \end{bmatrix}P^{-1}.
For a general J = \operatorname{diag}(J_1, J_2, \dots, J_r): J^k = \operatorname{diag}(J_1^k, J_2^k, \dots, J_r^k), so
g(A) = P\,g(J)\,P^{-1}, \qquad g(J) = \begin{bmatrix} g(J_1) & & & \\ & g(J_2) & & \\ & & \ddots & \\ & & & g(J_r) \end{bmatrix}.
For J_i = \operatorname{diag}(J_{i,1}, J_{i,2}, \dots, J_{i,n_i}): \quad g(J_i) = \operatorname{diag}(g(J_{i,1}), g(J_{i,2}), \dots, g(J_{i,n_i})).
\dim(J_i) = \dim(J_{i,1}) + \dim(J_{i,2}) + \cdots + \dim(J_{i,n_i}). Let \bar{n}_i = \max_{1 \le j \le n_i} \dim(J_{i,j}).
So g(J_i) is composed of the values
g(\lambda_i), \ \frac{1}{1!}\frac{d}{d\lambda}g(\lambda_i), \ \dots, \ \frac{1}{(\bar{n}_i - 1)!}\frac{d^{\bar{n}_i - 1}}{d\lambda^{\bar{n}_i - 1}}g(\lambda_i).
n_i: geometric multiplicity; \bar{n}_i \le m_i (algebraic multiplicity).
For each \lambda_i, if
g(\lambda_i) = 0, \quad \frac{d}{d\lambda}g(\lambda_i) = 0, \quad \dots, \quad \frac{1}{(m_i - 1)!}\frac{d^{m_i - 1}}{d\lambda^{m_i - 1}}g(\lambda_i) = 0,
then g(J_i) = 0 (since \bar{n}_i \le m_i). If this holds for all i, then g(J) = 0 \Rightarrow g(A) = 0.
(3) Characteristic equation
Consider a particular polynomial,
g(\lambda) = \Delta(\lambda) = \det(\lambda I - A) = (\lambda - \lambda_1)^{m_1} \cdots (\lambda - \lambda_r)^{m_r}.
We can easily get g(A) = 0 (Theorem 3.4, pp. 63: the Cayley-Hamilton theorem).
g(A) = A^n + \alpha_1 A^{n-1} + \alpha_2 A^{n-2} + \cdots + \alpha_n I = 0. So
A^n = -\alpha_1 A^{n-1} - \alpha_2 A^{n-2} - \cdots - \alpha_n I,
and A^k = \beta_{k,1}A^{n-1} + \beta_{k,2}A^{n-2} + \cdots + \beta_{k,n}I for all k \ge n.
Any polynomial of A, f(A), is equal to a polynomial of A of order up to n - 1.
Another way to get the above result: divide f by \Delta,
f(\lambda) = q(\lambda)\Delta(\lambda) + h(\lambda), \qquad \deg h(\lambda) < \deg \Delta(\lambda) = n,
f(A) = q(A)\Delta(A) + h(A) = q(A)\cdot 0 + h(A) = h(A). What is h(\lambda)?
h(\lambda) = \beta_0 + \beta_1\lambda + \cdots + \beta_{n-1}\lambda^{n-1}. What are \beta_0, \beta_1, \dots, \beta_{n-1}?
At \lambda = \lambda_i, \Delta and its first m_i - 1 derivatives vanish, so the q\Delta part drops out and we get
the following m_i equations:
f(\lambda_i) = h(\lambda_i), \quad \frac{d}{d\lambda}f(\lambda_i) = \frac{d}{d\lambda}h(\lambda_i), \quad \dots, \quad \frac{d^{m_i-1}}{d\lambda^{m_i-1}}f(\lambda_i) = \frac{d^{m_i-1}}{d\lambda^{m_i-1}}h(\lambda_i).
For all \lambda_i (i = 1, \dots, r), we have m_1 + m_2 + \cdots + m_r = n equations, which are enough to
determine \beta_0, \beta_1, \dots, \beta_{n-1}. Then it is easy to see f(A) = h(A).
Any function f(\lambda) can be expressed as a polynomial via its Taylor series, so it can accordingly
be computed through h(A).
Example 3.8, pp. 65.
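A quick numerical instance of f(A) = h(A) for f(x) = e^x (the matrix A below, with distinct eigenvalues, is an illustrative choice): the n interpolation conditions reduce to a Vandermonde system for the beta coefficients.

```python
import numpy as np
from scipy.linalg import expm

# Compute f(A) = e^A through the interpolating polynomial h(lambda).
# For distinct eigenvalues, h(lambda_i) = f(lambda_i) fixes beta_0..beta_{n-1}.
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])              # eigenvalues -1 and -2
lam = np.linalg.eigvals(A)

V = np.vander(lam, increasing=True)       # rows [1, lambda_i]
beta = np.linalg.solve(V, np.exp(lam))    # beta_0 + beta_1*lambda_i = e^{lambda_i}

h_of_A = beta[0] * np.eye(2) + beta[1] * A
assert np.allclose(h_of_A, expm(A))       # h(A) = f(A)
```

With repeated eigenvalues the Vandermonde rows are replaced by derivative conditions, exactly as in the m_i equations above.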
3. Minimal polynomial
With J_i = \operatorname{diag}(J_{i,1}, \dots, J_{i,n_i}) and g(J_i) = \operatorname{diag}(g(J_{i,1}), \dots, g(J_{i,n_i})):
g(J_i) = 0 requires that g(\lambda) be divisible by (\lambda - \lambda_i)^{\bar{n}_i};
g(J) = 0 requires that g(\lambda) be divisible by
p(\lambda) = \prod_i (\lambda - \lambda_i)^{\bar{n}_i}.
p(\lambda) is the lowest-order polynomial guaranteeing p(A) = 0: the minimal polynomial.
3.7 Lyapunov Equation
AM + MB = C.
All entries of AM and MB are linear functions of the entries of M. So we have a linear
equation: rewrite it as Ax = y and find its solution.
A more systematic way is to use the Kronecker product, which transforms the linear matrix
equation into an ordinary vector equation.
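A minimal sketch of the Kronecker-product route (the matrices below are random illustrative data): with column-stacking vec(), AM + MB = C becomes (I ⊗ A + B^T ⊗ I) vec(M) = vec(C), and the result can be cross-checked against SciPy's Sylvester solver.

```python
import numpy as np
from scipy.linalg import solve_sylvester

# Solve AM + MB = C by vectorization:
# vec(AM + MB) = (I kron A + B^T kron I) vec(M), vec() stacking columns.
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((2, 2))
C = rng.standard_normal((3, 2))

K = np.kron(np.eye(2), A) + np.kron(B.T, np.eye(3))
M = np.linalg.solve(K, C.flatten(order="F")).reshape(3, 2, order="F")

assert np.allclose(A @ M + M @ B, C)
assert np.allclose(M, solve_sylvester(A, B, C))   # same answer
```

The vectorized system is uniquely solvable exactly when A and -B share no eigenvalue.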
3.8 Some Useful Formulas
\operatorname{rank}(AB) \le \min(\operatorname{rank}(A), \operatorname{rank}(B)).
3.9 Quadratic Form and Positive Definiteness
M = M^T: all eigenvalues of M are real and M is diagonalizable (no generalized eigenvector is needed).
(1) Proof (realness): suppose Mx = \lambda x, x \ne 0, with \lambda possibly complex. Then
x^*Mx = \lambda x^*x.
Taking the conjugate transpose of this equation yields
\bar{\lambda}x^*x = x^*M^*x = x^*Mx = \lambda x^*x.
Then (\lambda - \bar{\lambda})x^*x = 0 \Rightarrow \lambda = \bar{\lambda}.
M \ge 0 \Leftrightarrow \lambda(M) \ge 0.
(2) No generalized eigenvector: (3.67)-(3.68), pp. 73.
3.10 SVD
pp 76-77.
3.11 Norms of Matrices
Induced norms.
Homework: 3.1, 3.3, 3.13, 3.14, 3.17, 3.18, 3.21, 3.31, 3.32, 3.33, 3.38
Chapter 4 State-Space Solutions and Realizations
4.1 Introduction
We have a causal linear system (relaxed at t_0):
y(t) = \int_{t_0}^{t} g(t, \tau)\,u(\tau)\,d\tau.
Distributed or lumped.
Under a given input u(t), what is the output y(t)?
1. Discretization
y(k\Delta) = \sum_{m=k_0}^{k} g(k\Delta, m\Delta)\,u(m\Delta)\,\Delta.
\Delta: conflict between computational complexity and precision.
2. Laplace method (only for LTI systems)
\hat{y}(s) = \hat{g}(s)\hat{u}(s), \qquad \hat{g}(s) = \mathcal{L}[g(t)], \qquad y(t) = \mathcal{L}^{-1}[\hat{y}(s)].
It requires computing poles, carrying out partial fraction expansion, and is sensitive to small
changes in the data.
3. State-space method (only for lumped systems)
\dot{x}(t) = A(t)x(t) + B(t)u(t), \qquad y(t) = C(t)x(t) + D(t)u(t):
an ODE (ordinary differential equation), handled by integration/differentiation.
Existence and uniqueness of the solution of an ODE:
\dot{x}(t) = h(x(t), t), \qquad x(t_0) = x_0 \in \mathbb{R}^n.
If \left\|\frac{\partial h(x, t)}{\partial x}\right\| \le \beta < \infty for t \in [t_1, t_2] \ (t_0 \in [t_1, t_2]), then the ODE has a unique solution x(t), t \in [t_1, t_2].
It is possible that t_0 > t_1, i.e., the equation can be solved backwards.
4.2 Solution of LTI state equations
\dot{x}(t) = Ax(t) + Bu(t), \qquad y(t) = Cx(t) + Du(t).
1. Some formulas regarding e^{At}
(1) e^{At} = I + \sum_{k \ge 1} \frac{1}{k!}(At)^k = I + \sum_{k \ge 1} \frac{t^k}{k!}A^k.
\frac{d}{dt}e^{At}:
\frac{d}{dt}e^{At} = 0 + \sum_{k \ge 1} \frac{t^{k-1}}{(k-1)!}A^k = \sum_{l \ge 0} \frac{t^l}{l!}A^{l+1} \quad (l = k - 1)
= A\left(\sum_{l \ge 0} \frac{t^l}{l!}A^l\right) = Ae^{At} = \left(\sum_{l \ge 0} \frac{t^l}{l!}A^l\right)A = e^{At}A.
So \frac{d}{dt}e^{At} = Ae^{At} = e^{At}A.
(2) \mathcal{L}[e^{At}]:
\mathcal{L}[e^{At}] = \mathcal{L}\left[I + \sum_{k \ge 1} \frac{t^k}{k!}A^k\right] = \frac{1}{s}I + \sum_{k \ge 1} \frac{A^k}{k!}\cdot\frac{k!}{s^{k+1}} = \frac{1}{s}\left(I + \sum_{k \ge 1} \left(\frac{A}{s}\right)^k\right) \triangleq \frac{1}{s}Q.
Q = I + \frac{A}{s} + \left(\frac{A}{s}\right)^2 + \cdots \Rightarrow \frac{A}{s}Q = Q - I \Rightarrow \left(I - \frac{A}{s}\right)Q = I
\Rightarrow Q = \left(I - \frac{A}{s}\right)^{-1} = s(sI - A)^{-1}.
So \mathcal{L}[e^{At}] = (sI - A)^{-1} and
e^{At} = \mathcal{L}^{-1}\left[(sI - A)^{-1}\right].
(3) e^{At} via the Jordan form:
A = QJQ^{-1}, \qquad J = \begin{bmatrix} \lambda & 1 & 0 \\ 0 & \lambda & 1 \\ 0 & 0 & \lambda \end{bmatrix} = \lambda I + T, \quad T^k = 0 \ (k \ge 3),
e^{At} = Qe^{Jt}Q^{-1}.
Since \lambda I and T commute,
e^{Jt} = e^{\lambda I t}e^{Tt} = e^{\lambda t}\left(I + Tt + \frac{1}{2!}T^2t^2\right) = \begin{bmatrix} e^{\lambda t} & te^{\lambda t} & \frac{1}{2!}t^2e^{\lambda t} \\ 0 & e^{\lambda t} & te^{\lambda t} \\ 0 & 0 & e^{\lambda t} \end{bmatrix}.
Another two ways to compute e^{Jt}:
(i) e^{Jt} = c_0(t)I + c_1(t)J + c_2(t)J^2, obtained by matching e^{xt} = c_0(t) + c_1(t)x + c_2(t)x^2
(and its derivatives in x) at x = \lambda;
(ii) e^{Jt} = \mathcal{L}^{-1}[(sI - J)^{-1}], where (sI - J)^{-1} = \beta_0(s)I + \beta_1(s)J + \beta_2(s)J^2 is
obtained by matching (s - x)^{-1} = \beta_0(s) + \beta_1(s)x + \beta_2(s)x^2 in the same way.
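The different routes to e^{At} agree, which is easy to confirm numerically (the matrix A and the time t below are illustrative; for simplicity the eigendecomposition route is shown for a diagonalizable A rather than a Jordan block):

```python
import numpy as np
from scipy.linalg import expm

# Three ways to get e^{At} for a fixed t.
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
t = 0.7

E1 = expm(A * t)                         # scipy's matrix exponential

E2 = np.eye(2)                           # truncated series I + At + (At)^2/2! + ...
term = np.eye(2)
for k in range(1, 30):
    term = term @ (A * t) / k
    E2 = E2 + term

lam, Q = np.linalg.eig(A)                # eigendecomposition (A diagonalizable here)
E3 = Q @ np.diag(np.exp(lam * t)) @ np.linalg.inv(Q)

assert np.allclose(E1, E2) and np.allclose(E1, E3)
```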
2. Solution of \dot{x}(t) = Ax(t) + Bu(t), y(t) = Cx(t) + Du(t), x(0) = x_0.
(1) Verify that the solution is
x(t) = e^{At}x_0 + \int_0^t e^{A(t-\tau)}Bu(\tau)\,d\tau.
First, e^{A(t-\tau)} = e^{At}e^{-A\tau}: writing e^{At} = \alpha_0(t)I + \alpha_1(t)A + \cdots + \alpha_{n-1}(t)A^{n-1}
(and similarly for e^{-A\tau} and e^{A(t-\tau)}), the scalar identity e^{x(t-\tau)} = e^{xt}e^{-x\tau} forces
the corresponding coefficients \alpha_i to agree, so the matrix identity holds. Hence
x(t) = e^{At}x_0 + \int_0^t e^{A(t-\tau)}Bu(\tau)\,d\tau = e^{At}x_0 + e^{At}\int_0^t e^{-A\tau}Bu(\tau)\,d\tau.
Differentiating,
\frac{d}{dt}x(t) = Ae^{At}x_0 + Ae^{At}\int_0^t e^{-A\tau}Bu(\tau)\,d\tau + e^{At}e^{-At}Bu(t)
= A\left(e^{At}x_0 + e^{At}\int_0^t e^{-A\tau}Bu(\tau)\,d\tau\right) + Bu(t) = Ax(t) + Bu(t),
and x(0) = x_0.
(2) Laplace transform
\mathcal{L}[\dot{x}(t)] = \mathcal{L}[Ax(t) + Bu(t)] \Rightarrow s\hat{x}(s) - x_0 = A\hat{x}(s) + B\hat{u}(s),
\hat{x}(s) = (sI - A)^{-1}x_0 + (sI - A)^{-1}B\hat{u}(s),
x(t) = \mathcal{L}^{-1}[\hat{x}(s)] = \mathcal{L}^{-1}[(sI - A)^{-1}]x_0 + \mathcal{L}^{-1}[(sI - A)^{-1}B\hat{u}(s)]
= e^{At}x_0 + \int_0^t e^{A(t-\tau)}Bu(\tau)\,d\tau,
since the product of transforms corresponds to convolution in time.
(3) Zero-input response (u(t) \equiv 0):
x(t) = e^{At}x_0.
When A = QJQ^{-1} with J = \begin{bmatrix} \lambda & 1 & 0 \\ 0 & \lambda & 1 \\ 0 & 0 & \lambda \end{bmatrix},
x(t) = Qe^{Jt}Q^{-1}x_0 = Q\begin{bmatrix} e^{\lambda t} & te^{\lambda t} & \frac{1}{2!}t^2e^{\lambda t} \\ 0 & e^{\lambda t} & te^{\lambda t} \\ 0 & 0 & e^{\lambda t} \end{bmatrix}Q^{-1}x_0.
So x(t) is a linear combination of e^{\lambda t}, te^{\lambda t}, t^2e^{\lambda t} (the modes).
3. Discretization: x(kT) = ?
(1) Euler approximation: \dot{x}(t) \approx \frac{x((k+1)T) - x(kT)}{T}, giving
x((k+1)T) \approx (I + TA)x(kT) + TBu(kT).
(2) Exact discretization: from x(t) = e^{At}x_0 + \int_0^t e^{A(t-\tau)}Bu(\tau)\,d\tau with t = (k+1)T,
x((k+1)T) = e^{A(k+1)T}x_0 + \int_0^{(k+1)T} e^{A((k+1)T-\tau)}Bu(\tau)\,d\tau
= e^{A(k+1)T}x_0 + \int_0^{kT} e^{A((k+1)T-\tau)}Bu(\tau)\,d\tau + \int_{kT}^{(k+1)T} e^{A((k+1)T-\tau)}Bu(\tau)\,d\tau
= e^{AT}\left(e^{AkT}x_0 + \int_0^{kT} e^{A(kT-\tau)}Bu(\tau)\,d\tau\right) + \int_{kT}^{(k+1)T} e^{A((k+1)T-\tau)}Bu(\tau)\,d\tau
= e^{AT}x(kT) + \int_0^T e^{A(T-z)}Bu(kT + z)\,dz \qquad (z = \tau - kT).
When u(t) = u[k] for t \in [kT, (k+1)T) (a zero-order hold),
x((k+1)T) = e^{AT}x(kT) + \left(\int_0^T e^{A(T-\tau)}\,d\tau\right)Bu[k].
\int_0^T e^{A(T-\tau)}\,d\tau = ? Let f(x) = \int_0^T e^{x(T-\tau)}\,d\tau; then \int_0^T e^{A(T-\tau)}\,d\tau = f(A).
4. Discrete-time systems
pp. 92-93. Modes: \lambda^k, \ k\lambda^{k-1}, \ k(k-1)\lambda^{k-2}, \dots
4.3 Equivalent state equations
\dot{x}(t) = Ax(t) + Bu(t), \qquad y(t) = Cx(t) + Du(t).
1. (i) Similarity transformation:
new state \bar{x} = Px \Leftrightarrow x = P^{-1}\bar{x}. Then P^{-1}\dot{\bar{x}} = AP^{-1}\bar{x} + Bu, so
\dot{\bar{x}} = PAP^{-1}\bar{x} + PBu = \bar{A}\bar{x} + \bar{B}u, \qquad y = CP^{-1}\bar{x} + Du = \bar{C}\bar{x} + \bar{D}u,
\bar{A} = PAP^{-1}, \quad \bar{B} = PB, \quad \bar{C} = CP^{-1}, \quad \bar{D} = D.
Transfer functions: \hat{G}(s) = C(sI - A)^{-1}B + D, \quad \hat{\bar{G}}(s) = \bar{C}(sI - \bar{A})^{-1}\bar{B} + \bar{D}.
We can verify that \hat{G}(s) = \hat{\bar{G}}(s): (A, B, C, D) and (\bar{A}, \bar{B}, \bar{C}, \bar{D}) are equivalent.
Why do we need the similarity transformation?
 Simplification: a simpler \bar{A}; canonical forms in 4.3.1, pp. 97.
 Acceptable signal magnitudes (Example 4.5, pp. 99).
(ii) What does \hat{G}(s) = \hat{\bar{G}}(s) mean?
CA^mB = \bar{C}\bar{A}^m\bar{B}, \ m = 0, 1, 2, \dots (Theorem 4.1).
A and \bar{A} may have different dimensions (Example 4.4, pp. 96).
To reduce cost, we prefer the lower dimension.
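The invariance of the transfer function under a similarity transformation can be spot-checked by evaluating C(sI - A)^{-1}B + D at sample points (the realization and the P below are illustrative):

```python
import numpy as np

# G(s) = C (sI - A)^{-1} B + D is unchanged by Abar = PAP^{-1}, Bbar = PB, Cbar = CP^{-1}.
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])
P = np.array([[1.0, 1.0], [0.0, 1.0]])   # any non-singular P

Ab = P @ A @ np.linalg.inv(P)
Bb = P @ B
Cb = C @ np.linalg.inv(P)

def G(A, B, C, D, s):
    return C @ np.linalg.inv(s * np.eye(A.shape[0]) - A) @ B + D

for s in [1.0 + 2.0j, -0.5 + 1.0j, 3.0]:
    assert np.allclose(G(A, B, C, D, s), G(Ab, Bb, Cb, D, s))
```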
4.4 Realizations
y(t) = \int_0^t g(t - \tau)u(\tau)\,d\tau \ \text{(linear, causal, time-invariant)}, \qquad \hat{y}(s) = \hat{G}(s)\hat{u}(s);
lumped: \dot{x}(t) = Ax(t) + Bu(t), \ y(t) = Cx(t) + Du(t).
Theorem 4.2 \hat{G}(s) is realizable \Leftrightarrow \hat{G}(s) is a proper rational matrix.
Proof: (i) Realizable \Rightarrow \hat{G}(s) = C(sI - A)^{-1}B + D is a proper rational matrix.
(ii) \hat{G}(s) a proper rational matrix \Rightarrow realizable?
\hat{G}(s) = \hat{G}(\infty) + \hat{G}_{sp}(s), \qquad \hat{G}_{sp}(s) \ \text{strictly proper},
\hat{G}_{sp}(s) = \frac{1}{d(s)}N(s) = \frac{1}{d(s)}\left(N_1 s^{r-1} + N_2 s^{r-2} + \cdots + N_r\right),
d(s) = s^r + \alpha_1 s^{r-1} + \cdots + \alpha_r.
Define a state-space equation (for a q \times p \hat{G}(s), with I_p the p \times p identity):
\dot{x} = \begin{bmatrix} -\alpha_1 I_p & -\alpha_2 I_p & \cdots & -\alpha_{r-1} I_p & -\alpha_r I_p \\ I_p & 0 & \cdots & 0 & 0 \\ 0 & I_p & \cdots & 0 & 0 \\ \vdots & & \ddots & & \vdots \\ 0 & 0 & \cdots & I_p & 0 \end{bmatrix}x + \begin{bmatrix} I_p \\ 0 \\ 0 \\ \vdots \\ 0 \end{bmatrix}u,
y = [N_1 \ N_2 \ \cdots \ N_{r-1} \ N_r]\,x + \hat{G}(\infty)u.
C(sI - A)^{-1}B = ?
Let Z = (sI - A)^{-1}B, partitioned into p \times p blocks z_1, z_2, \dots, z_r. Then (sI - A)Z = B:
\begin{bmatrix} (s + \alpha_1)I_p & \alpha_2 I_p & \cdots & \alpha_{r-1}I_p & \alpha_r I_p \\ -I_p & sI_p & & & \\ & -I_p & sI_p & & \\ & & \ddots & \ddots & \\ & & & -I_p & sI_p \end{bmatrix}\begin{bmatrix} z_1 \\ z_2 \\ z_3 \\ \vdots \\ z_r \end{bmatrix} = \begin{bmatrix} I_p \\ 0 \\ 0 \\ \vdots \\ 0 \end{bmatrix}.
The second row: z_1 = sz_2 = s^2z_3 = \cdots = s^{r-1}z_r.
The third row: z_2 = sz_3 = \cdots = s^{r-2}z_r.
\cdots
The last row: z_{r-1} = sz_r.
The first row: (s + \alpha_1)z_1 + \alpha_2 z_2 + \cdots + \alpha_r z_r = I_p, i.e.,
\left(s^r + \alpha_1 s^{r-1} + \cdots + \alpha_r\right)z_r = I_p,
so
z_r = \frac{1}{d(s)}I_p, \quad z_{r-1} = \frac{s}{d(s)}I_p, \quad \dots, \quad z_1 = \frac{s^{r-1}}{d(s)}I_p.
Therefore
C(sI - A)^{-1}B + D = [N_1 \ N_2 \ \cdots \ N_r]\begin{bmatrix} \frac{s^{r-1}}{d(s)}I_p \\ \frac{s^{r-2}}{d(s)}I_p \\ \vdots \\ \frac{1}{d(s)}I_p \end{bmatrix} + \hat{G}(\infty) = \frac{1}{d(s)}\left(N_1 s^{r-1} + \cdots + N_r\right) + \hat{G}(\infty) = \hat{G}(s).
Example 4.6 vs. Example 4.7, pp. 103-105.
We may get realizations with different dimensions.
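As a concrete instance of the block-companion construction with p = 1 (the particular transfer function g(s) = (s + 3)/(s^2 + 3s + 2) is an illustrative choice), one can build (A, B, C) and confirm C(sI - A)^{-1}B reproduces g(s):

```python
import numpy as np

# Companion-form realization of g(s) = (N1 s + N2) / (s^2 + alpha1 s + alpha2).
alpha1, alpha2 = 3.0, 2.0
N1, N2 = 1.0, 3.0                    # g(s) = (s + 3)/(s^2 + 3s + 2)

A = np.array([[-alpha1, -alpha2],
              [1.0, 0.0]])
B = np.array([[1.0],
              [0.0]])
C = np.array([[N1, N2]])

def G(s):
    return (C @ np.linalg.inv(s * np.eye(2) - A) @ B)[0, 0]

for s in [1.0, 2.0j, -0.5 + 1.0j]:
    assert np.isclose(G(s), (s + 3.0) / (s**2 + 3.0 * s + 2.0))
```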
4.5 Solution of linear time-varying (LTV) equations
1. Model:
\dot{x} = A(t)x + B(t)u, \qquad y = C(t)x + D(t)u, \qquad x(0) = x_0,
with time-varying parameters A(t), B(t), C(t), D(t).
What are its solutions?
(i) LTI case: A(t) = A, B(t) = B, C(t) = C, D(t) = D, and
x(t) = e^{At}x_0 + \int_0^t e^{A(t-\tau)}Bu(\tau)\,d\tau.
(ii) LTV case:
x(t) \ne e^{\int_0^t A(\tau)\,d\tau}x_0 + \int_0^t e^{\int_\tau^t A(s)\,ds}B(\tau)u(\tau)\,d\tau \ \text{in most cases}.
Why? Take u(t) \equiv 0 and suppose the equality x(t) = e^{\int_0^t A(\tau)\,d\tau}x_0 held. With
e^{\int_0^t A(\tau)\,d\tau} = I + \int_0^t A(\tau)\,d\tau + \frac{1}{2!}\left(\int_0^t A(\tau)\,d\tau\right)\left(\int_0^t A(\tau)\,d\tau\right) + \cdots,
\frac{d}{dt}e^{\int_0^t A(\tau)\,d\tau} = 0 + A(t) + \frac{1}{2!}\left(A(t)\int_0^t A(\tau)\,d\tau + \int_0^t A(\tau)\,d\tau\,A(t)\right) + \cdots,
but
A(t)\int_0^t A(\tau)\,d\tau \ne \int_0^t A(\tau)\,d\tau\,A(t) \ \text{in most cases}.
 0
0 0  t 0 0 
1 2
,
d


2  
2
 2 t
 t t  0   
E.g. At   
0 0   0
t
1 2
At  0 A d  
2
t
t

  2 t

 0
 A d At    1 t 2
 2

0   0
1 3   1 4
t
t
3   2
0 
1 5
t
3 
0  0 0   0
1 3 
 1
t   t t 2   t 4
3 
3
0 
1 5
t
3 


t
0

 

So At  0 A d  0 A d At  and
t
0 
1 3
t
3 
t
d  0 A d
e
dt 
t
A d

  At e 0
.

t
t
A d
xt   e 0
x0 is not a solution to the T-V equation.
2. Zero-input solutions of a time-varying equation
\dot{x} = A(t)x, \qquad x(t_0) = x_0 \in \mathbb{R}^n.
Consider n initial states x_0^{(1)}, \dots, x_0^{(n)}: a set of linearly independent vectors, a basis.
(1) The initial state x(t_0) = x_0^{(i)} produces the solution x(t) = x^{(i)}(t).
Linearity (verify it): the initial state x(t_0) = \alpha_i x_0^{(i)} + \alpha_j x_0^{(j)} produces
x(t) = \alpha_i x^{(i)}(t) + \alpha_j x^{(j)}(t); in particular, x(t_0) = 0 \Rightarrow x(t) \equiv 0.
More generally, if x(t_0) = [x_0^{(1)}, \dots, x_0^{(n)}]\begin{bmatrix} \alpha_1 \\ \vdots \\ \alpha_n \end{bmatrix}, then
x(t) = [x^{(1)}(t), \dots, x^{(n)}(t)]\begin{bmatrix} \alpha_1 \\ \vdots \\ \alpha_n \end{bmatrix} \triangleq X(t)\begin{bmatrix} \alpha_1 \\ \vdots \\ \alpha_n \end{bmatrix}.
Fundamental matrix: X(t) = [x^{(1)}(t), \dots, x^{(n)}(t)].
X(t) is non-singular for all t. Why?
(i) t = t_0: X(t_0) = [x_0^{(1)}, \dots, x_0^{(n)}] is non-singular by the choice of basis.
(ii) t > t_0: suppose X(t_1) were singular for some t_1 > t_0. Then there exists
\alpha = [\alpha_1, \dots, \alpha_n]^T \ne 0 with X(t_1)\alpha = 0. Consider the solution x(t) = X(t)\alpha:
it satisfies x(t_1) = 0. By linearity and uniqueness, the solution through x(t_1) = 0 is x(t) \equiv 0
(the ODE can also be solved backwards from t_1), so x(t_0) = X(t_0)\alpha = 0. Since X(t_0) is
non-singular, \alpha = 0: a contradiction.
(iii) t < t_0: the same argument, since an ODE can be solved backwards.
3. State transition matrix
Since X(t_0) is non-singular, X(t) is non-singular for all t. So we can define the state transition matrix
\Phi(t, t_0) = X(t)X^{-1}(t_0). \quad (t < t_0 \ \text{is possible.})
(1) \Phi(t, t_1)\Phi(t_1, t_0) = X(t)X^{-1}(t_1)X(t_1)X^{-1}(t_0) = X(t)X^{-1}(t_0) = \Phi(t, t_0);
t, t_1, t_0 can have arbitrary relationships.
(2) \Phi(t, t) = X(t)X^{-1}(t) = I.
(3) \frac{\partial}{\partial t}\Phi(t, t_0) = ?
\frac{d}{dt}X(t) = \left[\frac{d}{dt}x^{(1)}(t), \dots, \frac{d}{dt}x^{(n)}(t)\right] = [A(t)x^{(1)}(t), \dots, A(t)x^{(n)}(t)] = A(t)X(t),
\frac{\partial}{\partial t}\Phi(t, t_0) = \left(\frac{d}{dt}X(t)\right)X^{-1}(t_0) = A(t)X(t)X^{-1}(t_0) = A(t)\Phi(t, t_0).
Example: \dot{x}(t) = \begin{bmatrix} 0 & 0 \\ t & 0 \end{bmatrix}x(t):
\dot{x}_1 = 0 \Rightarrow x_1(t) = x_1(t_0); \qquad \dot{x}_2 = tx_1(t) = tx_1(t_0) \Rightarrow x_2(t) = x_2(t_0) + \frac{1}{2}(t^2 - t_0^2)x_1(t_0).
(i) x^{(1)}(t_0) = \begin{bmatrix} 1 \\ 0 \end{bmatrix} \Rightarrow x^{(1)}(t) = \begin{bmatrix} 1 \\ \frac{1}{2}(t^2 - t_0^2) \end{bmatrix}; \qquad (ii) \ x^{(2)}(t_0) = \begin{bmatrix} 0 \\ 1 \end{bmatrix} \Rightarrow x^{(2)}(t) = \begin{bmatrix} 0 \\ 1 \end{bmatrix}.
X(t) = \begin{bmatrix} 1 & 0 \\ \frac{1}{2}(t^2 - t_0^2) & 1 \end{bmatrix}, \qquad X(t_0) = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix},
\Phi(t, t_0) = X(t)X^{-1}(t_0) = \begin{bmatrix} 1 & 0 \\ \frac{1}{2}(t^2 - t_0^2) & 1 \end{bmatrix}.
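The closed-form transition matrix of this example can be cross-checked by integrating the matrix ODE dX/dt = A(t)X numerically (the RK4 scheme and step size are our own illustrative choices):

```python
import numpy as np

# Integrate Xdot = A(t) X for A(t) = [[0, 0], [t, 0]], starting from X(t0) = I,
# so that X(t) = Phi(t, t0); compare with Phi = [[1, 0], [(t^2 - t0^2)/2, 1]].
def A(t):
    return np.array([[0.0, 0.0],
                     [t, 0.0]])

t0, t1, h = 1.0, 2.0, 1e-3
X = np.eye(2)
t = t0
while t < t1 - 1e-12:                 # classical RK4 steps
    k1 = A(t) @ X
    k2 = A(t + h / 2) @ (X + h / 2 * k1)
    k3 = A(t + h / 2) @ (X + h / 2 * k2)
    k4 = A(t + h) @ (X + h * k3)
    X = X + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    t += h

Phi = np.array([[1.0, 0.0],
                [(t1**2 - t0**2) / 2, 1.0]])
assert np.allclose(X, Phi, atol=1e-8)
```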
4. Solution of an LTV equation
\dot{x} = A(t)x + B(t)u, \qquad y = C(t)x + D(t)u, \qquad x(t_0) = x_0.
x(t) = \Phi(t, t_0)x_0 + \int_{t_0}^{t}\Phi(t, \tau)B(\tau)u(\tau)\,d\tau. Verify it:
(i) x(t_0) = \Phi(t_0, t_0)x_0 + \int_{t_0}^{t_0}\Phi(t_0, \tau)B(\tau)u(\tau)\,d\tau = Ix_0 + 0 = x_0.
(ii) \frac{d}{dt}x(t) = \frac{\partial}{\partial t}\Phi(t, t_0)x_0 + \int_{t_0}^{t}\frac{\partial}{\partial t}\Phi(t, \tau)B(\tau)u(\tau)\,d\tau + \Phi(t, t)B(t)u(t)
= A(t)\Phi(t, t_0)x_0 + A(t)\int_{t_0}^{t}\Phi(t, \tau)B(\tau)u(\tau)\,d\tau + B(t)u(t)
= A(t)\left[\Phi(t, t_0)x_0 + \int_{t_0}^{t}\Phi(t, \tau)B(\tau)u(\tau)\,d\tau\right] + B(t)u(t) = A(t)x(t) + B(t)u(t).
The output:
y(t) = C(t)\Phi(t, t_0)x_0 + C(t)\int_{t_0}^{t}\Phi(t, \tau)B(\tau)u(\tau)\,d\tau + D(t)u(t).
With x(t_0) = 0,
y(t) = \int_{t_0}^{t}\left[C(t)\Phi(t, \tau)B(\tau) + D(t)\delta(t - \tau)\right]u(\tau)\,d\tau = \int_{t_0}^{t}G(t, \tau)u(\tau)\,d\tau,
G(t, \tau) = C(t)\Phi(t, \tau)B(\tau) + D(t)\delta(t - \tau).
4.5.1 Discrete-time case
x[k+1] = A[k]x[k] + B[k]u[k], \qquad y[k] = C[k]x[k] + D[k]u[k].
State transition matrix:
\Phi[k, k_0] = A[k-1]A[k-2]\cdots A[k_0] \ (k > k_0), \qquad \Phi[k, k] = I.
Solution:
x[k] = \Phi[k, k_0]x[k_0] + \sum_{m=k_0}^{k-1}\Phi[k, m+1]B[m]u[m].
\Phi[k, k_0] could be singular.
4.6 Equivalent time-varying equations
1. Time-varying transformation:
\dot{x} = A(t)x + B(t)u, \qquad y = C(t)x + D(t)u.
Let \bar{x}(t) = P(t)x(t) with P(t) non-singular for all t. Then x(t) = P^{-1}(t)\bar{x}(t).
(i) \frac{d}{dt}P^{-1}(t) = ?
P(t)P^{-1}(t) = I \Rightarrow \frac{d}{dt}\left[P(t)P^{-1}(t)\right] = \left(\frac{d}{dt}P(t)\right)P^{-1}(t) + P(t)\left(\frac{d}{dt}P^{-1}(t)\right) = 0
\Rightarrow \frac{d}{dt}P^{-1}(t) = -P^{-1}(t)\left(\frac{d}{dt}P(t)\right)P^{-1}(t).
(ii) \dot{\bar{x}} = \dot{P}(t)x + P(t)\dot{x} = \dot{P}(t)x + P(t)\left[A(t)x + B(t)u\right]
= \left[\dot{P}(t)P^{-1}(t) + P(t)A(t)P^{-1}(t)\right]\bar{x} + P(t)B(t)u,
so
\bar{A}(t) = \left[\dot{P}(t) + P(t)A(t)\right]P^{-1}(t), \quad \bar{B}(t) = P(t)B(t), \quad \bar{C}(t) = C(t)P^{-1}(t), \quad \bar{D}(t) = D(t).
2. Does a bounded \bar{x} imply a bounded x? Not necessarily:
\dot{x}(t) = x(t) \Rightarrow x(t) = e^t x_0 \Rightarrow \lim_{t \to \infty}|x(t)| = \infty,
\bar{x}(t) = e^{-2t}x(t) = e^{-t}x_0 \Rightarrow \lim_{t \to \infty}\bar{x}(t) = 0.
Source of the problem: the inverse transformation x(t) = e^{2t}\bar{x}(t) is unbounded.
3. Lyapunov transformation: both P(t) and P^{-1}(t) are bounded for all t.
4.7 Time-varying realizations
y(t) = \int_{t_0}^{t}G(t, \tau)u(\tau)\,d\tau.
Can it be realized by a time-varying state equation?
Theorem 4.5 G(t, \tau) is realizable \Leftrightarrow G(t, \tau) = M(t)N(\tau) + D(t)\delta(t - \tau).
Proof: (i) G(t, \tau) is realizable. Then, for \dot{x} = A(t)x + B(t)u, \ y = C(t)x + D(t)u,
G(t, \tau) = C(t)\Phi(t, \tau)B(\tau) + D(t)\delta(t - \tau), \qquad \Phi(t, \tau) = X(t)X^{-1}(\tau),
so G(t, \tau) = \left[C(t)X(t)\right]\left[X^{-1}(\tau)B(\tau)\right] + D(t)\delta(t - \tau), which has the required form.
(ii) G(t, \tau) = M(t)N(\tau) + D(t)\delta(t - \tau). Then take
\dot{x} = N(t)u(t), \qquad y = M(t)x(t) + D(t)u(t) \qquad (A(t) = 0 \Rightarrow \Phi(t, \tau) = I):
x(t) = \int_{t_0}^{t}N(\tau)u(\tau)\,d\tau,
y(t) = M(t)\int_{t_0}^{t}N(\tau)u(\tau)\,d\tau + D(t)u(t) = \int_{t_0}^{t}\left[M(t)N(\tau) + D(t)\delta(t - \tau)\right]u(\tau)\,d\tau.
Homework: 4.2, 4.5, 4.8, 4.12, 4.15, 4.17, 4.19, 4.20, 4.22, 4.26
Chapter 5 Stability
5.1 Introduction
x0
y (t )
System
u(t ), t  0
x0
x0  0
y (t )
System
y (t )
System
u(t )  0, t  0
u(t ), t  0
Zero-state response
Zero-input response
x0   x0   0
u t   0  u t 
Response=zero-input response + zero-state response
5.2 Input-Output stability of LTI systems (zero-state response)
1. Definitions
(1) Model:
y(t) = ∫_0^t u(τ) g(t − τ) dτ = ∫_0^t u(t − τ) g(τ) dτ   (causal)
(2) Boundedness:
Bounded input:  |u(t)| ≤ u_m < ∞
Bounded output: |y(t)| ≤ y_m < ∞
(3) BIBO stability (scalar case, Theorem 5.1, pp. 122)
Bounded inputs yield bounded outputs ⟺ ∫_0^∞ |g(t)| dt ≤ M < ∞
Proof: (i) ∫_0^∞ |g(t)| dt ≤ M < ∞ ⇒ ?
|y(t)| = |∫_0^t u(t − τ) g(τ) dτ| ≤ ∫_0^t u_m |g(τ)| dτ = ( ∫_0^t |g(τ)| dτ ) u_m ≤ M u_m =: y_m < ∞
(ii) BIBO stable ⇒ ?
We know |y(t)| ≤ y_m < ∞. Suppose ∫_0^∞ |g(t)| dt = ∞. Then there exists T such that
∫_0^T |g(t)| dt ≥ 2 y_m
Let u(t) = sgn g(T − t) for 0 ≤ t ≤ T, and u(t) = 0 otherwise. Then
y(T) = ∫_0^T u(T − τ) g(τ) dτ = ∫_0^T |g(τ)| dτ ≥ 2 y_m > y_m
We get a contradiction. So ∫_0^∞ |g(t)| dt ≤ M < ∞.
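Theorem 5.1 can be sanity-checked numerically. The sketch below uses a hypothetical impulse response g(t) = e^{−t} (my own example, not from the text), whose absolute integral is finite, and convolves it with a bounded input:

```python
import numpy as np

# Hypothetical impulse response g(t) = e^{-t}: absolutely integrable, hence BIBO stable.
dt = 1e-3
t = np.arange(0.0, 20.0, dt)
g = np.exp(-t)

M = np.trapz(np.abs(g), t)               # approximates int_0^oo |g(t)| dt, about 1
u = np.ones_like(t)                      # bounded input: |u(t)| <= u_m = 1
y = np.convolve(g, u)[: len(t)] * dt     # zero-state response y(t) = int g(tau) u(t - tau) dtau
print(M, y.max())                        # the output stays close to the bound M * u_m
```

The output indeed remains bounded, consistent with the bound |y(t)| ≤ M·u_m derived in the proof.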
2. The outputs under several special inputs (Theorem 5.2)
For a BIBO stable system with impulse response g(t):
(1) Step input: the output converges to a constant.
(2) Sinusoidal input: the output converges to a sinusoid of the same frequency.
(3) For an input that does not converge to 0, the output converges to the corresponding input mode.
3. Proper rational ĝ(s):
Theorem 5.3 (pp. 124): BIBO stable ⟺ all poles are in the open left half plane.
ĝ(s) = ĝ(∞) + ĝ_sp(s)
ĝ_sp(s) = k_{1,1}/(s − p1) + ⋯ + k_{1,n1}/(s − p1)^{n1} + ⋯
g(t) = L⁻¹[ĝ(s)] = ĝ(∞)δ(t) + k_{1,1}e^{p1 t} + ⋯ + k_{1,n1} t^{n1−1}e^{p1 t}/(n1 − 1)! + ⋯
4. BIBO stability of MIMO systems
Theorem 5.M1 (pp. 125): G(t) = [g_ij(t)] is BIBO stable ⟺ every g_ij(t) is BIBO stable.
Proof: (1) G(t) = [g_ij(t)] is BIBO stable ⇒ ?
Consider only y_i(t), with u_l(t) = 0 for l ≠ j. We get a SISO system with input u_j(t) and output y_i(t). So g_ij(t) must be BIBO stable.
(2) Every g_ij(t) is BIBO stable ⇒ ?
y_i(t) = ∫_0^t g_{i1}(τ) u_1(t − τ) dτ + ⋯ + ∫_0^t g_{im}(τ) u_m(t − τ) dτ
Each term is the output of a BIBO stable SISO system, so the boundedness of the outputs of all SISO subsystems guarantees that of the overall system.
It is not easy to check all g_ij(t) individually. Instead we can check whether
∫_0^∞ ‖G(t)‖ dt ≤ M < ∞
where ‖·‖ represents any norm (due to the equivalence of all norms).
5. BIBO stability of a state space equation
  ẋ(t) = Ax(t) + Bu(t)
  y(t) = Cx(t) + Du(t)
Ĝ(s) = C(sI − A)⁻¹B + D = [ĝ_ij(s)] is proper rational:
Ĝ(s) = C Adj(sI − A) B / det(sI − A) + D
All poles of ĝ_ij(s) are roots of det(sI − A) = 0, i.e., eigenvalues of A.
If A is stable, the system is BIBO stable. Due to possible pole-zero cancellation, a BIBO stable system need not have a stable A. Example 5.2 (pp. 126).
5.2.1 Discrete-time case
1. Model: y[k] = Σ_{m=0}^{k} g[k − m]u[m] = Σ_{m=0}^{k} g[m]u[k − m]
BIBO stability (bounded input ⇒ bounded output)
⟺ Σ_{k=0}^{∞} |g[k]| ≤ M < ∞
⟺ all poles of ĝ(z) have magnitude strictly less than 1.
5.3 Internal stability
1. Model: zero-input response
ẋ = Ax,  x(0) = x0
The state x(t) is treated as the response: x(t) = e^{At}x0.
Equilibrium: 0 is the desired working point.
2. Marginal stability, or stability in the sense of Lyapunov
(1) Definition: every finite x0 excites a bounded state response x(t):
‖x0‖ ≤ M0 < ∞ ⇒ ‖x(t)‖ ≤ M < ∞, ∀t ≥ 0
(2) A necessary and sufficient condition:
Re λi ≤ 0 for all i, and every eigenvalue with Re λi = 0 has only diagonal (1×1) Jordan blocks,
where the eigenvalues of A are denoted λi (i = 1, …, n).
Proof: The Jordan form of A:
J = P⁻¹AP = diag(J1, J2, …, Jr),  Ji = diag(J_{i,1}, J_{i,2}, …, J_{i,ni}),  Ji: mi × mi
J_{i,p} = [ λi 1  0  ⋯ 0
            0  λi 1  ⋯ 0
            ⋮          ⋱ 1
            0  0  0  ⋯ λi ]
The similarity transformation x̄ = P⁻¹x gives
x̄̇ = Jx̄,  x̄(0) = P⁻¹x0,  x̄(t) = e^{Jt}x̄(0)
x(t) is bounded ⟺ x̄(t) is bounded.
e^{Jt} = diag(e^{J1 t}, e^{J2 t}, …, e^{Jr t}),  e^{Ji t} = diag(e^{J_{i,1}t}, …, e^{J_{i,ni}t}),  J_{i,p}: n_{i,p} × n_{i,p}
e^{J_{i,p}t} = [ e^{λi t}  t e^{λi t}  t²e^{λi t}/2!  ⋯  t^{n_{i,p}−1}e^{λi t}/(n_{i,p}−1)!
                 0         e^{λi t}    t e^{λi t}     ⋯  t^{n_{i,p}−2}e^{λi t}/(n_{i,p}−2)!
                 ⋮                                    ⋱  ⋮
                 0         0           0              ⋯  e^{λi t} ]
(i) When Re λi < 0: lim_{t→∞} t^m e^{λi t} = 0 for every m.
(ii) When Re λi = 0: |e^{λi t}| = 1 for all t ≥ 0, while t^m e^{λi t} is unbounded for m ≥ 1.
(1) (i) + (ii) ⇒ marginal stability: under any initial state x0, x(t) is bounded.
(2) Marginal stability ⇒ (i) + (ii):
Choose x̄(0) = ei (the i-th entry is 1 and the others are 0). Then every column of e^{Jt}, which is composed of terms e^{λi t}, t e^{λi t}, …, must be bounded. So we get (i) and (ii).
3. Asymptotic stability
(1) Definition: stable + lim_{t→∞} x(t) = 0
lim_{t→∞} x(t) = 0 ⟺ lim_{t→∞} x̄(t) = 0
(2) A necessary and sufficient condition: Re λi < 0 for all i.
5.3.1 D-T case
x[k+1] = Ax[k],  x[0] = x0,  x[k] = A^k x0
The Jordan form J = P⁻¹AP = diag(J1, …, Jr), Ji = diag(J_{i,1}, …, J_{i,ni}), is as in the continuous-time case, and A^k is determined by the powers of the Jordan blocks:
J_{i,p}^k = [ λi^k  kλi^{k−1}  k(k−1)λi^{k−2}/2!  ⋯  k(k−1)⋯(k−n_{i,p}+2)λi^{k−n_{i,p}+1}/(n_{i,p}−1)!
              0     λi^k       kλi^{k−1}          ⋯  ⋮
              ⋮                                   ⋱
              0     0          0                  ⋯  λi^k ]
So stability is determined by the terms λi^k, kλi^{k−1}, …, k(k−1)⋯(k−n_{i,p}+2)λi^{k−n_{i,p}+1}:
Marginal stability ⟺ |λi| ≤ 1 for all i, and every eigenvalue with |λi| = 1 has only diagonal (1×1) Jordan blocks.
Asymptotic stability ⟺ |λi| < 1 for all i.
5.4 Lyapunov theorem
1. Motivations:
(1) In the previous sections we determined the stability of LTI systems by analytically solving an ODE (ordinary differential equation) and then examining the obtained solution. For general nonlinear systems it is difficult, or impossible, to get the desired analytic solution. Then how can we determine stability?
(2) Lyapunov (direct) method ("direct" means that we don't need to solve the ODE first; we determine stability "directly" from the ODE):
ẋ(t) = f(x(t), t)
(i) Lyapunov function V(x, t):
V(x, t) > 0 for x ≠ 0;  V(x, t) = 0 ⟺ x = 0
K1(‖x‖) ≤ V(x, t) ≤ K2(‖x‖)
where K1(·), K2(·) are monotonically increasing positive functions.
(ii)
(d/dt)V(x, t) ≤ 0 ⇒ marginally stable
(d/dt)V(x, t) < 0 for x ≠ 0 ⇒ asymptotically stable, or
(d/dt)V(x, t) ≤ 0 together with [(d/dt)V(x, t) = 0 ⇒ x(t) = 0] ⇒ asymptotically stable
2. Applying the Lyapunov method to a linear system ẋ = Ax:
(1) Choose P > 0 and V(x) = xᵀPx.
(2) (d/dt)V(x) = xᵀ(PA + AᵀP)x. Let −Q = PA + AᵀP:
If Q > 0: asymptotically stable.
If Q ≥ 0: marginally stable.
The key is the Lyapunov equation
AᵀP + PA = −Q
(3) Conversely, if for a given Q the Lyapunov equation has a positive definite solution P, then:
If Q > 0: asymptotically stable.
If Q ≥ 0: marginally stable.
3. Lyapunov equation
AᵀM + MA = −N
Theorem 5.5 (pp. 132): Re λi(A) < 0 (i = 1, …, n) ⟺ for every N > 0, M exists, is unique, and M > 0.
Proof: (i) Re λi(A) < 0 (i = 1, …, n) ⇒ ?
M = ∫_0^∞ e^{Aᵀt} N e^{At} dt > 0   (why?)
AᵀM + MA = ∫_0^∞ ( Aᵀe^{Aᵀt}Ne^{At} + e^{Aᵀt}Ne^{At}A ) dt
         = ∫_0^∞ (d/dt)( e^{Aᵀt}Ne^{At} ) dt
         = e^{Aᵀt}Ne^{At} |_0^∞
         = 0 − N      ( e^{At} → 0 as t → ∞;  e^{At}|_{t=0} = I )
So M is a solution. Under Re λi(A) < 0 (i = 1, …, n), the Lyapunov equation has a unique solution. An alternative proof is shown in Theorem 5.6 (pp. 134).
(ii) [for every N > 0: M exists, is unique, and M > 0] ⇒ ?
Suppose Av = λv (λ, v can be complex, v ≠ 0). Then v*Aᵀ = λ*v*.
Left-multiply AᵀM + MA = −N by v* and right-multiply it by v. We get
v*AᵀMv + v*MAv = −v*Nv
λ*v*Mv + λv*Mv = −v*Nv
(λ + λ*) v*Mv = −v*Nv
With M > 0 and N > 0: 2 Re λ = −(v*Nv)/(v*Mv) < 0, i.e., Re λ < 0.
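Theorem 5.5 can be checked numerically with scipy's continuous Lyapunov solver (a sketch with a made-up stable A):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

A = np.array([[-1.0, 1.0], [0.0, -2.0]])  # eigenvalues -1, -2: Re lambda_i < 0
N = np.eye(2)                             # N > 0

# Solve A^T M + M A = -N  (scipy solves a X + X a^H = q, so pass a = A^T, q = -N).
M = solve_continuous_lyapunov(A.T, -N)

residual = A.T @ M + M @ A + N            # should be ~0
eigs = np.linalg.eigvalsh((M + M.T) / 2)  # M is symmetric positive definite
print(np.abs(residual).max(), eigs)
```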
5.4.1 D-T case
We again need a Lyapunov function:
V(x[k], k) > 0 for x ≠ 0;  V(x[k], k) = 0 ⟺ x = 0
K1(‖x‖) ≤ V(x, k) ≤ K2(‖x‖)
ΔV[k] = V(x[k+1], k+1) − V(x[k], k)
ΔV[k] ≤ 0 ⇒ marginally stable
ΔV[k] < 0 for x ≠ 0 ⇒ asymptotically stable, or
ΔV[k] ≤ 0 together with [ΔV[k] = 0 ⇒ x[k] = 0] ⇒ asymptotically stable
For x[k+1] = Ax[k], take V(x) = xᵀMx. We can get Theorems 5.D5, 5.D6.
Note that the solution of M − AᵀMA = N is
M = Σ_{k=0}^{∞} (Aᵀ)^k N A^k
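The series formula for the discrete Lyapunov equation can be checked against scipy's solver (a sketch with a made-up Schur-stable A):

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

A = np.array([[0.5, 0.2], [0.0, 0.3]])   # eigenvalues 0.5, 0.3: |lambda_i| < 1
N = np.eye(2)

# scipy solves X = a X a^H + q; with a = A^T, q = N this is  M = A^T M A + N.
M = solve_discrete_lyapunov(A.T, N)

# Truncated series  M = sum_k (A^T)^k N A^k; terms decay geometrically, so 200 is plenty.
M_series = sum(
    np.linalg.matrix_power(A.T, k) @ N @ np.linalg.matrix_power(A, k) for k in range(200)
)
print(np.abs(M - M_series).max())        # ~0
```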
5.5 Stability of LTV systems
1. Model (input-output):
y(t) = ∫_{t0}^{t} g(t, τ) u(τ) dτ
2. BIBO stability conditions:
(1) SISO: BIBO stability ⟺ ∫_{t0}^{t} |g(t, τ)| dτ ≤ M < ∞ for all t and t0
(2) MIMO: BIBO stability ⟺ ∫_{t0}^{t} ‖G(t, τ)‖ dτ ≤ M < ∞ for all t and t0
(3) State space model:
  ẋ = A(t)x + B(t)u
  y = C(t)x + D(t)u
The zero-state response is
y(t) = ∫_{t0}^{t} G(t, τ) u(τ) dτ,   G(t, τ) = C(t)Φ(t, τ)B(τ) + D(t)δ(t − τ)
BIBO stability ⟺ ‖D(t)‖ ≤ M1 < ∞ and ∫_{t0}^{t} ‖C(t)Φ(t, τ)B(τ)‖ dτ ≤ M2 < ∞ for all t and t0.
3. Internal stability
(1) Zero-input response: ẋ = A(t)x, x(t0) = x0. Response: x(t) = Φ(t, t0)x(t0).
(2) Marginally stable ⟺ ‖Φ(t, t0)‖ ≤ M < ∞ for all t and t0.
Asymptotically stable ⟺ ‖Φ(t, t0)‖ ≤ M < ∞ for all t and t0, and lim_{t→∞} ‖Φ(t, t0)‖ = 0 for every t0.
Eigenvalues cannot be used to determine the stability of LTV systems.
Example 5.5 (pp. 139):
ẋ = [ −1  e^{2t}
       0  −1    ] x
The eigenvalues are −1, −1 for every t, yet
Φ(t, 0) = [ e^{−t}  0.5(e^{t} − e^{−t})
            0       e^{−t}             ]
is unbounded, so the system is not marginally stable.
(3) Marginal stability and asymptotic stability are invariant under any Lyapunov transformation.
ẋ = A(t)x,  x̄(t) = P(t)x(t),  x̄(t) = Φ̄(t, t0)x̄(t0)
Φ̄(t, τ) = P(t)Φ(t, τ)P⁻¹(τ),  Φ(t, τ) = P⁻¹(t)Φ̄(t, τ)P(τ)
Since ‖P(t)‖ ≤ L < ∞ and ‖P⁻¹(t)‖ ≤ L′ < ∞ for all t, Φ̄ is bounded (or tends to zero) iff Φ is.
Homework: 5.6, 5.11, 5.13, 5.14, 5.17, 5.20, 5.21, 5.22, 5.23
Chapter 6 Controllability and observability
6.1 Introduction
Input u → State x → Output y
Controllability, reachability: u → x
Observability: x → y
6.2 Controllability
1. Model:
ẋ(t) = Ax(t) + Bu(t),  x ∈ Rⁿ, u ∈ Rᵖ
x(t) = e^{At}x(0) + ∫_0^t e^{A(t−τ)}Bu(τ)dτ
2. Reachability:
(i) Require x(0) = 0:
x(t) = ∫_0^t e^{A(t−τ)}Bu(τ)dτ
(ii) Reachable state x1: x1 is reachable if there exist a finite t1 and an input u(t), 0 ≤ t ≤ t1, such that x(0) = 0 is steered to x(t1) = x1.
(iii) Reachable set: the set of all reachable states
S_R = { x1 | x1 = ∫_0^t e^{A(t−τ)}Bu(τ)dτ for some t and some u(·) }
S_R is a linear space. Why?
The system is reachable iff S_R = Rⁿ.
Structure of S_R:
e^{A(t−τ)} = Σ_{l=0}^{n−1} c_l(t−τ)A^l   (Cayley-Hamilton Theorem)
S_R:
∫_0^t e^{A(t−τ)}Bu(τ)dτ = ∫_0^t Σ_{l=0}^{n−1} c_l(t−τ)A^l Bu(τ)dτ
  = Σ_{l=0}^{n−1} A^l B ∫_0^t c_l(t−τ)u(τ)dτ ∈ span[B, AB, …, A^{n−1}B] = span(CTRB)
(a) It is necessary for reachability (S_R = Rⁿ) that span(CTRB) = Rⁿ.
(b) What is a sufficient condition for reachability?
span(CTRB) = Rⁿ ⇒? reachability
Claim: span(CTRB) = Rⁿ ⇒ Wc(t1) is non-singular for every t1 > 0, where
Wc(t1) = ∫_0^{t1} e^{A(t1−τ)}BBᵀe^{Aᵀ(t1−τ)}dτ   (the controllability/reachability Gramian)
Proof: Suppose Wc(t1) is singular for some t1. Then there must exist a non-zero vector v with
Wc(t1)v = 0,  vᵀWc(t1)v = 0
∫_0^{t1} vᵀe^{A(t1−τ)}BBᵀe^{Aᵀ(t1−τ)}v dτ = ∫_0^{t1} ‖Bᵀe^{Aᵀ(t1−τ)}v‖² dτ = 0
f(t) := vᵀe^{At}B = 0,  t ∈ [0, t1]
(d^l/dt^l)f(t) = 0,  l = 0, 1, …, n−1;  t ∈ [0, t1]
vᵀe^{At}A^l B = 0,  t ∈ [0, t1]
Taking t = 0:  vᵀ[B, AB, A²B, …, A^{n−1}B] = 0 with v ≠ 0
⇒ rank[B, AB, …, A^{n−1}B] < n,  dim span(CTRB) < n. Contradiction!
When span(CTRB) = Rⁿ, take any t1 > 0 and any target state x1, and let
u(t) = Bᵀe^{Aᵀ(t1−t)}Wc⁻¹(t1)x1,  t ∈ [0, t1]
We get
x(t1) = ∫_0^{t1} e^{A(t1−τ)}BBᵀe^{Aᵀ(t1−τ)}Wc⁻¹(t1)x1 dτ
      = ( ∫_0^{t1} e^{A(t1−τ)}BBᵀe^{Aᵀ(t1−τ)}dτ ) Wc⁻¹(t1)x1
      = Wc(t1)Wc⁻¹(t1)x1 = x1
So span(CTRB) = Rⁿ ⇒ reachability.
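The rank test on span(CTRB) can be coded directly (a minimal sketch; `ctrb` is my own helper and the matrices are made up):

```python
import numpy as np

def ctrb(A, B):
    """Controllability/reachability matrix [B, AB, ..., A^{n-1}B]."""
    n = A.shape[0]
    blocks = [B]
    for _ in range(n - 1):
        blocks.append(A @ blocks[-1])
    return np.hstack(blocks)

A = np.array([[1.0, 1.0], [0.0, 2.0]])
B = np.array([[0.0], [1.0]])
print(np.linalg.matrix_rank(ctrb(A, B)))   # 2 = n, so span(CTRB) = R^n: reachable
```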
(c) Other equivalent reachability conditions:
rank[B, AB, A²B, …, A^{n−1}B] = n ⟺ rank[λI − A, B] = n for every λ
Proof: (i) rank[B, AB, …, A^{n−1}B] = n ⇒? rank[λI − A, B] = n.
When λ is not an eigenvalue of A: det(λI − A) ≠ 0, so the rank is n automatically.
When λ is an eigenvalue of A: suppose rank[λI − A, B] < n. Then there exists v ≠ 0 such that
vᵀ[λI − A, B] = 0,  i.e.,  vᵀB = 0 and vᵀA = λvᵀ
vᵀAB = λvᵀB = 0, …, vᵀA^{n−1}B = λ^{n−1}vᵀB = 0
vᵀ[B, AB, A²B, …, A^{n−1}B] = 0
⇒ rank[B, AB, …, A^{n−1}B] < n. Contradiction!
(ii) rank[λI − A, B] = n for every λ ⇒? rank[B, AB, …, A^{n−1}B] = n.
Suppose rank[B, AB, …, A^{n−1}B] < n. Then there exists v ≠ 0 such that
vᵀ[B, AB, …, A^{n−1}B] = 0,  i.e.,  vᵀB = 0, …, vᵀA^{n−1}B = 0
vᵀA^l B = 0 for every l ∈ N   (Cayley-Hamilton Theorem)
V := { v | vᵀA^l B = 0, ∀l ∈ N } is a linear space (it is closed under addition and scalar multiplication).
V is not empty; it has a basis v1, v2, …, vs: V = span{v1, v2, …, vs}.
Any vector v ∈ V has a coordinate vector y = (y1, …, ys)ᵀ: v = Σ_{j=1}^{s} y_j v_j = [v1, …, vs]y.
We can verify that Aᵀv ∈ V for any v ∈ V:
(Aᵀv)ᵀA^l B = vᵀA^{l+1}B = 0,  ∀l ∈ N
So Aᵀv_j ∈ V, j = 1, …, s:
Aᵀv_j = [v1, …, vs]z_j,   Aᵀ[v1, …, vs] = [v1, …, vs]Z,   Z = [z1, …, zs] ∈ R^{s×s}
Z has at least one eigenvalue λ and a non-zero eigenvector z: Zz = λz, z ≠ 0.
Define v = [v1, …, vs]z ≠ 0. Then
Aᵀv = Aᵀ[v1, …, vs]z = [v1, …, vs]Zz = λ[v1, …, vs]z = λv
so vᵀA = λvᵀ; and v ∈ V ⇒ vᵀB = 0. Hence
vᵀ[λI − A, B] = 0 with v ≠ 0 ⇒ rank[λI − A, B] < n. Contradiction!
So rank[B, AB, A²B, …, A^{n−1}B] = n ⟺ rank[λI − A, B] = n for every λ.
3. Controllability
(1) Controllable state x1: for a given x1 ∈ Rⁿ, there exist a finite t1 and an input u(t), 0 ≤ t ≤ t1, such that x(0) = x1 is steered to x(t1) = 0.
The controllable set: S_c = { x1 | ∃ finite t1 and u(t), 0 ≤ t ≤ t1: x(0) = x1, x(t1) = 0 }
The system is controllable ⟺ S_c = Rⁿ.
(2) Controllability conditions:
0 = x(t1) = e^{At1}x1 + ∫_0^{t1} e^{A(t1−τ)}Bu(τ)dτ
          = e^{At1}x1 + e^{At1}∫_0^{t1} e^{−Aτ}Bu(τ)dτ
          = e^{At1}( x1 + ∫_0^{t1} e^{−Aτ}Bu(τ)dτ )
So x1 = −∫_0^{t1} e^{−Aτ}Bu(τ)dτ.
Let Ā = −A and ū(t1 − τ) = −u(τ). Then
x1 = ∫_0^{t1} e^{Āτ}Bū(t1 − τ)dτ = ∫_0^{t1} e^{Ā(t1−τ)}Bū(τ)dτ
i.e., x1 is reachable for the system ẋ = Āx + Bū:
controllability of (A, B) ⟺ reachability of (Ā, B) = (−A, B).
CTRB(Ā, B) = [B, −AB, …, (−1)^{n−1}A^{n−1}B]
           = [B, AB, …, A^{n−1}B] · diag(I, −I, I, −I, …)
so rank CTRB(Ā, B) = rank CTRB(A, B).
For a continuous-time system ẋ = Ax + Bu: controllability ⟺ reachability.
For D-T systems, controllability and reachability are NOT equivalent.
(i) A controllable but non-reachable example: x[k+1] = 0·x[k] + 0·u[k].
(ii) Reachability ⇒ controllability:
x[k] = A^k x0 + Σ_{m=0}^{k−1} A^m Bu[k−m−1], and the input-driven part always lies in span[B, AB, …, A^{n−1}B].
Reachability ⟺ span[B, AB, …, A^{n−1}B] = Rⁿ, so the input can be chosen to produce −A^k x0 and drive any x0 to 0.
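The equivalent rank conditions above can be checked numerically; here is a sketch of the [λI − A, B] test evaluated at the eigenvalues of A (elsewhere λI − A is already non-singular). The helper and matrices are my own:

```python
import numpy as np

def pbh_controllable(A, B):
    """rank[lambda I - A, B] = n for every eigenvalue lambda of A."""
    n = A.shape[0]
    for lam in np.linalg.eigvals(A):
        M = np.hstack([lam * np.eye(n) - A, B]).astype(complex)
        if np.linalg.matrix_rank(M) < n:
            return False
    return True

# Two 1x1 Jordan blocks for the same eigenvalue cannot be controlled by one input:
print(pbh_controllable(np.array([[1.0, 0.0], [0.0, 1.0]]), np.array([[1.0], [1.0]])))
# A single 2x2 Jordan block can:
print(pbh_controllable(np.array([[1.0, 1.0], [0.0, 1.0]]), np.array([[0.0], [1.0]])))
```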
6.2.1 Controllability indices
1. Definition:
B = [b1, b2, …, bm] ∈ R^{n×m},  rank B = m
CTRB = [B, AB, …, A^{n−1}B]
     = [b1, b2, …, bm | Ab1, Ab2, …, Abm | ⋯ | A^{n−1}b1, A^{n−1}b2, …, A^{n−1}bm]
How many linearly independent columns does bi contribute? Denote this number μi.
An example (n = 4):
A = [ 0 1 0 0
      0 0 1 0
      0 0 0 1
      0 0 0 0 ],   B = [b1, b2] = [ 0 1
                                    0 0
                                    0 0
                                    1 0 ]
In CTRB = [b1 b2 | Ab1 Ab2 | A²b1 A²b2 | A³b1 A³b2] the independent columns are b1, b2, Ab1, A²b1 (A³b1 = b2, and A^l b2 = 0 for l ≥ 1), so
μ1 = 3, μ2 = 1,  μ1 + μ2 = 4 = n
rank(CTRB) = n ⟺ Σ_{i=1}^{m} μi = n
2. The controllability indices do not change under a similarity transformation.
ẋ = Ax + Bu,  B = [b1, …, bm];  x̄ = Px
x̄̇ = Āx̄ + B̄u,  B̄ = [b̄1, …, b̄m],  Ā = PAP⁻¹,  B̄ = PB,  b̄i = Pbi
(i) Ā^k B̄ = (PAP⁻¹)^k PB = PA^k B
[B̄, ĀB̄, …, Ā^{n−1}B̄] = P[B, AB, …, A^{n−1}B]
(A, B) is controllable ⟺ (Ā, B̄) is controllable.
(ii) [b̄1, …, b̄m | Āb̄1, …, Āb̄m | ⋯ | Ā^{n−1}b̄1, …, Ā^{n−1}b̄m]
   = P[b1, …, bm | Ab1, …, Abm | ⋯ | A^{n−1}b1, …, A^{n−1}bm]
Since P is non-singular, the pattern of linearly independent columns is preserved, so μ̄i = μi.
3. Decomposition based on controllability
When the system ẋ = Ax + Bu is not controllable, rank[B, AB, …, A^{n−1}B] = n1 < n.
(i) v ∈ span[B, AB, …, A^{n−1}B] ⇒ Av ∈ span[B, AB, …, A^{n−1}B]
(ii) Find n1 linearly independent vectors from [B, AB, …, A^{n−1}B], denoted v1, v2, …, v_{n1}, and let P1 = [v1, v2, …, v_{n1}]. It is obvious that
span[B, AB, …, A^{n−1}B] = span(P1)
v ∈ span(P1) ⇒ Av ∈ span(P1):
Avi = [v1, v2, …, v_{n1}] (α_{1,i}, α_{2,i}, …, α_{n1,i})ᵀ
AP1 = A[v1, …, v_{n1}] = [v1, …, v_{n1}] [ α_{1,1} ⋯ α_{1,n1}
                                           ⋮         ⋮
                                           α_{n1,1} ⋯ α_{n1,n1} ] = P1A1,  A1 ∈ R^{n1×n1}
(iii) Find another (n − n1) vectors v_{n1+1}, v_{n1+2}, …, v_n such that v1, …, v_{n1}, v_{n1+1}, …, v_n are linearly independent, and let P2 = [v_{n1+1}, v_{n1+2}, …, v_n].
(a) AP1 = P1A1 = [P1, P2][A1; 0],  AP2 = [P1, P2][A12; A22]
A[P1, P2] = [P1, P2][ A1 A12
                      0  A22 ]
Ā = P⁻¹AP = [ A1 A12
              0  A22 ],  P = [P1, P2]
(b) B ⊂ span(P1):  B = P1B1 = [P1, P2][B1; 0]
(c) x̄ = P⁻¹x:
x̄̇ = [ A1 A12
       0  A22 ] x̄ + [ B1
                      0  ] u
With x̄ = [x̄1; x̄2]:
x̄1̇ = A1x̄1 + A12x̄2 + B1u
x̄2̇ = A22x̄2
(a) When x̄2(0) = 0:  x̄1̇ = A1x̄1 + B1u. Is it still controllable?
B̄ = P⁻¹B = [B1; 0]
[B̄, ĀB̄, …, Ā^{n−1}B̄] = P⁻¹[B, AB, …, A^{n−1}B]
Ā^l B̄ = [ A1 A12; 0 A22 ]^l [B1; 0] = [ A1^l *; 0 A22^l ][B1; 0] = [A1^l B1; 0]
So
[B̄, ĀB̄, …, Ā^{n−1}B̄] = [ B1 A1B1 ⋯ A1^{n−1}B1
                           0  0    ⋯ 0          ]
n1 = rank[B̄, ĀB̄, …, Ā^{n−1}B̄] = rank[B1, A1B1, …, A1^{n1−1}B1]
(powers of A1 beyond n1 − 1 add nothing, by the Cayley-Hamilton theorem applied to A1).
So the x̄1 subsystem is controllable and reachable.
6.3 Observability
1. Unobservable state
  ẋ = Ax
  y = Cx
x̃0 is unobservable if x(0) = x̃0 yields y(t) = 0 for all t ≥ 0.
Since y(t) = Ce^{At}x0, x̃0 is unobservable ⟺ Ce^{At}x̃0 = 0, ∀t.
Because x0 = 0 gives Ce^{At}·0 = 0, ∀t, we cannot distinguish between an unobservable state x(0) = x̃0 and x(0) = 0.
The set of all unobservable states:
S_o = { x̃0 | y(t) = 0, ∀t ≥ 0, when x(0) = x̃0 }
S_o is a linear space. Why?
The system is observable ⟺ S_o = {0}.
2. Observability condition (Theorem 6.4)
Observable ⟺ Wo(t) = ∫_0^t e^{Aᵀτ}CᵀCe^{Aτ}dτ is non-singular for every t > 0.
Proof: (1) Wo(t) is non-singular for every t > 0 ⇒ ?
y(t) = Ce^{At}x0
e^{Aᵀt}Cᵀy(t) = e^{Aᵀt}CᵀCe^{At}x0
∫_0^t e^{Aᵀτ}Cᵀy(τ)dτ = ( ∫_0^t e^{Aᵀτ}CᵀCe^{Aτ}dτ ) x0 = Wo(t)x0
x0 = Wo⁻¹(t) ∫_0^t e^{Aᵀτ}Cᵀy(τ)dτ
So y(t) ≡ 0 ⇒ x0 = 0: S_o = {0}, i.e., the system is observable.
(2) The system is observable ⇒ ?
Suppose Wo(t1) is singular for some t1. We can find v ≠ 0 such that
Wo(t1)v = 0,  vᵀWo(t1)v = 0
∫_0^{t1} vᵀe^{Aᵀτ}CᵀCe^{Aτ}v dτ = ∫_0^{t1} ‖Ce^{Aτ}v‖² dτ = 0
Ce^{At}v = 0,  t ∈ [0, t1]
⇒ (d^m/dt^m)(Ce^{At}v)|_{t=t2} = 0,  m = 0, 1, 2, …
Take t2 ∈ (0, t1). We know
Ce^{At2}v = 0,  CAe^{At2}v = 0,  …,  CA^{n−1}e^{At2}v = 0
For any t, we know
y(t) = Ce^{At}v = Ce^{A(t−t2)}e^{At2}v = C Σ_{l=0}^{n−1} c_l(t−t2)A^l e^{At2}v = Σ_{l=0}^{n−1} c_l(t−t2)·CA^l e^{At2}v = 0
So y(t) = 0 for all t with x(0) = v ≠ 0: x0 = v is not observable, and the system is not observable.
Contradiction! So Wo(t) is non-singular for every t > 0.
Why are we so interested in x0? The system is governed by ẋ = Ax + Bu. u(t) is computed by us and is known. If we know x0, then by x(t) = e^{At}x0 + ∫_0^t e^{A(t−τ)}Bu(τ)dτ we can know x(t) for all t.
3. Duality of observability and controllability.
(i) ẋ = Ax + Bu: controllable ⟺ reachable ⟺ Wr(t) = ∫_0^t e^{Aτ}BBᵀe^{Aᵀτ}dτ is non-singular for any t > 0.
(ii) For the dual system ẋ = Aᵀx, y = Bᵀx, what about observability?
Wo(t) = ∫_0^t e^{(Aᵀ)ᵀτ}(Bᵀ)ᵀBᵀe^{Aᵀτ}dτ = ∫_0^t e^{Aτ}BBᵀe^{Aᵀτ}dτ = Wr(t)
So (A, B) is reachable/controllable ⟺ (Aᵀ, Bᵀ) is observable.
4. Equivalent observability conditions
(1) Wo(t) = ∫_0^t e^{Aᵀτ}CᵀCe^{Aτ}dτ is non-singular for any t > 0.
(2) O = [ C
          CA
          ⋮
          CA^{n−1} ] has full column rank.
(3) [ A − λI
      C      ] has full column rank for every λ.
5. Observability indices
C = [ C1
      ⋮
      Cq ],   O = [ C
                    CA
                    ⋮
                    CA^{n−1} ] = [ C1; …; Cq; C1A; …; CqA; …; C1A^{n−1}; …; CqA^{n−1} ]
νi: the number of linearly independent rows contributed by Ci.
Observability indices: ν1, …, νq
Observability index: max_i νi
6. Unobservable space:
S_o = { x0 | Ce^{At}x0 = 0, ∀t ≥ 0 }
(i) Ce^{At}x0 = C Σ_{l=0}^{n−1} c_l(t)A^l x0 = Σ_{l=0}^{n−1} c_l(t)·CA^l x0
x0 ∈ Null(O) ⟺ CA^l x0 = 0, l = 0, 1, …, n−1
⇒ Ce^{At}x0 = 0, ∀t ⇒ x0 ∈ S_o.  So Null(O) ⊂ S_o.
(ii) Ce^{At}x0 = 0, ∀t
⇒ (d^l/dt^l)(Ce^{At}x0)|_{t=0} = 0,  l = 0, 1, 2, …
⇒ CA^l x0 = 0,  l = 0, 1, …  ⇒ x0 ∈ Null(O).  So S_o ⊂ Null(O).
So Null(O) = S_o.
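The observability matrix O can be built directly, and by duality its rank equals the rank of the controllability matrix of (Aᵀ, Cᵀ). A sketch with my own helper and made-up matrices:

```python
import numpy as np

def obsv(A, C):
    """Observability matrix O = [C; CA; ...; CA^{n-1}]."""
    n = A.shape[0]
    blocks = [C]
    for _ in range(n - 1):
        blocks.append(blocks[-1] @ A)
    return np.vstack(blocks)

A = np.array([[1.0, 1.0], [0.0, 2.0]])
C = np.array([[1.0, 0.0]])
O = obsv(A, C)
print(np.linalg.matrix_rank(O))      # 2: full column rank, so Null(O) = S_o = {0}
# Duality: same rank as the controllability matrix [C^T, A^T C^T] of (A^T, C^T).
Ct = np.hstack([C.T, A.T @ C.T])
print(np.linalg.matrix_rank(Ct))     # also 2
```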
7. Observability decomposition:
rank(O) = n2,  dim Null(O) = dim S_o = n − n2
Select a basis for S_o: v_{n2+1}, …, v_n, and let P2 = [v_{n2+1}, …, v_n].
AP2 = A[v_{n2+1}, …, v_n] = [v_{n2+1}, …, v_n]A_ō,  A_ō ∈ R^{(n−n2)×(n−n2)}
due to x ∈ S_o ⇒ Ax ∈ S_o  ( CA^l x = 0, l = 0, 1, …, n−1 ⇒ CA^l(Ax) = CA^{l+1}x = 0 )
Find another n2 complementary vectors v1, …, v_{n2}, and let P1 = [v1, …, v_{n2}]:
AP1 = A[v1, …, v_{n2}] = [v1, …, v_{n2} | v_{n2+1}, …, v_n][ A_o
                                                             A_21 ]
A[P1, P2] = [P1, P2][ A_o  0
                      A_21 A_ō ]
C[P1, P2] = [CP1, CP2]; because span(P2) = Null(O), CP2 = 0, so C[P1, P2] = [CP1, 0].
For ẋ = Ax, y = Cx, the transformation x̄ = P⁻¹x with P = [P1, P2] gives
x̄̇ = [ A_o  0
       A_21 A_ō ] x̄,   y = [C_o, 0]x̄
8. Controllability-observability based decomposition
Controllable subspace: S_c (AS_c ⊂ S_c). Unobservable subspace: S_o (AS_o ⊂ S_o).
(1) Decompose S_c (dim S_c = n1):
S_c ∩ S_o (dim n1 − n1,1): basis q_{n1,1+1}, …, q_{n1}, collected in P_cō
S_c \ S_o (dim n1,1): basis q1, …, q_{n1,1}, collected in P_co
(2) Decompose the complement of S_c in Rⁿ similarly:
outside S_c and outside S_o (dim n2,2): P_c̄o;  outside S_c and inside S_o: P_c̄ō
P = [P_co, P_cō, P_c̄o, P_c̄ō]
(i)–(v) Because AS_c ⊂ S_c and AS_o ⊂ S_o, A maps the columns of P_co into span[P_co, P_cō], the columns of P_cō into span(P_cō), the columns of P_c̄ō into span[P_cō, P_c̄ō], and the columns of P_c̄o anywhere. Collecting these relations,
P⁻¹AP = [ A_co  0     A_13  0
          A_21  A_cō  A_23  A_24
          0     0     A_c̄o  0
          0     0     A_43  A_c̄ō ]
(vi) B ⊂ S_c = span[P_co, P_cō] ⇒ P⁻¹B = [B_co; B_cō; 0; 0]
(vii) CP_cō = 0 and CP_c̄ō = 0 (their columns are unobservable) ⇒ CP = [C_co, 0, C_c̄o, 0]
Therefore we get Theorem 6.7 (page 163).
(viii) Ĝ(s) = C(sI − A)⁻¹B = ? We first compute z = (sI − A)⁻¹B through (sI − A)z = B.
6.5 Conditions in Jordan-form equations
  ẋ = Jx + Bu
  y = Cx
J = diag(J1, J2),  J1 = diag(J1,1, J1,2),  J1,1 = [λ1 1; 0 λ1],  J1,2 = [λ1],  J2 = [λ2]:
J = [ λ1 1  0  0
      0  λ1 0  0
      0  0  λ1 0
      0  0  0  λ2 ],   B = [ B1,1        [ b1,1,1
                             B1,2    =     b1,1,2
                             B2   ]        b1,2,1
                                           b2,1,1 ]
We want to verify the controllability condition in Theorem 6.8 (page 165):
controllable ⟺ vᵀ[J − λI, B] = 0 has only the solution v = 0, for every λ.
(i) When λ ≠ λ1 and λ ≠ λ2: J − λI is non-singular, so we can easily get v = 0.
(ii) When λ = λ2:
vᵀ(J − λ2I) = [v1 v2 v3 v4][ λ1−λ2 1      0      0
                             0      λ1−λ2 0      0
                             0      0      λ1−λ2 0
                             0      0      0      0 ]
Since λ1 − λ2 ≠ 0: v1 = v2 = v3 = 0. Then
vᵀB = v4·b2,1,1 = 0,  and b2,1,1 ≠ 0 ⇒ v4 = 0
(iii) When λ = λ1:
vᵀ(J − λ1I) = [v1 v2 v3 v4][ 0 1 0 0
                             0 0 0 0
                             0 0 0 0
                             0 0 0 λ2−λ1 ]
We first get v1 = 0, and λ2 − λ1 ≠ 0 gives v4 = 0. Second,
vᵀB = 0 ⇒ v2·b1,1,2 + v3·b1,2,1 = 0
b1,1,2 and b1,2,1 are linearly independent ⇒ v2 = v3 = 0.
Fig 6.9 (page 167): b1,1,2 can affect both x1 and x2.
6.6 Discrete-time state equations
Model:
  x[k+1] = Ax[k] + Bu[k]
  y[k] = Cx[k]
1. Controllable ⇍ reachable
x[k+1] = 0·x[k] + 0·u[k] is controllable, but not reachable (S_R = {0} ≠ Rⁿ).
2. Reachability condition:
S_R = span[B, AB, …, A^{n−1}B]
Reachability ⟺ span[B, AB, …, A^{n−1}B] = Rⁿ.  Reachability ⇒ controllability.
3. Controllability condition:
controllable ⟺ rank[Aⁿ, B, AB, …, A^{n−1}B] = rank[B, AB, …, A^{n−1}B]
(1) The rank condition means that Aⁿx0 ∈ span[B, AB, …, A^{n−1}B] for every x0: after n steps, any initial state can be moved to 0.
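The rank condition can be tested numerically; the sketch below (my own helper and toy matrices) also exhibits a controllable-but-not-reachable pair in the spirit of x[k+1] = 0·x[k] + 0·u[k]:

```python
import numpy as np

def ctrb(A, B):
    n = A.shape[0]
    blocks = [B]
    for _ in range(n - 1):
        blocks.append(A @ blocks[-1])
    return np.hstack(blocks)

A = np.array([[0.0, 1.0], [0.0, 0.0]])   # nilpotent: A^2 = 0, every x0 hits 0 in 2 steps
B = np.zeros((2, 1))                     # no effective input at all
C = ctrb(A, B)
An = np.linalg.matrix_power(A, 2)
controllable = np.linalg.matrix_rank(np.hstack([An, C])) == np.linalg.matrix_rank(C)
reachable = np.linalg.matrix_rank(C) == 2
print(controllable, reachable)
```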
(2) controllable ⇒ rank[Aⁿ, B, AB, …, A^{n−1}B] = rank[B, AB, …, A^{n−1}B]?
Write A = PJP⁻¹ with
J = diag(J1, J2, J3),
J1 = [ λ1 1 0; 0 λ1 1; 0 0 λ1 ],  J2 = [ λ2 1; 0 λ2 ],  J3 = [ 0 1 0; 0 0 1; 0 0 0 ]   (λ1 ≠ 0, λ2 ≠ 0, λ3 = 0)
P = [P1, P2, P3],  P1 = [q1,1, q1,2, q1,3],  P2 = [q2,1, q2,2],  P3 = [q3,1, q3,2, q3,3]
Since J3 is nilpotent, J3^l = 0 for l ≥ n, so
A^l = PJ^lP⁻¹ = P diag(J1^l, J2^l, 0) P⁻¹,  l ≥ n
span(A^l) = span[P1, P2] = span(Aⁿ),  l ≥ n
(i) Let x[0] = −q1,1. By controllability there exists N1 such that
A^{N1}q1,1 = Σ_{l=0}^{N1−1} A^{N1−1−l}Bu[l]
Since q1,1 is an eigenvector of A, A^{N1}q1,1 = λ1^{N1}q1,1 with λ1 ≠ 0, so
q1,1 = λ1^{−N1} Σ_{l=0}^{N1−1} A^{N1−1−l}Bu[l] ∈ span[B, AB, …, A^{n−1}B]
(the powers A^l B with l ≥ n reduce into span[B, …, A^{n−1}B] by Cayley-Hamilton).
(ii) Let x[0] = −q1,2. Similarly there exists N2 such that
A^{N2}q1,2 = λ1^{N2}q1,2 + N2λ1^{N2−1}q1,1 = Σ_{l=0}^{N2−1} A^{N2−1−l}Bu[l] ∈ span[B, AB, …, A^{n−1}B]
and since q1,1 is already in that span, so is q1,2.
(iii) Similarly we can prove that q1,3, q2,1, q2,2 ∈ span[B, AB, …, A^{n−1}B]. Therefore
span(Aⁿ) = span[P1, P2] ⊂ span[B, AB, …, A^{n−1}B]
i.e., rank[Aⁿ, B, AB, …, A^{n−1}B] = rank[B, AB, …, A^{n−1}B].
6.7 Controllability after sampling
We want to verify Theorem 6.9 (page 173) through an example.
ẋ = Ax + Bu,
A = [ λ1 1  0  0  0
      0  λ1 1  0  0
      0  0  λ1 0  0
      0  0  0  λ2 1
      0  0  0  0  λ2 ],   B = [ *
                                *
                                B1,3
                                *
                                B2,2 ]
x[k] = x(kT),  u(t) = u[k] for kT ≤ t < (k+1)T:
x[k+1] = A_d x[k] + B_d u[k]
A_d = e^{AT} = [ e^{λ1T} a12     a13     0       0
                 0       e^{λ1T} a23     0       0
                 0       0       e^{λ1T} 0       0
                 0       0       0       e^{λ2T} a45
                 0       0       0       0       e^{λ2T} ],   a12 ≠ 0, a13 ≠ 0, a23 ≠ 0, a45 ≠ 0
B_d = ∫_0^T e^{Aτ}dτ · B
eig(A_d) = { e^{λ1T}, e^{λ2T} }. Under the condition of Theorem 6.9, e^{λ1T} ≠ e^{λ2T}.
Does vᵀ[A_d − e^{λ1T}I, B_d] = 0 imply v = 0?
vᵀ(A_d − e^{λ1T}I) = [v1 v2 v3 v4 v5][ 0 a12 a13 0               0
                                       0 0   a23 0               0
                                       0 0   0   0               0
                                       0 0   0   e^{λ2T}−e^{λ1T} a45
                                       0 0   0   0               e^{λ2T}−e^{λ1T} ]
So
v1a12 = 0 ⇒ v1 = 0;  v1a13 + v2a23 = v2a23 = 0 ⇒ v2 = 0
v4(e^{λ2T} − e^{λ1T}) = 0 ⇒ v4 = 0;  v4a45 + v5(e^{λ2T} − e^{λ1T}) = v5(e^{λ2T} − e^{λ1T}) = 0 ⇒ v5 = 0;  v3 = ?
vᵀB_d = [0 0 v3 0 0] ∫_0^T e^{Aτ}dτ · B = v3 ( ∫_0^T e^{λ1τ}dτ ) B1,3
Since ∫_0^T e^{λ1τ}dτ ≠ 0 and B1,3 ≠ 0 (from the controllability of (A, B)),
vᵀB_d = 0 ⇒ v3 = 0.
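The necessity of the eigenvalue condition in Theorem 6.9 can be seen numerically: with λ1,2 = ±j and T = π we get e^{λ1T} = e^{λ2T}, and the sampled pair loses controllability. A sketch (my own example; the block-matrix exponential is a standard zero-order-hold discretization trick):

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0], [-1.0, 0.0]])   # eigenvalues +-j; (A, B) is controllable
B = np.array([[0.0], [1.0]])
T = np.pi                                 # Im(lambda1 - lambda2) * T = 2 pi: the bad case

M = expm(np.block([[A, B], [np.zeros((1, 3))]]) * T)
Ad, Bd = M[:2, :2], M[:2, 2:]             # Ad = e^{AT}, Bd = int_0^T e^{A tau} d tau B

print(np.linalg.matrix_rank(np.hstack([B, A @ B])))      # 2: controllable before sampling
print(np.linalg.matrix_rank(np.hstack([Bd, Ad @ Bd])))   # 1: controllability lost
```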
6.8 LTV state equations
1. Model
  ẋ = A(t)x + B(t)u
  y = C(t)x
x(t) = Φ(t, t0)x(t0) + ∫_{t0}^{t} Φ(t, τ)B(τ)u(τ)dτ
2. Controllability:
x(t0) = x0,  and we want x(t) = Φ(t, t0)x0 + ∫_{t0}^{t} Φ(t, τ)B(τ)u(τ)dτ = 0.
Using Φ⁻¹(t, t0)Φ(t, τ) = Φ(t0, t)Φ(t, τ) = Φ(t0, τ):
x0 = −∫_{t0}^{t} Φ(t0, τ)B(τ)u(τ)dτ
Controllability Gramian: Wc(t0, t) = ∫_{t0}^{t} Φ(t0, τ)B(τ)Bᵀ(τ)Φᵀ(t0, τ)dτ
Theorem 6.11 (page 176): The system is controllable at time t0 ⟺ Wc(t0, t1) is non-singular for some t1.
Proof: (1) Suppose Wc(t0, t1) is non-singular. For any x0, let
u(τ) = −Bᵀ(τ)Φᵀ(t0, τ)Wc⁻¹(t0, t1)x0
Then x(t1) = 0.
(2) The system is controllable at time t0 ⇒ ?
Suppose Wc(t0, t) is singular for every t. Since t1 ≤ t2 ⇒ Wc(t0, t1) ≤ Wc(t0, t2), vᵀWc(t0, t2)v = 0 implies vᵀWc(t0, t1)v = 0; so there exists v ≠ 0 with vᵀWc(t0, t)v = 0 for all t.
vᵀWc(t0, t)v = ∫_{t0}^{t} vᵀΦ(t0, τ)B(τ)Bᵀ(τ)Φᵀ(t0, τ)v dτ = ∫_{t0}^{t} ‖vᵀΦ(t0, τ)B(τ)‖² dτ = 0
⇒ vᵀΦ(t0, τ)B(τ) = 0 for all τ.
Let x0 = v ≠ 0. The system is controllable, so there exist t1 and u with
v = −∫_{t0}^{t1} Φ(t0, τ)B(τ)u(τ)dτ
vᵀv = −∫_{t0}^{t1} vᵀΦ(t0, τ)B(τ)u(τ)dτ = 0
⇒ v = 0. Contradiction!
2. Theorem 6.12 (teach yourselves).
3. Example
(1) ẋ = [ 1 0; 0 2 ]x + [ 1; 1 ]u is controllable.
(2) ẋ = [ 1 0; 0 2 ]x + [ e^t; e^{2t} ]u:
Φ(t, τ) = e^{A(t−τ)} = diag(e^{t−τ}, e^{2(t−τ)}),   Φ(t, τ)B(τ) = [ e^t; e^{2t} ]
Wc(t0, t) = ∫_{t0}^{t} Φ(t0, τ)B(τ)Bᵀ(τ)Φᵀ(t0, τ)dτ = ∫_{t0}^{t} [ e^{t0}; e^{2t0} ][ e^{t0}, e^{2t0} ]dτ
          = (t − t0) [ e^{2t0} e^{3t0}
                       e^{3t0} e^{4t0} ]
For any t, Wc(t0, t) is singular. Actually [e^{t0}, −1]Wc(t0, t) = 0. So the system is NOT controllable.
4. Observability: a necessary and sufficient condition is that
Wo(t0, t) = ∫_{t0}^{t} Φᵀ(τ, t0)Cᵀ(τ)C(τ)Φ(τ, t0)dτ is non-singular for some t.
Homework: 5.6, 5.11, 5.13, 5.14, 5.17, 5.20, 5.21, 5.22, 5.23 (Chapter 5); 6.3, 6.4, 6.6, 6.9, 6.10, 6.12, 6.16, 6.19, 6.20, 6.21, 6.22 (Chapter 6).
Chapter 8. State feedback and state estimators
8.1 Introduction
1. Control problem formulation: given a plant with input u and output y, and a reference r, design u so that y follows r as closely as possible.
2. Solutions:
(1) Open loop: r → controller → u → plant → y.
(2) Closed loop/feedback: the controller generates u from r and the measured output y.
A closed-loop controller is usually more robust to parameter variations, noise, etc.
How can we design a good closed-loop/feedback controller? That is the main task of this chapter.
8.2 State feedback
1. Plant (SISO):
  ẋ = Ax + bu
  y = cx,   x ∈ Rⁿ, u ∈ R
A state feedback controller:
u = r − kx = r − [k1, k2, …, kn]x
2. Controllability is preserved under any state feedback: Theorem 8.1 (page 232)
(A, b) is controllable ⟺ (A − bk, b) is controllable for any state feedback gain k.
Proof: (1) (A, b) is controllable ⇒ ?
Suppose (A − bk, b) is NOT controllable under some state feedback gain k. Then
∃λ, v ∈ Rⁿ, v ≠ 0:  vᵀ[(A − bk) − λI, b] = 0. So
vᵀb = 0  and  vᵀ(A − bk) = vᵀA − (vᵀb)k = vᵀA = λvᵀ
⇒ vᵀ[A − λI, b] = 0 ⇒ (A, b) is NOT controllable. Contradiction! So (A − bk, b) is controllable for any state feedback gain k.
(2) (A − bk, b) is controllable for any state feedback gain k ⇒ (A, b) is controllable:
choose k = 0.
3. Effects of state feedback.
(1) State feedback may change observability.
Example 8.1 (page 233): the original observable system becomes unobservable after state feedback.
(2) Example 8.2 (page 234): ẋ = [ 1 3; 3 1 ]x + [ 1; 0 ]u
The original characteristic polynomial is Δ(s) = (s − 4)(s + 2); poles (eigenvalues): 4, −2.
State feedback u = r − [k1, k2]x. The closed-loop system equation becomes
ẋ = [ 1−k1 3−k2
      3    1    ]x + [ 1; 0 ]r
The new characteristic polynomial is Δf(s) = s² + (k1 − 2)s + (3k2 − k1 − 8).
By choosing k1, k2 appropriately, the eigenvalues (poles) of Af = A − bk can be arbitrarily assigned (too good to be true).
4. Arbitrarily assign the poles of the closed loop system.
Theorem 8.2 (page 234), Theorem 8.3 (page 235).
 x  Ax  bu
, x  R n , u  R, n  4 . The system is controllable.

 y  cx
s   s 4  1s 3   2 s 2   3 s   4 .
By a state feedback u  r  kx  r  k1
k2
k3
k4 x , we can obtain any desired
closed-loop characteristic polynomial:
 f s   s 4  1s3   2 s 2  3s   4
Proof: We introduce a similarity transformation different from that in the book. A constructive method.
(1) CTRB = [b Ab A²b A³b] is nonsingular. Solve
    vᵀ[b Ab A²b A³b] = [0 0 0 1].
The solution exists and is unique, i.e.,
    vᵀb = 0, vᵀAb = 0, vᵀA²b = 0, vᵀA³b = 1.
Let P = [vᵀA³; vᵀA²; vᵀA; vᵀ] (rows). P is non-singular because
    P·CTRB = [vᵀA³b vᵀA⁴b vᵀA⁵b vᵀA⁶b;
              vᵀA²b vᵀA³b vᵀA⁴b vᵀA⁵b;
              vᵀAb  vᵀA²b vᵀA³b vᵀA⁴b;
              vᵀb   vᵀAb  vᵀA²b vᵀA³b]
            = [1 * * *; 0 1 * *; 0 0 1 *; 0 0 0 1] = a non-singular matrix.
Compute Ā = PAP⁻¹ from ĀP = PA = [vᵀA⁴; vᵀA³; vᵀA²; vᵀA]. By the Cayley–Hamilton theorem,
    A⁴ = −α1 A³ − α2 A² − α3 A − α4 I,
so
    Ā = PAP⁻¹ = [−α1 −α2 −α3 −α4; 1 0 0 0; 0 1 0 0; 0 0 1 0],
    b̄ = Pb = [vᵀA³b; vᵀA²b; vᵀAb; vᵀb] = [1; 0; 0; 0],
    c̄ = cP⁻¹ = [β1 β2 β3 β4].
(2) Under the state feedback u = r − k̄x̄ = r − [k̄1 k̄2 k̄3 k̄4] x̄, the system
    ẋ̄ = Āx̄ + b̄u, y = c̄x̄
becomes
    ẋ̄ = Āf x̄ + b̄r, y = c̄x̄,
where
    Āf = Ā − b̄k̄ = [−(α1+k̄1) −(α2+k̄2) −(α3+k̄3) −(α4+k̄4); 1 0 0 0; 0 1 0 0; 0 0 1 0].
The new characteristic polynomial is
    Δ̄f(s) = det(sI − Āf)
          = det [s+α1+k̄1  α2+k̄2  α3+k̄3  α4+k̄4; −1 s 0 0; 0 −1 s 0; 0 0 −1 s].
Expanding repeatedly (Horner form):
    Δ̄f(s) = s·(s·(s·(s + α1 + k̄1) + (α2 + k̄2)) + (α3 + k̄3)) + (α4 + k̄4)
          = s⁴ + (α1+k̄1)s³ + (α2+k̄2)s² + (α3+k̄3)s + (α4+k̄4).
By choosing k̄i = ᾱi − αi, we get any desired Δ̄f(s) = s⁴ + ᾱ1s³ + ᾱ2s² + ᾱ3s + ᾱ4, i.e., any poles can be achieved.
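The coefficient-shifting rule k̄i = ᾱi − αi can be verified numerically. A minimal sketch (the open-loop coefficients and the target poles below are assumed for illustration):

```python
# In the controllable canonical form, kbar_i = abar_i - a_i moves the
# characteristic polynomial from s^4 + a1 s^3 + ... to s^4 + abar1 s^3 + ...
import numpy as np

def companion(alpha):
    """Top-row companion matrix of s^n + alpha[0] s^{n-1} + ... + alpha[-1]."""
    n = len(alpha)
    A = np.zeros((n, n))
    A[0, :] = -np.asarray(alpha)
    A[1:, :-1] = np.eye(n - 1)
    return A

alpha = [2.0, 3.0, 4.0, 5.0]          # open-loop coefficients (assumed)
abar  = [10.0, 35.0, 50.0, 24.0]      # desired: (s+1)(s+2)(s+3)(s+4)

Abar = companion(alpha)
bbar = np.array([[1.0], [0.0], [0.0], [0.0]])
kbar = np.array([[ab - a for ab, a in zip(abar, alpha)]])

poles = np.sort(np.linalg.eigvals(Abar - bbar @ kbar).real)
print(poles)                          # close to [-4, -3, -2, -1]
```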

(3) Compute ĝ(s) = c̄(sI − Ā)⁻¹b̄. Let (sI − Ā)⁻¹b̄ = z, i.e., b̄ = (sI − Ā)z:
    [s+α1 α2 α3 α4; −1 s 0 0; 0 −1 s 0; 0 0 −1 s][z1; z2; z3; z4] = [1; 0; 0; 0].
So
    z3 = s z4, z2 = s z3, z1 = s z2,
and
    (s+α1)z1 + α2 z2 + α3 z3 + α4 z4 = 1 ⟹ (s⁴ + α1s³ + α2s² + α3s + α4) z4 = 1 ⟹ z4 = 1/Δ(s).
Therefore
    (sI − Ā)⁻¹b̄ = (1/Δ(s)) [s³; s²; s; 1],
and
    ĝ(s) = c̄(sI − Ā)⁻¹b̄ = (β1 s³ + β2 s² + β3 s + β4) / (s⁴ + α1 s³ + α2 s² + α3 s + α4).
Under the state feedback, ẋ̄ = Āf x̄ + b̄r, y = c̄x̄, and similarly
    ĝf(s) = c̄(sI − Āf)⁻¹b̄ = (β1 s³ + β2 s² + β3 s + β4) / (s⁴ + ᾱ1 s³ + ᾱ2 s² + ᾱ3 s + ᾱ4).
We can see that the numerator of the transfer function is not changed by state feedback.
5. How can we choose the target closed-loop poles?
8.2.1 Solving a Lyapunov equation (to achieve the desired poles)
1. Procedure 8.1 (SISO) (page 239)
(1) Choose F whose eigenvalues are the desired poles; F and A must have no common eigenvalues.
(2) Choose k̄ ∈ R^{1×n} such that (F, k̄) is observable.
(3) Solve AT − TF = bk̄ with respect to T.
(4) Feedback gain: k = k̄T⁻¹.
When T is non-singular,
    A − bk = A − bk̄T⁻¹ = A − (AT − TF)T⁻¹ = A − A + TFT⁻¹ = TFT⁻¹.
So A − bk has the same eigenvalues as F.
2. T is non-singular. Theorem 8.4 (page 240)
(A, b) controllable, (F, k̄) observable, and F and A with no common eigenvalues ⟹ T is non-singular.
Proof: (1) The Lyapunov (Sylvester) equation has a unique solution T because F and A have no common eigenvalues.
(2) Suppose n = 4. The characteristic polynomial of A is
    Δ(s) = s⁴ + α1 s³ + α2 s² + α3 s + α4.
By Cayley–Hamilton, Δ(A) = 0. But Δ(F) is non-singular: let F = PJF P⁻¹ (Jordan form); then Δ(F) = PΔ(JF)P⁻¹, and Δ(JF) is an upper triangular matrix whose diagonal elements are Δ(λF,i), all non-zero because F and A have no common eigenvalues.
(3) Multiply AT − TF = bk̄ repeatedly by A on the left and F on the right:
    IT − TI = 0
    AT − TF = bk̄
    A²T − TF² = Abk̄ + bk̄F
    A³T − TF³ = A²bk̄ + Abk̄F + bk̄F²
    A⁴T − TF⁴ = A³bk̄ + A²bk̄F + Abk̄F² + bk̄F³.
Taking α4·(first) + α3·(second) + α2·(third) + α1·(fourth) + (fifth):
    Δ(A)T − TΔ(F) = [b Ab A²b A³b] [α3 α2 α1 1; α2 α1 1 0; α1 1 0 0; 1 0 0 0] [k̄; k̄F; k̄F²; k̄F³].
Since Δ(A) = 0,
    T = −[b Ab A²b A³b] [α3 α2 α1 1; α2 α1 1 0; α1 1 0 0; 1 0 0 0] [k̄; k̄F; k̄F²; k̄F³] Δ(F)⁻¹.
The controllability matrix is non-singular; the middle triangular matrix is non-singular (anti-diagonal entries 1); the observability matrix of (F, k̄) is non-singular; Δ(F) is non-singular. So T is non-singular.
(F, k̄) can be chosen in a Jordan form, an observable form, etc.
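Procedure 8.1 can be carried out with plain linear algebra. A minimal sketch (A, b, F and k̄ below are assumed for illustration; the Sylvester equation is solved by vectorization, vec(AT − TF) = (I⊗A − Fᵀ⊗I)vec(T)):

```python
# Sketch of Procedure 8.1: solve A T - T F = b kbar for T, set k = kbar T^{-1};
# then A - b k has the eigenvalues of F.
import numpy as np

A = np.array([[1.0, 3.0], [3.0, 1.0]])       # eigenvalues 4, -2
b = np.array([[1.0], [0.0]])
F = np.array([[-1.0, 0.0], [0.0, -3.0]])     # desired poles -1, -3 (disjoint from eig A)
kbar = np.array([[1.0, 1.0]])                # (F, kbar) observable

n = A.shape[0]
M = np.kron(np.eye(n), A) - np.kron(F.T, np.eye(n))
T = np.linalg.solve(M, (b @ kbar).flatten(order="F")).reshape((n, n), order="F")

k = kbar @ np.linalg.inv(T)
poles = np.sort(np.linalg.eigvals(A - b @ k).real)
print(poles)                                  # close to [-3, -1]
```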
8.3 Regulation and tracking
1. (1) Regulator problem: find a state feedback controller so that the response due to the initial condition dies out at a desired rate.
(2) Tracking: the response y(t) approaches the reference signal r(t). Asymptotic tracking of a step reference signal; servomechanism problem: time-varying r(t).
2. Mathematical description
(1) Model: ẋ = Ax + bu, y = cx, x ∈ Rⁿ, u ∈ R.
(2) Regulation: control law u = r − kx.
The response due to x(0): y(t) = c e^{(A−bk)t} x(0).
The eigenvalues of (A − bk) determine the convergence rate.
(3) Tracking: control law u = pr − kx.
Closed-loop transfer function:
    ĝf(s) = p (β1 s³ + β2 s² + β3 s + β4) / (s⁴ + ᾱ1 s³ + ᾱ2 s² + ᾱ3 s + ᾱ4).
The eigenvalues of (A − bk) should lie within a certain region so that the tracking rate can be guaranteed.
For r(t) = a, t ≥ 0,
    lim_{t→∞} y(t) = ĝf(0)·a = p (β4/ᾱ4) a.
In order to achieve asymptotic tracking (y(t) → a), we need
    p = ᾱ4/β4, with β4 ≠ 0.
8.3.1 Robust tracking and disturbance rejection
Although p = ᾱ4/β4 achieves asymptotic tracking, it strongly depends on the parameters ᾱ4, β4. What happens if there is parameter variation?
Moreover, it cannot reject a disturbance, particularly a step disturbance.
So we need to investigate a robust design.
1. Model
    ẋ = Ax + bu + b_w w, y = cx,
where w is a constant disturbance with unknown magnitude.
2. Solution: augment the plant with an integrator of the tracking error, ẋa = r − y, and use u = −kx + ka·xa.
Design k, ka to achieve robust tracking and disturbance rejection. The closed-loop equation:
    [ẋ; ẋa] = [A−bk  b·ka; −c  0][x; xa] + [0; 1] r + [b_w; 0] w
    y = [c 0][x; xa].
3. Theorem 8.5 (page 244). If (A, b) is controllable and ĝ(s) = c(sI − A)⁻¹b has no zero at s = 0, then all eigenvalues of
    [A−bk  b·ka; −c  0]
can be assigned arbitrarily by selecting k, ka.
Proof:
    [A−bk  b·ka; −c  0] = [A 0; −c 0] − [b; 0][k  −ka] = Â − b̂[k  −ka].
We prove the theorem by showing the controllability of (Â, b̂).
Use a controllability canonical form of (A, b):
    Ā = [−α1 −α2 −α3 −α4; 1 0 0 0; 0 1 0 0; 0 0 1 0], b̄ = [1; 0; 0; 0], c̄ = [β1 β2 β3 β4],
with β4 = N(0) ≠ 0 because ĝ(s) has no zero at s = 0.
(1) We investigate the PBH equation
    vᵀÂ = λvᵀ, vᵀb̂ = 0, with vᵀ = [v1 v2 v3 v4 v5].
vᵀb̂ = v1, so v1 = 0. Together with vᵀÂ = λvᵀ (column by column) this yields
    v2 − β1 v5 = λv1 = 0
    v3 − β2 v5 = λv2
    v4 − β3 v5 = λv3
    −β4 v5 = λv4
    0 = λv5  ⟹  λ = 0 or v5 = 0.
    λ ≠ 0, v5 = 0 ⟹ v2 = v3 = v4 = 0 ⟹ v = 0;
    λ = 0 ⟹ β4 v5 = 0, and β4 ≠ 0 ⟹ v5 = 0 ⟹ v2 = v3 = v4 = 0 ⟹ v = 0.
Contradiction with v ≠ 0. So vᵀ[Â − λI  b̂] = 0 has no non-zero solution and (Â, b̂) is controllable.
(2) Actually β4 ≠ 0 is necessary. Suppose β4 = 0. Choose vᵀ = [0 β1 β2 β3 1]. One can verify that
    vᵀÂ = λvᵀ with λ = 0, and vᵀb̂ = 0,
i.e., vᵀ[Â − λI  b̂] = 0 with v ≠ 0, so (Â, b̂) is not controllable.
(3) By Theorem 8.3, we can arbitrarily assign the eigenvalues of [A−bk  b·ka; −c  0]. At least, we can require that [A−bk  b·ka; −c  0] be asymptotically stable.
4. Verify the robust tracking and disturbance rejection
(1) Equivalent system: after the state feedback, the plant seen by the integrator ka/s is
    ĝ(s) = c(sI − (A−bk))⁻¹b = N(s)/Δ̄f(s),
    Δ̄f(s) = s⁴ + ᾱ1 s³ + ᾱ2 s² + ᾱ3 s + ᾱ4,
    N(s) = β1 s³ + β2 s² + β3 s + β4, β4 ≠ 0
(state feedback does not change the numerator). The loop is closed with unity feedback around (ka/s)·ĝ(s), with w entering at the plant input. The Laplace transform of y(t) is
    ŷ(s) = ĝ_yr(s) r̂(s) + ĝ_yw(s) ŵ(s).
(i) ĝ_yr(s) = [(ka/s)ĝ(s)] / [1 + (ka/s)ĝ(s)] = ka N(s) / [s Δ̄f(s) + ka N(s)]
    ĝ_yr(0) = ka N(0) / [0 + ka N(0)] = ka β4 / (ka β4) = 1,
using N(0) = β4 ≠ 0. So a step reference is tracked asymptotically, no matter how the parameters vary, as long as the closed loop remains stable.
(ii) ĝ_yw(s) = ĝ(s) / [1 + (ka/s)ĝ(s)] = s N(s) / [s Δ̄f(s) + ka N(s)], ŵ(s) = w/s.
    ŷ_w(s) = ĝ_yw(s) ŵ(s) = w N(s) / [s Δ̄f(s) + ka N(s)]
    lim_{t→∞} y_w(t) = lim_{s→0} s ŷ_w(s) = 0·w N(0) / [0 + ka N(0)] = 0.
So the constant disturbance is rejected.
Note that ka ≠ 0 (if ka = 0, then [A−bk  b·ka; −c  0] has an eigenvalue at 0).
5. Why can we achieve such good robust tracking and disturbance rejection?
We include the model of the reference input and of the disturbance (an integrator, 1/s, for steps) inside the loop. This is referred to as the "internal model principle".
(Block diagram: the overall controller = [input & disturbance model] in series with [controller], driving the plant, with the output y fed back against r.)
8.3.2 Stabilizability
When the system is not controllable, some eigenvalues of the original system cannot be modified by state feedback; these are called "uncontrollable eigenvalues". When all uncontrollable eigenvalues are stable, the system is said to be "stabilizable", because we can choose an appropriate state feedback gain to move the controllable eigenvalues to stable positions.
8.4 State estimator
1. Motivation: state feedback can do a good job (arbitrarily assign the poles):
    u = r − kx.
But what happens if we don't have access to x?
(1) Some state variables cannot be directly measured.
(2) State sensors are too expensive.
2. State estimator
Original system: ẋ = Ax + bu, y = cx, x ∈ Rⁿ, u ∈ R,
    x(t) = e^{At}x(0) + ∫₀ᵗ e^{A(t−τ)} b u(τ) dτ.
It is usually assumed that the system parameters A, b, c are known. So the only unknown is x(0).
(1) Open-loop estimator:
    x̂̇ = Ax̂ + bu,  x̂(t) = e^{At}x̂(0) + ∫₀ᵗ e^{A(t−τ)} b u(τ) dτ.
(i) If x̂(0) = x(0), we have x̂(t) = x(t). That is great.
(ii) If x̂(0) = x(0) + Δx(0), then
    e(t) = x̂(t) − x(t) = e^{At}(x̂(0) − x(0)) = e^{At}Δx(0).
If A has unstable eigenvalues, e(t) may diverge to ∞.
(2) Closed-loop estimator:
    x̂̇ = Ax̂ + bu + l(y − cx̂).
Define e(t) = x(t) − x̂(t). Then
    ė = ẋ − x̂̇ = (Ax + bu) − (Ax̂ + bu + l(y − cx̂)) = A(x − x̂) − l(cx − cx̂) = (A − lc)e.
So e(t) = e^{(A−lc)t} e(0).
If the eigenvalues of A − lc are all stable, then e(t) → 0 as t → ∞, even when A is unstable.
Can we guarantee the stability of A − lc?
4. Theorem 8.O3 (page 250)
All eigenvalues of A − lc can be arbitrarily assigned by selecting l ⟺ (c, A) is observable.
Proof: (c, A) observable ⟺ (Aᵀ, cᵀ) controllable ⟹ Aᵀ − cᵀk can have any eigenvalues by choosing k ⟹ (Aᵀ − cᵀk)ᵀ = A − kᵀc can have any eigenvalues. We choose l = kᵀ. Then all eigenvalues of A − lc can be arbitrarily assigned by selecting l = kᵀ.
5. Procedure 8.O1 (page 250)
We can find an appropriate l by solving a Lyapunov equation.
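A minimal numerical sketch (the plant and the observer gain below are assumed; l was computed by hand to match a desired characteristic polynomial): choosing l places the eigenvalues of A − lc, so the estimation error decays even though A is unstable.

```python
# Observer-gain check: A - l c should carry the chosen stable eigenvalues.
import numpy as np

A = np.array([[1.0, 3.0], [3.0, 1.0]])   # eigenvalues 4, -2 (unstable)
c = np.array([[1.0, 0.0]])
l = np.array([[7.0], [7.0]])             # hand-computed: places poles at -2, -3

obs_poles = np.sort(np.linalg.eigvals(A - l @ c).real)
print(obs_poles)                          # close to [-3, -2]
```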
8.4.1 Reduced-dimensional state estimator
Any observable system can be transformed so that
    y = [1 0 ⋯ 0] x = x1.
So we already know x1 and just need to estimate x2, …, xn. A reduced-dimensional ((n−1)-D) observer is to be designed.
1. One design method: Procedure 8.R1 (page 251). Its validity is guaranteed by Theorem 8.6 (page 252).
2. Another reduced-dimensional estimator:
We assume the system takes the form
    x = [x1; x2], x1 ∈ R, x2 ∈ R^{n−1}
    [ẋ1; ẋ2] = [A11 A12; A21 A22][x1; x2] + [b1; b2] u
    y = [1 0 ⋯ 0] x = x1.
(i) The first state equation:
    ẋ1 = A11 x1 + A12 x2 + b1 u.
Introduce a new "virtual" output z = A12 x2, which can be computed as
    z = ẋ1 − A11 x1 − b1 u  (like a real output).
(ii) The second state equation is
    ẋ2 = A21 x1 + A22 x2 + b2 u = A22 x2 + ū,  where ū = A21 x1 + b2 u is known.
Then we get an (n−1)-D system
    ẋ2 = A22 x2 + ū,  z = A12 x2.
Claim: (A12, A22) is observable.
Proof: If (A12, A22) is NOT observable, then there exist λ and v2 ∈ R^{n−1}, v2 ≠ 0, such that
    A12 v2 = 0, A22 v2 = λv2.
Let v = [0; v2] ≠ 0. We see that
    cv = [1 0 ⋯ 0][0; v2] = 0
    Av = [A11 A12; A21 A22][0; v2] = [A12 v2; A22 v2] = [0; λv2] = λv.
So (c, A) is not observable. Contradiction!
For the observable system {ẋ2 = A22 x2 + ū, z = A12 x2}, we construct a full-dimensional state estimator as
    x̂2̇ = A22 x̂2 + ū + l2(z − A12 x̂2).
By selecting l2, we can guarantee that x2(t) − x̂2(t) → 0 as t → ∞.
(iii) With the achieved x̂2(t), together with x1(t) = y(t), we get
    x̂(t) = [y(t); x̂2(t)].
(iv) The reduced-dimensional state estimator is
    x̂2̇ = A22 x̂2 + ū + l2(z − A12 x̂2)
        = A22 x̂2 + (A21 x1 + b2 u) + l2(ẏ − A11 x1 − b1 u − A12 x̂2)
        = (A22 − l2 A12) x̂2 + (A21 − l2 A11) y + (b2 − l2 b1) u + l2 ẏ.
It is not easy to compute ẏ. To resolve this issue, define f̂(t) = x̂2(t) − l2 y(t). Then
    f̂̇ = x̂2̇ − l2 ẏ = (A22 − l2 A12) x̂2 + (A21 − l2 A11) y + (b2 − l2 b1) u
       = (A22 − l2 A12)(f̂ + l2 y) + (A21 − l2 A11) y + (b2 − l2 b1) u
       = (A22 − l2 A12) f̂ + [(A22 − l2 A12) l2 + (A21 − l2 A11)] y + (b2 − l2 b1) u.
Now no derivative is needed. In summary: instead of x̂2, we compute f̂; then
    x̂2(t) = f̂(t) + l2 y(t),  x̂(t) = [y(t); x̂2(t)].
8.5 Feedback from estimated states
1. (1) Ideal state feedback: ẋ = Ax + bu, y = cx, u = r − kx.
(2) State estimator: x̂̇ = Ax̂ + bu + l(y − cx̂).
(3) Feed back the estimated state: u = r − kx̂.
2. The overall system with control from the estimated state:
    ẋ = Ax + b(r − kx̂)
    x̂̇ = Ax̂ + b(r − kx̂) + l(y − cx̂)
    y = [c 0][x; x̂].
Let e = x − x̂. We get
    [ẋ; ė] = [A−bk  bk; 0  A−lc][x; e] + [b; 0] r
    y = [c 0][x; e].
Eigenvalues of the overall system:
    σ(A − bk) ∪ σ(A − lc)   (the separation principle).
The transfer function:
    ĝ_yr(s) = [c 0] (sI − [A−bk  bk; 0  A−lc])⁻¹ [b; 0] = c(sI − (A−bk))⁻¹ b.
The estimation error has no effect on the transfer function. Why?
When we compute a transfer function, we have already assumed that all initial conditions are 0. So e(0) = 0 (x(0) = x̂(0) = 0), which implies x(t) = x̂(t) for all t. The state e(t) is uncontrollable from r.
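The separation of eigenvalues can be verified directly. A minimal sketch (matrices and gains assumed for illustration): the spectrum of the (x, e) system is the union of σ(A − bk) and σ(A − lc).

```python
# Separation principle: overall eigenvalues = eig(A - b k) together with
# eig(A - l c), for any admissible gains k and l.
import numpy as np

A = np.array([[1.0, 3.0], [3.0, 1.0]])
b = np.array([[1.0], [0.0]])
c = np.array([[1.0, 0.0]])
k = np.array([[5.0, 5.0]])               # places feedback poles at -1, -2
l = np.array([[7.0], [7.0]])             # places observer poles at -2, -3

# overall system in (x, e) coordinates, e = x - xhat
top = np.hstack([A - b @ k, b @ k])
bot = np.hstack([np.zeros((2, 2)), A - l @ c])
overall = np.vstack([top, bot])

eigs = np.sort(np.linalg.eigvals(overall).real)
expected = np.sort(np.concatenate([np.linalg.eigvals(A - b @ k).real,
                                   np.linalg.eigvals(A - l @ c).real]))
print(np.allclose(eigs, expected))       # True
```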
8.6 State feedback – Multivariable case
1. Model: ẋ = Ax + Bu, y = Cx, x ∈ Rⁿ, u ∈ Rᵖ.
Controller: u = r − Kx.
Closed-loop system: ẋ = (A − BK)x + Br, y = Cx.
2. Theorem 8.M1 (page 255)
(A − BK, B) is controllable for any feedback gain K ⟺ (A, B) is controllable.
(Proof: PBH criterion, as in the SISO case.)
3. Theorem 8.M3 (page 256)
(A, B) is controllable ⟺ there exists K such that A − BK has its eigenvalues at any desired positions.
It is not easy to design K directly. We will spend some effort on that.
8.6.1 Cyclic design
1. Definition: A is cyclic if its characteristic polynomial equals its minimal polynomial.
A is cyclic ⟺ for each eigenvalue of A, there is only one Jordan block.
2. Theorem 8.7 (page 256): If A is cyclic and (A, B) is controllable, then for almost any vector v ∈ R^{p×1}, (A, Bv) is controllable.
Proof sketch: Consider Theorem 6.8 (page 165). Because A is cyclic, each eigenvalue has a single Jordan block, and controllability requires the corresponding last rows bl11, bl21 of B (in the Jordan basis) to be non-zero: bl11 ≠ 0, bl21 ≠ 0. For almost any v,
    bl11 v ≠ 0 and bl21 v ≠ 0.
By Theorem 6.8, (A, Bv) is then controllable, and we can arbitrarily assign the poles: A − Bvk̄ can have any eigenvalues, where k̄ ∈ R^{1×n}.
3. If A is not cyclic: for almost any K1, A − BK1 is cyclic — the probability that an eigenvalue of A − BK1 keeps more than one Jordan block is 0. This gives Theorem 8.8 (page 257).
4. Summary of the cyclic design:
    A − BK1 is almost surely cyclic ⟹ (A − BK1, Bv) is almost surely controllable ⟹ A − BK1 − Bvk̄ can be given any poles.
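The "almost any v" step can be illustrated numerically. A minimal sketch (A, B and the random seed are assumed for illustration): A is cyclic, (A, B) is controllable with two inputs, and a randomly drawn v makes the single-input pair (A, Bv) controllable.

```python
# Cyclic design, step "pick v at random": (A, Bv) is almost surely controllable.
import numpy as np

rng = np.random.default_rng(0)

A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [-6.0, -11.0, -6.0]])     # companion matrix => cyclic
B = np.array([[1.0, 0.0],
              [0.0, 0.0],
              [0.0, 1.0]])

v = rng.standard_normal((2, 1))
bv = B @ v
ctrb = np.hstack([bv, A @ bv, A @ A @ bv])
print(np.linalg.matrix_rank(ctrb))      # 3: (A, Bv) is controllable
```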
8.6.2 Lyapunov-equation method
We solve a Lyapunov (Sylvester) equation to get the gain K: Procedure 8.M1, Theorem 8.M4 (page 259).
(A, B) controllable and (F, K̄) observable (with σ(A) ∩ σ(F) = ∅) are now only necessary conditions for the non-singularity of T, because [B AB A²B A³B] is not square in the multivariable case.
8.6.3 Canonical-form method
1. Example: n = 6, μ1 = 4, μ2 = 2 (controllability indices).
    ẋ = Ax + Bu, y = Cx, B = [b1 b2].
    M = [b1 Ab1 A²b1 A³b1 | b2 Ab2] is non-singular.
Solve
    [v1ᵀ; v2ᵀ] [b1 Ab1 A²b1 A³b1 | b2 Ab2] = [0 0 0 1 0 0; 0 0 0 0 0 1]   (unique solution).
Then
    v1ᵀb1 = 0, v1ᵀAb1 = 0, v1ᵀA²b1 = 0, v1ᵀA³b1 = 1,  v1ᵀb2 = 0, v1ᵀAb2 = 0,
    v2ᵀb1 = 0, v2ᵀAb1 = 0, v2ᵀA²b1 = 0, v2ᵀA³b1 = 0,  v2ᵀb2 = 0, v2ᵀAb2 = 1.
Let P = [v1ᵀA³; v1ᵀA²; v1ᵀA; v1ᵀ; v2ᵀA; v2ᵀ] (rows). P is non-singular because
    PM = [1 * * * * *;
          0 1 * * 0 *;
          0 0 1 * 0 0;
          0 0 0 1 0 0;
          0 0 0 * 1 *;
          0 0 0 0 0 1]
is non-singular.
Similarity transformation x̄ = Px: ẋ̄ = Āx̄ + B̄u, y = C̄x̄, with
    B̄ = PB = [v1ᵀA³b1 v1ᵀA³b2; v1ᵀA²b1 v1ᵀA²b2; v1ᵀAb1 v1ᵀAb2; v1ᵀb1 v1ᵀb2; v2ᵀAb1 v2ᵀAb2; v2ᵀb1 v2ᵀb2]
       = [1 b12; 0 0; 0 0; 0 0; 0 1; 0 0].
Why is v1ᵀA²b2 = 0? Because A²b2 is a linear combination of {b1, b2, Ab1, Ab2, A²b1}, and
    v1ᵀb1 = 0, v1ᵀb2 = 0, v1ᵀAb1 = 0, v1ᵀAb2 = 0, v1ᵀA²b1 = 0.
Generally, if μ1 ≥ μ2, then v1ᵀA^{μ1−1−l} b2 = 0 for l = 1, 2, …, μ1 − 1.
Ā = PAP⁻¹ is computed from ĀP = PA = [v1ᵀA⁴; v1ᵀA³; v1ᵀA²; v1ᵀA; v2ᵀA²; v2ᵀA].
Writing v1ᵀA⁴ and v2ᵀA² as linear combinations of the rows of P defines the coefficients αijk:
    v1ᵀA⁴ = −α111 v1ᵀA³ − α112 v1ᵀA² − α113 v1ᵀA − α114 v1ᵀ − α121 v2ᵀA − α122 v2ᵀ
    v2ᵀA² = −α211 v1ᵀA³ − α212 v1ᵀA² − α213 v1ᵀA − α214 v1ᵀ − α221 v2ᵀA − α222 v2ᵀ.
So
    Ā = [−α111 −α112 −α113 −α114 | −α121 −α122;
            1     0     0     0  |    0     0 ;
            0     1     0     0  |    0     0 ;
            0     0     1     0  |    0     0 ;
         −α211 −α212 −α213 −α214 | −α221 −α222;
            0     0     0     0  |    1     0 ].
Let u = r − K̄x̄ with
    K̄ = [1 b12; 0 1]⁻¹ [ᾱ111−α111  ᾱ112−α112  ᾱ113−α113  ᾱ114−α114  ᾱ121−α121  ᾱ122−α122;
                          −α211      −α212      −α213      −α214      ᾱ221−α221  ᾱ222−α222].
Since B̄ only feeds rows 1 and 5, B̄K̄ changes only those rows, and
    Ā − B̄K̄ = [−ᾱ111 −ᾱ112 −ᾱ113 −ᾱ114 | −ᾱ121 −ᾱ122;
                  1     0     0     0  |    0     0 ;
                  0     1     0     0  |    0     0 ;
                  0     0     1     0  |    0     0 ;
                  0     0     0     0  | −ᾱ221 −ᾱ222;
                  0     0     0     0  |    1     0 ].
The (2,1) block is zero, so the characteristic polynomial of Ā − B̄K̄ is
    Δ̄(s) = (s⁴ + ᾱ111 s³ + ᾱ112 s² + ᾱ113 s + ᾱ114)(s² + ᾱ221 s + ᾱ222),
whose coefficients can be chosen arbitrarily.
8.6.4 Effects of state feedback on the transfer matrices (page 260)
8.7 State estimator – Multivariable case (page 263)
How about the reduced-dimensional estimator?
When C has full row rank, find T so that P = [C; T] is non-singular, and partition the new state as x̄ = [y; x̄2].
8.8 Feedback from estimated states – Multivariable case (page 265)
Same as the SISO case.
Homework: 8.1, 8.4, 8.6, 8.9, 8.10, 8.11, 8.13
Chapter 7 Minimal realizations and coprime fractions
7.1 Introduction
1. An LTI system:
I/O model: y(t) = ∫₀ᵗ G(t−τ) u(τ) dτ, Ĝ(s) = L[G(t)].
The system is lumped ⟺ Ĝ(s) is rational and proper.
State-space model: ẋ = Ax + Bu, y = Cx + Du.
Definition: Ĝ(s) is realizable if there is a state-space equation whose transfer matrix equals Ĝ(s).
2. Why do we need to realize Ĝ(s) by a state-space equation?
(i) It is more efficient and accurate to compute the response from the state-space equation than from the convolution integral.
(ii) After realizing Ĝ(s) into a state equation, we can use op-amp circuits to implement it (physically realize Ĝ(s)).
3. Many state equations may have the same transfer matrix (Examples 4.6, page 103; 4.7, page 105). Which one should we use?
The minimal-dimensional (minimal) one, i.e., the one with the lowest order. How to find the minimal realization? That is our task in this chapter.
7.2 Implications of coprimeness (SISO)
1. The decomposition of a proper rational transfer function:
    ĝ(s) = ĝ(∞) + ĝ_sp(s),
where ĝ_sp(s) is strictly proper. If we can realize ĝ_sp(s), we can easily realize ĝ(s) by defining the D matrix as ĝ(∞). So we focus only on strictly proper transfer functions in the sequel.
2. A simple realization of a strictly proper transfer function
    ĝ(s) = N(s)/D(s) = (β1 s³ + β2 s² + β3 s + β4)/(s⁴ + α1 s³ + α2 s² + α3 s + α4).
D(s) has degree 4 and is monic (its leading coefficient is 1).
(1) ŷ(s) = [N(s)/D(s)] û(s). Let v̂(s) = D(s)⁻¹ û(s). Then we have
    D(s)v̂(s) = û(s),  ŷ(s) = N(s)v̂(s).
(2) Define the state variables as
    x̂(s) = [x̂1; x̂2; x̂3; x̂4] = [s³; s²; s; 1] v̂(s),
i.e., x1 = v⁽³⁾(t), x2 = v̈(t), x3 = v̇(t), x4 = v(t). Then
    ẋ2 = x1, ẋ3 = x2, ẋ4 = x3.
We still need ẋ1. From D(s)v̂(s) = û(s):
    s⁴v̂(s) + α1 s³ v̂(s) + α2 s² v̂(s) + α3 s v̂(s) + α4 v̂(s) = û(s).
Since s x̂1(s) = s⁴ v̂(s), this reads
    s x̂1(s) = −α1 x̂1(s) − α2 x̂2(s) − α3 x̂3(s) − α4 x̂4(s) + û(s),
i.e.,
    ẋ1 = −α1 x1 − α2 x2 − α3 x3 − α4 x4 + u.
(3) ŷ(s) = β1 s³ v̂(s) + β2 s² v̂(s) + β3 s v̂(s) + β4 v̂(s) = β1 x̂1(s) + β2 x̂2(s) + β3 x̂3(s) + β4 x̂4(s), i.e.,
    y = [β1 β2 β3 β4] x.
(4) We get a realization (the controllable canonical form):
    ẋ = Ax + bu = [−α1 −α2 −α3 −α4; 1 0 0 0; 0 1 0 0; 0 0 1 0] x + [1; 0; 0; 0] u
    y = cx = [β1 β2 β3 β4] x.
(5) Check its controllability:
Solve vᵀ[A − λI  b] = 0, i.e., vᵀA = λvᵀ, vᵀb = 0, with vᵀ = [v1 v2 v3 v4].
vᵀb = v1 = 0. Then vᵀA = λvᵀ, read column by column, gives
    v2 = λv1 = 0 ⟹ v3 = λv2 = 0 ⟹ v4 = λv3 = 0.
So the equation has only the zero solution for any λ. The system is controllable.
(6) Check observability, which is related to coprimeness between N(s) and D(s).
(i) Definition (coprimeness): Two polynomials are coprime if they have NO common factor of degree at least 1.
A polynomial R(s) is a common factor of D(s) and N(s) if
    D(s) = D̄(s)R(s),  N(s) = N̄(s)R(s),
where D̄(s), N̄(s) are polynomials.
(ii) Definition (greatest common divisor, gcd): R(s) is a gcd of D(s) and N(s) if
(a) R(s) is a common factor/divisor of D(s) and N(s);
(b) any common divisor R̃(s) of D(s) and N(s) divides R(s).
(iii) D(s) and N(s) are coprime if and only if deg gcd(D(s), N(s)) = 0.
(iv) Theorem 7.1 (page 187): (c, A) is observable ⟺ D(s) and N(s) are coprime.
Proof: (a) observability ⟹ coprimeness.
Suppose D(s) and N(s) are NOT coprime. Then they have a common root λ1:
    N(λ1) = β1λ1³ + β2λ1² + β3λ1 + β4 = 0
    D(λ1) = λ1⁴ + α1λ1³ + α2λ1² + α3λ1 + α4 = 0.
Let v = [λ1³; λ1²; λ1; 1] ≠ 0. We can verify that
    Av = [−α1λ1³ − α2λ1² − α3λ1 − α4; λ1³; λ1²; λ1] = [λ1⁴; λ1³; λ1²; λ1] = λ1 v
(using D(λ1) = 0), and
    cv = β1λ1³ + β2λ1² + β3λ1 + β4 = N(λ1) = 0.
So [(A − λ1 I); c] v = 0 has a non-zero solution, i.e., rank [(A − λ1 I); c] < n = 4. The system is, therefore, NOT observable. Contradiction!
(b) coprimeness ⟹ observability.
Suppose the system is NOT observable. Then there exist λ and v = [v1 v2 v3 v4]ᵀ ≠ 0 such that
    Av = λv, cv = 0.
Av = λv gives, row by row,
    −α1v1 − α2v2 − α3v3 − α4v4 = λv1
    v1 = λv2, v2 = λv3, v3 = λv4.
If v4 = 0, then v3 = v2 = v1 = 0, i.e., v = 0 — excluded. So v4 ≠ 0, and
    v3 = λv4, v2 = λ²v4, v1 = λ³v4.
The first row then reads
    −α1λ³v4 − α2λ²v4 − α3λv4 − α4v4 = λ⁴v4 ⟹ D(λ)v4 = 0 ⟹ D(λ) = 0.
cv = 0 gives
    β1λ³v4 + β2λ²v4 + β3λv4 + β4v4 = N(λ)v4 = 0 ⟹ N(λ) = 0.
From D(λ) = 0 and N(λ) = 0, D(s) and N(s) have the common factor (s − λ), which contradicts their coprimeness.
(7) Observable canonical form (page 188).
(8) Equivalently transformed realization. Start from the controllable form
    ẋ = Ax + bu = [−α1 −α2 −α3 −α4; 1 0 0 0; 0 1 0 0; 0 0 1 0] x + [1; 0; 0; 0] u
    y = cx = [β1 β2 β3 β4] x.
Let x̄ = Px with
    P = [0 0 0 1; 0 0 1 0; 0 1 0 0; 1 0 0 0]   (reversal of the state order).
Then
    ẋ̄ = Āx̄ + b̄u = [0 1 0 0; 0 0 1 0; 0 0 0 1; −α4 −α3 −α2 −α1] x̄ + [0; 0; 0; 1] u
    y = c̄x̄ = [β4 β3 β2 β1] x̄.
We can also get a transformed observable form (page 189).
7.2.1 Minimal realizations
1. Coprime fraction based on the fraction of polynomials:
    ĝ(s) = N(s)/D(s), gcd(N(s), D(s)) = R(s)
    ĝ(s) = [N̄(s)R(s)]/[D̄(s)R(s)] = N̄(s)/D̄(s), with N̄(s), D̄(s) coprime: a coprime fraction of ĝ(s).
D̄(s) is called a characteristic polynomial of ĝ(s), and deg D̄(s) =: deg ĝ(s).
If D̄(s) is required to be monic, then D̄(s) is unique.
Example:
    ĝ(s) = (s² − 1)/(4s³ − 4) = [(s−1)(s+1)] / [4(s−1)(s²+s+1)]
         = (s+1)/[4(s²+s+1)]                (coprime)
         = (s/4 + 1/4)/(s² + s + 1)         (monic, coprime).
2. Minimality condition: Theorem 7.2 (page 189)
(A, b, c, d) is a minimal realization of ĝ(s) ⟺ (A, b) is controllable and (c, A) is observable ⟺ dim A = deg ĝ(s).
Proof: (1) minimality ⟺ controllability + observability.
(i) If a realization is minimal, then it is both controllable and observable. Otherwise we could do a controllability/observability decomposition
    ẋ̄ = [Ā1 Ā12; 0 Ā2] x̄ + [B̄1; 0] u,  y = [C̄1 C̄2] x̄,  ĝ(s) = C̄1(sI − Ā1)⁻¹B̄1,
and realize ĝ(s) with the lower-order system ẋ̄1 = Ā1x̄1 + B̄1u, y = C̄1x̄1.
(ii) If the order-n realization (A, b, c, d) is both controllable and observable, then
    CTRB = [b Ab ⋯ A^{n−1}b],  OBSV = [c; cA; ⋯; cA^{n−1}]
are non-singular (they are square matrices).
Suppose a minimal realization (Ā, b̄, c̄, d̄) has a lower order n1 < n. Expanding both transfer functions in powers of s⁻¹:
    ĝ(s) = d + c(sI − A)⁻¹b = d + cb s⁻¹ + cAb s⁻² + cA²b s⁻³ + ⋯
    ĝ(s) = d̄ + c̄(sI − Ā)⁻¹b̄ = d̄ + c̄b̄ s⁻¹ + c̄Āb̄ s⁻² + c̄Ā²b̄ s⁻³ + ⋯
So d = d̄ and cAᵐb = c̄Āᵐb̄ for m = 0, 1, 2, …. Hence
    OBSV·CTRB = [cb cAb ⋯ cA^{n−1}b; cAb cA²b ⋯ cAⁿb; ⋮; cA^{n−1}b cAⁿb ⋯ cA^{2n−2}b] = ŌC̄,
where Ō = [c̄; c̄Ā; ⋯; c̄Ā^{n−1}] and C̄ = [b̄ Āb̄ ⋯ Ā^{n−1}b̄].
The left side has rank n (OBSV and CTRB are non-singular), while rank(ŌC̄) ≤ rank C̄ ≤ n1 < n. We get a contradiction.
Therefore (A, b, c, d) is a minimal realization.
(iii) controllability + observability ⟺ coprimeness of N(s), D(s) (Theorem 7.1) ⟺ dim A = deg ĝ(s).
3. Theorem 7.3 (page 191): All minimal realizations of ĝ(s) are equivalent.
Proof: (i) All minimal realizations have the same order, dim A = deg ĝ(s).
(ii) Choose two minimal realizations (A, b, c, d) and (Ā, b̄, c̄, d̄). As above,
    d = d̄, cAᵐb = c̄Āᵐb̄, m = 0, 1, 2, …
So OC = ŌC̄ and OAC = ŌĀC̄, where O, C, Ō, C̄ are the (square, non-singular) observability and controllability matrices. Let
    P = Ō⁻¹O = C̄C⁻¹.
Then
    Ā = Ō⁻¹(OAC)C̄⁻¹ = (Ō⁻¹O)A(CC̄⁻¹) = PAP⁻¹
    C̄ = Ō⁻¹OC = PC ⟹ b̄ = Pb (first columns)
    Ō = OCC̄⁻¹ = OP⁻¹ ⟹ c̄ = cP⁻¹ (first rows).
4. Summary: minimality ⟺ controllability + observability ⟺ coprimeness ⟺ dim A = deg ĝ(s).
For a minimal realization,
    λ is an eigenvalue of A (controllable canonical form) ⟺ λ is a pole of ĝ(s)
    asymptotic stability ⟺ BIBO stability.
7.3 Computing coprime fractions
1. How can we obtain a coprime fraction of ĝ(s)?
(i) Find a (possibly non-coprime) fraction, then cancel the common factors between the numerator and the denominator.
(ii) A linear algebraic method.
2. Linear algebraic method:
(i) Problem formulation: ĝ(s) = N(s)/D(s). Two questions to ask:
(a) Is this fraction coprime?
(b) If not, how can we get a coprime fraction?
(ii) Suppose a coprime fraction is
    ĝ(s) = N̄(s)/D̄(s), deg D̄(s) ≤ deg D(s).
Then N(s)/D(s) = N̄(s)/D̄(s), which yields
    D(s)N̄(s) − N(s)D̄(s) = 0.
Suppose deg D(s) = 4. Then, when the fraction is reducible, it suffices to search for D̄(s), N̄(s) of degree ≤ 3. Write
    D(s) = D0 + D1 s + D2 s² + D3 s³ + D4 s⁴, D4 ≠ 0
    N(s) = N0 + N1 s + N2 s² + N3 s³ + N4 s⁴
    D̄(s) = D̄0 + D̄1 s + D̄2 s² + D̄3 s³
    N̄(s) = N̄0 + N̄1 s + N̄2 s² + N̄3 s³.
(iii) Ds   N s   N s D s   0 yields
 D0
D
 1
 D2

D
Sm   3
 D4

0
0

 0
N0
0
0
0
0
0
N1
D0
N0
0
0
0
N2
D1
N1
D0
N0
0
N3
D2
N2
D1
N1
D0
N4
D3
N3
D2
N2
D1
0
D4
N4
D3
N3
D2
0
0
0
D4
N4
D3
0
0
0
0
0
D4
S
0   N 0 
s 0 
 1


0   D0 
s 
s 2 
0    N1 
 3


N 0   D1 
s
 0,  4 



N1   N 2
s 
 5


N 2   D2 
s 
s 6 



N3  N3
 


N 4   D3 
 s 7 
m
S: Sylvester resultant
Proposition: D(s) and N(s) are coprime
S is non-singular, i.e., Sm=0 has the unique solution m=0.
(iv) If S is singular, gˆ s  
gˆ s  
N s 
can be reduced to
Ds 
N s 
, degD s   degDs  , D s , N s  are coprime.
D s 
(a). D-columns: Those formed from Di . They are linearly independent of their left-hand-side
(LHS) columns due to D4  0 .
N-columns: Those formed from N i . If an N-column becomes linearly dependent on its LHS
columns, then all subsequent N-columns are linearly dependent of their LHS columns.
(b) (1) Let  denote the number of linearly independent N-columns in S.
(2) Primary dependent N-column: the first N-column to become linearly dependent on its LHS
columns, which is the
  1 -th N-column.
(3) S1 : the submatrix consisting of the primary dependent N-column and all its LHS columns.
94
S1 has 2   2 columns. But  S1   2  1 . So
S1v  0 has non-zero solutions. v  v1 v2  v2   2 
T
We must have v2   2  0 because the first 2   1 columns of S1 are linearly independent.
Sometime, we normalize v as
v
v
v2   2
. Then the solution v is unique.
 v1    N 0 

 v  
 2   D0 
 v3    N1 


 
D s   D0  D1s    D s  ,
v4   D1 


By
, we get
     
N s   N 0  N1s    N  s 3 .


 
     
 v   N 


 2  1  
v
D

 2   2    
D  v2  2  1  0 , D s  is monic. The solution of D s , N s  is unique.
So
N s 
is proper. Moreover, D s , N s  is coprime and the minimal fraction.
D s 
Example: ĝ(s) = (s² − 1)/(4s³ − 4), D(s) = 4s³ − 4, N(s) = s² − 1.
Here deg D = 3, so S is 6×6, with D0 = −4, D1 = D2 = 0, D3 = 4 and N0 = −1, N1 = 0, N2 = 1, N3 = 0:
    S = [−4 −1  0  0  0  0;
          0  0 −4 −1  0  0;
          0  1  0  0 −4 −1;
          4  0  0  1  0  0;
          0  0  4  0  0  1;
          0  0  0  0  4  0].
By computing the rank of S(:, 1:m) for increasing m, one finds μ = 2; the primary dependent N-column is the third N-column (column 6), so S1 = S(:, 1:2μ+2) = S(:, 1:6).
    S1v = 0 ⟹ v = [−0.1414, 0.5657, −0.1414, 0.5657, 0, 0.5657]ᵀ  (a unit-norm solution)
with the entries standing for [−N̄0, D̄0, −N̄1, D̄1, −N̄2, D̄2]ᵀ. Normalizing v ← v/v6:
    v = [−0.25, 1, −0.25, 1, 0, 1]ᵀ
    N̄(s) = 0.25 + 0.25s,  D̄(s) = 1 + s + s²
    ĝ(s) = (0.25 + 0.25s)/(1 + s + s²).
Theorem 7.4 (page 195) is a summary of the above procedure.
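The procedure above is easy to script. A minimal sketch for the same example (coefficients in ascending powers; the null vector is taken from the SVD):

```python
# Sylvester-resultant reduction of g(s) = (s^2 - 1)/(4 s^3 - 4).
import numpy as np

D = [-4.0, 0.0, 0.0, 4.0]     # D0..D3
N = [-1.0, 0.0, 1.0, 0.0]     # N0..N3

deg = 3
S = np.zeros((2 * deg, 2 * deg))
for j in range(deg):                      # column pair j: D and N shifted down j rows
    S[j:j + deg + 1, 2 * j] = D
    S[j:j + deg + 1, 2 * j + 1] = N

# S is singular here (the fraction is not coprime); take its null vector.
_, _, Vt = np.linalg.svd(S)
v = Vt[-1]
v = v / v[-1]                              # normalize the last entry to 1
print(v)                                   # proportional to [-N0b, D0b, -N1b, D1b, -N2b, D2b]
```

The normalized vector equals [−0.25, 1, −0.25, 1, 0, 1], i.e., N̄(s) = 0.25 + 0.25s and D̄(s) = 1 + s + s².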
7.3.1 QR decomposition (a more robust way to determine μ)
For an n×m matrix M, there exists an n×n orthogonal matrix Q such that QM = R, where R is upper triangular. A zero on the diagonal of R flags a column of M that is linearly dependent on its LHS columns.
For our example ĝ(s) = (s² − 1)/(4s³ − 4), D(s) = 4s³ − 4, N(s) = s² − 1:
    QS = R = [5.6569 * * * * *;
              0 1.2247 * * * *;
              0 0 5.6569 * * *;
              0 0 0 0.9129 * *;
              0 0 0 0 4.3818 *;
              0 0 0 0 0 0].
The sixth diagonal entry is 0, so column 6 — the third N-column — is the primary dependent N-column, and μ = 2, as before.
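A minimal numerical sketch of the same test (the Sylvester matrix is rebuilt here so the snippet is self-contained):

```python
# QR-based detection of the primary dependent N-column of S.
import numpy as np

D = [-4.0, 0.0, 0.0, 4.0]
N = [-1.0, 0.0, 1.0, 0.0]
S = np.zeros((6, 6))
for j in range(3):
    S[j:j + 4, 2 * j] = D
    S[j:j + 4, 2 * j + 1] = N

_, R = np.linalg.qr(S)
diag = np.abs(np.diag(R))
print(diag > 1e-10)    # only the 6th entry is False: the 3rd N-column is dependent
```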
7.4 Balanced realization
1. There are infinitely many minimal realizations (which are all equivalent). Which one should we
choose? Some criteria:
(1) Easy to implement. If there are many zeros in A, b or c, many components can be saved.
Controllable and observable forms are good at this point.
(2) Robust against parameter variation
(i) Controllable and observable forms are all sensitive to parameter variation.
(ii) Jordan form is less sensitive to parameter variation.
(iii) The following balanced realization is also less sensitive. Moreover, it is useful when reducing the order of a controller.
2. Balanced realization
Model: ẋ = Ax + bu, y = cx, controllable & observable; A is stable.
(i)/(ii) The controllability Gramian Wc = ∫₀^∞ e^{At} b bᵀ e^{Aᵀt} dt and the observability Gramian Wo = ∫₀^∞ e^{Aᵀt} cᵀ c e^{At} dt satisfy the following algebraic (Lyapunov) equations:
A Wc + Wc Aᵀ = −b bᵀ
Aᵀ Wo + Wo A = −cᵀ c
The solutions to the above equations exist and are unique. They are positive definite when (A,b) is controllable and (c,A) is observable.
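Numerically, the Gramians are obtained from the Lyapunov equations rather than from the integrals. A minimal sketch (not part of the original notes, assuming scipy; the system matrices are illustrative):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# An arbitrary stable, controllable and observable example system.
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
b = np.array([[0.0], [1.0]])
c = np.array([[1.0, 0.0]])

# A Wc + Wc A^T = -b b^T  and  A^T Wo + Wo A = -c^T c
Wc = solve_continuous_lyapunov(A, -b @ b.T)
Wo = solve_continuous_lyapunov(A.T, -c.T @ c)

# Both Gramians are symmetric positive definite.
print(np.linalg.eigvalsh(Wc))
print(np.linalg.eigvalsh(Wo))
```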
(iii) Different minimal realizations of the SAME transfer function may have different controllability/observability Gramians.
Example: [the displayed matrices are garbled in this copy] a one-parameter family of minimal realizations, parameterized by α > 0, all realizing the same ĝ(s) = (3s+18)/(s²+3s+18). The Gramians Wc and Wo depend on α individually. Note, however, that WcWo = [0.25 0; 0 1] for every α — an interesting phenomenon.
(iv) Theorem 7.5 (for stable ĝ(s)): For two given minimal realizations (A,b,c) and (Ā,b̄,c̄), the products WcWo and W̄cW̄o are similar, and their eigenvalues are all real and positive.
Proof: Because (A,b,c) and (Ā,b̄,c̄) are equivalent, there exists P such that
Ā = PAP⁻¹, b̄ = Pb, c̄ = cP⁻¹
(i) Because Wc = ∫₀^∞ e^{At} b bᵀ e^{Aᵀt} dt and Wo = ∫₀^∞ e^{Aᵀt} cᵀ c e^{At} dt, we get
W̄c = P Wc Pᵀ, W̄o = (P⁻¹)ᵀ Wo P⁻¹
So W̄cW̄o = P WcWo P⁻¹, i.e., WcWo and W̄cW̄o are similar.
(ii) (A,b) is controllable ⇒ Wc > 0; (c,A) is observable ⇒ Wo > 0.
There must exist a nonsingular R such that Wc = RᵀR. Then
det(sI − WcWo) = det(sI − RᵀRWo) = det(sI − RWoRᵀ)
(the matrices RᵀRWo and RWoRᵀ share the same characteristic polynomial).
RWoRᵀ > 0 ⇒ the eigenvalues of RWoRᵀ are all real & positive ⇒ the eigenvalues of WcWo are all real & positive.
(v) The eigenvalues of WcWo are denoted σ₁² ≥ σ₂² ≥ ... ≥ σₙ² > 0. Let
Σ = diag(σ₁, σ₂, ..., σₙ)
The σᵢ (i = 1, ..., n) are called the Hankel singular values. The product WcWo of any minimal realization is similar to Σ².
(vi) For any minimal realization (A,b,c), there exists a transformation x̄ = Px such that W̄c = W̄o = Σ. This is called a balanced realization.
Proof: (1) Write Wc = RᵀR (R nonsingular). RWoRᵀ and WcWo have the same eigenvalues σ₁², ..., σₙ². Since RWoRᵀ is symmetric positive definite, it has an orthonormal eigendecomposition
RWoRᵀ = U Σ² Uᵀ, UᵀU = I, i.e., Uᵀ R Wo Rᵀ U = Σ²
and therefore
Σ^{−1/2} Uᵀ R Wo Rᵀ U Σ^{−1/2} = Σ
(2) Let P = Σ^{1/2} Uᵀ (Rᵀ)⁻¹, so that P⁻¹ = Rᵀ U Σ^{−1/2}, and let x̄ = Px. Then
W̄o = (P⁻¹)ᵀ Wo P⁻¹ = Σ^{−1/2} Uᵀ R Wo Rᵀ U Σ^{−1/2} = Σ^{−1/2} Σ² Σ^{−1/2} = Σ
W̄c = P Wc Pᵀ = Σ^{1/2} Uᵀ (Rᵀ)⁻¹ (RᵀR) R⁻¹ U Σ^{1/2} = Σ^{1/2} Uᵀ U Σ^{1/2} = Σ
(vii) Output-normal realization
From Uᵀ R Wo Rᵀ U = Σ² we get Σ⁻¹ Uᵀ R Wo Rᵀ U Σ⁻¹ = I.
Let P = Σ Uᵀ (Rᵀ)⁻¹. Then W̄o = I and W̄c = Σ².
(viii) Input-normal realization: similarly, P can be chosen so that W̄o = Σ², W̄c = I.
(ix) Model reduction based on the balanced realization
Partition the balanced system as
[ẋ₁; ẋ₂] = [A₁₁ A₁₂; A₂₁ A₂₂][x₁; x₂] + [b₁; b₂]u
y = [c₁ c₂]x
with Wc = Wo = Σ = diag(Σ₁, Σ₂).
Statements:
(1) The truncated subsystem ẋ₁ = A₁₁x₁ + b₁u, y = c₁x₁ is balanced.
(2) If the diagonal entries of Σ₂ are much less than those of Σ₁, the transfer function of the truncated subsystem is close to that of the original system.
Note that the above statements are built upon the assumption that A is stable.
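Statement (1) can be checked numerically: the leading blocks of the balanced Lyapunov equations close on themselves, so the truncated subsystem satisfies its own Lyapunov equations with Σ₁. A sketch (not part of the original notes, assuming numpy/scipy; the test system is random but stable):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

rng = np.random.default_rng(0)
n, r = 4, 2                     # full order n, truncated order r
M = rng.standard_normal((n, n))
A = M - (np.abs(np.linalg.eigvals(M).real).max() + 1.0) * np.eye(n)  # stable
b = rng.standard_normal((n, 1))
c = rng.standard_normal((1, n))

Wc = solve_continuous_lyapunov(A, -b @ b.T)
Wo = solve_continuous_lyapunov(A.T, -c.T @ c)
R = np.linalg.cholesky(Wc).T
w2, U = np.linalg.eigh(R @ Wo @ R.T)
order = np.argsort(w2)[::-1]
sigma = np.sqrt(w2[order]); U = U[:, order]
P = np.diag(sigma ** 0.5) @ U.T @ np.linalg.inv(R.T)
Ab, bb, cb = P @ A @ np.linalg.inv(P), P @ b, c @ np.linalg.inv(P)

# Truncate to the r states with the largest Hankel singular values.
A11, b1, c1, S1 = Ab[:r, :r], bb[:r], cb[:, :r], np.diag(sigma[:r])
# The truncated subsystem is again balanced, with Gramians Sigma_1:
print(np.abs(A11 @ S1 + S1 @ A11.T + b1 @ b1.T).max())   # ~ 0
print(np.abs(A11.T @ S1 + S1 @ A11 + c1.T @ c1).max())   # ~ 0
```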
7.5 Realizations from Markov parameters
1. Markov parameters (for a strictly proper ĝ(s))
ĝ(s) = (β₁s^{n−1} + β₂s^{n−2} + ... + βₙ)/(sⁿ + α₁s^{n−1} + ... + αₙ) = h(1)s⁻¹ + h(2)s⁻² + ...
h(m), m = 1, 2, ..., are called the Markov parameters.
2. Practical method to compute h(m), m = 1, 2, ...
Multiplying both sides by the denominator,
β₁s^{n−1} + β₂s^{n−2} + ... + βₙ = (sⁿ + α₁s^{n−1} + ... + αₙ)(h(1)s⁻¹ + h(2)s⁻² + ...)
and matching the coefficients of s^{n−1}, s^{n−2}, ..., s⁰, s⁻¹, s⁻², ...:
β₁ = h(1)
β₂ = h(2) + α₁h(1)
β₃ = h(3) + α₁h(2) + α₂h(1)
...
βₙ = h(n) + α₁h(n−1) + α₂h(n−2) + ... + α_{n−1}h(1)
0 = h(m) + α₁h(m−1) + α₂h(m−2) + ... + α_{n−1}h(m−n+1) + αₙh(m−n)
where m = n+1, n+2, ...
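These relations give a simple recursion for the Markov parameters. A minimal sketch (not part of the original notes, assuming numpy; `markov` is a hypothetical helper name):

```python
import numpy as np

def markov(beta, alpha, K):
    """h(1..K) of g(s) = (beta1 s^(n-1)+...+beta_n)/(s^n + alpha1 s^(n-1)+...+alpha_n)."""
    n = len(alpha)
    h = []
    for m in range(1, K + 1):
        bm = beta[m - 1] if m <= n else 0.0
        # h(m) = beta_m - sum_{i=1}^{min(m-1,n)} alpha_i h(m-i)
        h.append(bm - sum(alpha[i - 1] * h[m - 1 - i] for i in range(1, min(m, n + 1))))
    return np.array(h)

# Example: g(s) = (s+2)/(s^2+3s+2) = 1/(s+1), so h(m) = (-1)^(m-1).
print(markov([1.0, 2.0], [3.0, 2.0], 6))   # [ 1. -1.  1. -1.  1. -1.]
```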
Hankel matrix:
T(l,p) = [ h(1)   h(2)   ...  h(p)
           h(2)   h(3)   ...  h(p+1)
           ...
           h(l)   h(l+1) ...  h(p+l−1) ]   (an l×p matrix)
Theorem 7.7 (page 201). For a strictly proper ĝ(s),
deg ĝ(s) = n ⇔ ρT(n,n) = ρT(n+k, n+l) = n for all k, l = 1, 2, ...
Proof: (1) Suppose deg ĝ(s) = n, i.e.
ĝ(s) = (Σ_{i=1}^n βᵢ s^{n−i})/(sⁿ + Σ_{i=1}^n αᵢ s^{n−i}) (coprime)
(a) When m ≥ n+1,
h(m) + α₁h(m−1) + α₂h(m−2) + ... + αₙh(m−n) = 0
So starting from the (n+1)-th column, every column is linearly dependent on its left neighbors. A similar statement holds for the rows. Therefore the columns and rows with indices larger than n contribute nothing to the rank, and ρT(n,n) = ρT(n+k, n+l).
(b) ρT(n,n) = n? Suppose ρT(n,n) < n, and let the p-th column (p ≤ n) be the first one to depend linearly on its left-hand-side columns, i.e., ρT(n, p−1) = ρT(n, p). Then there exist ᾱ₁, ..., ᾱ_{p−1} such that
1st row: h(p) = −Σ_{i=1}^{p−1} ᾱᵢ h(p−i)
l-th row: h(p+l−1) = −Σ_{i=1}^{p−1} ᾱᵢ h(p−i+l−1), l = 1, 2, ...
Defining
β̄₁ = h(1)
β̄₂ = h(2) + ᾱ₁h(1)
β̄₃ = h(3) + ᾱ₁h(2) + ᾱ₂h(1)
...
β̄_{p−1} = h(p−1) + ᾱ₁h(p−2) + ... + ᾱ_{p−2}h(1)
we obtain, exactly as in the computation of the Markov parameters,
ĝ(s) = (Σ_{i=1}^{p−1} β̄ᵢ s^{p−1−i})/(s^{p−1} + Σ_{i=1}^{p−1} ᾱᵢ s^{p−1−i})
so deg ĝ(s) ≤ p−1 < n. Contradiction!
(2) Conversely, suppose ρT(n,n) = ρT(n+k, n+l) = n for all k, l. Then the (n+1)-th column is the first one to depend linearly on its LHS columns, and we can find an n-th order fraction of ĝ(s). Moreover, it is impossible to get a lower-order fraction of ĝ(s); otherwise, by the argument above, ρT(n,n) < n.
3. Realization based on the Hankel matrix
(i) ĝ(s) = h(1)s⁻¹ + h(2)s⁻² + ...
If ĝ(s) is realized by (A,b,c), then
ĝ(s) = c(sI − A)⁻¹b = cb·s⁻¹ + cAb·s⁻² + cA²b·s⁻³ + ...
Therefore h(m) = cA^{m−1}b, m = 1, 2, ..., and
T(n,n) = OC
Define the shifted Hankel matrix
T̃(n,n) = [ h(2)   h(3)   ...  h(n+1)
            h(3)   h(4)   ...  h(n+2)
            ...
            h(n+1) h(n+2) ...  h(2n) ]   (n×n)
Then T̃(n,n) = OAC.
(ii) Companion forms:
(1) Take O = I, i.e., c = [1 0 ... 0]. Then
C = T(n,n) = [b  Ab  A²b  ...  A^{n−1}b], b = T(n,1) = [h(1); h(2); ...; h(n)], A = T̃(n,n)T⁻¹(n,n)
and ĝ(s) = (Σ_{i=1}^n βᵢ s^{n−i})/(sⁿ + Σ_{i=1}^n αᵢ s^{n−i}), where the αᵢ follow from
h(m) + α₁h(m−1) + α₂h(m−2) + ... + α_{n−1}h(m−n+1) + αₙh(m−n) = 0, m = n+1, ..., 2n
i.e.
T(n,n)·[αₙ; α_{n−1}; ...; α₁] = −[h(n+1); h(n+2); ...; h(2n)]
whose solution exists and is unique.
Verify that
A = [ 0    1        0    ...  0
      0    0        1    ...  0
      ...
      0    0        0    ...  1
      −αₙ  −α_{n−1}  ...      −α₁ ]
satisfies A·T(n,n) = T̃(n,n), i.e., A = T̃(n,n)T⁻¹(n,n): the first n−1 rows of A·T(n,n) simply shift the rows of T(n,n) up by one, and the last row is exactly the relation
h(m) + α₁h(m−1) + α₂h(m−2) + ... + α_{n−1}h(m−n+1) + αₙh(m−n) = 0, m = n+1, ..., 2n.
Example: ĝ(s) = (4s² − 2s − 6)/(2s⁴ + 2s³ + 2s² + 3s + 1) = 0·s⁻¹ + 2s⁻² − 3s⁻³ − 2s⁻⁴ + 2s⁻⁵ + 3.5s⁻⁶ + ...
For a simple ĝ(s) we can use long division to get the h(l).
ρT(4,4) = 3 ⇒ deg ĝ(s) = 3 (the original fraction is NOT coprime).
A minimal (third-order) realization:
c = [1 0 0], b = [h(1); h(2); h(3)] = [0; 2; −3], A = T̃(3,3)T⁻¹(3,3)
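The whole example can be reproduced numerically. A sketch (not part of the original notes, assuming numpy; the Markov parameters are generated by the coefficient recursion applied to the monic form (2s² − s − 3)/(s⁴ + s³ + s² + 1.5s + 0.5)):

```python
import numpy as np

beta  = [0.0, 2.0, -1.0, -3.0]   # numerator/2, padded: 0*s^3 + 2s^2 - s - 3
alpha = [1.0, 1.0, 1.5, 0.5]     # denominator/2, without the leading s^4
h = []
for m in range(1, 9):
    bm = beta[m - 1] if m <= 4 else 0.0
    h.append(bm - sum(alpha[i - 1] * h[m - 1 - i] for i in range(1, min(m, 5))))
# h = [0, 2, -3, -2, 2, 3.5, ...]

T4 = np.array([[h[i + j] for j in range(4)] for i in range(4)])
print(np.linalg.matrix_rank(T4))   # 3, so deg g = 3

n = 3
T  = np.array([[h[i + j] for j in range(n)] for i in range(n)])
Tt = np.array([[h[i + j + 1] for j in range(n)] for i in range(n)])
A = Tt @ np.linalg.inv(T)
b = np.array(h[:n]).reshape(n, 1)
c = np.zeros((1, n)); c[0, 0] = 1.0

# The triple (A, b, c) reproduces the Markov parameters: h(m) = c A^(m-1) b.
hm = [(c @ np.linalg.matrix_power(A, m) @ b).item() for m in range(8)]
print(np.allclose(hm, h))   # True
```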
T
(2) Balanced forms: (A,b,c) satisfies CC  O O
How to get it?
1
2
1
2 T
SVD of T n, n  KL  K  L
T
1
1
Let O  K2 , C  2 LT , A  O 1T n, nC 1 , b  C :,1, c  O1, :
~
Proof: (i)
7.6 Degree of transfer matrices
Assumption: every entry of Ĝ(s) is a coprime fraction.
1. Definition: degree of a proper rational transfer matrix Ĝ(s)
(1) The least common denominator of all minors of Ĝ(s) is the characteristic polynomial of Ĝ(s), denoted d(s).
(2) The McMillan degree, or degree, of Ĝ(s) is δĜ(s) = deg d(s).
Example: Ĝ₁(s) = [ 1/(s+1)  1/(s+1) ; 1/(s+1)  1/(s+1) ] = (1/(s+1))·[1 1; 1 1]
Minors: 1/(s+1), 1/(s+1), 1/(s+1), 1/(s+1) (the entries) and 0 (the 2×2 minor).
The least common denominator is d(s) = s+1, so δĜ₁(s) = 1.
Ĝ₂(s) = [ 2/(s+1)  1/(s+1) ; 1/(s+1)  1/(s+1) ]
Minors: 2/(s+1), 1/(s+1), 1/(s+1), 1/(s+1) and det Ĝ₂(s) = 1/(s+1)².
The least common denominator is d(s) = (s+1)², so δĜ₂(s) = 2.
Note that every minor must be reduced to a coprime fraction.
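The definition can be checked symbolically. A sketch (not part of the original notes, assuming sympy is available):

```python
import sympy as sp
from functools import reduce

s = sp.symbols('s')
G2 = sp.Matrix([[2/(s + 1), 1/(s + 1)],
                [1/(s + 1), 1/(s + 1)]])

# All minors: the four 1x1 minors (the entries) and the single 2x2 minor.
minors = list(G2) + [sp.cancel(G2.det())]
# Every minor reduced to a coprime fraction; zero minors contribute nothing.
dens = [sp.fraction(sp.cancel(m))[1] for m in minors if m != 0]
d = reduce(sp.lcm, dens)                 # characteristic polynomial d(s)
print(sp.factor(d), sp.degree(d, s))     # (s + 1)**2  2
```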
2. Proposition: Let Ĝ(s) be realized by a minimal (A,B,C). Then
(1) the monic least common denominator of all minors of Ĝ(s) = the characteristic polynomial of A;
(2) the monic least common denominator of all entries of Ĝ(s) = the minimal polynomial of A.
Proof: via the Jordan form of A.
7.7 Minimal realization - Matrix case
1. Theorem 7.M2 (page 207)
(A,B,C,D) is a minimal realization ⇔ (A,B) is controllable & (C,A) is observable ⇔ δĜ(s) = dim A.
Proof: similar to the scalar case.
2. All minimal realizations of Ĝ(s) are equivalent.
Proof: They have the same order dim A. Note that O is not square, but (C,A) observable implies that OᵀO is non-singular.
3. (1) If we get a controllable and observable realization, it must be minimal (lucky).
(2) A non-observable and/or non-controllable realization can be reduced to a minimal realization by the Kalman decomposition.
(3) How to get a minimal realization directly from Ĝ(s)? Our ongoing task.
7.8 Matrix polynomial fractions
1. Some definitions:
(1) Right polynomial fraction: Ĝ(s) = N(s)D⁻¹(s), where N(s), D(s) are polynomial matrices (polynomials with matrix coefficients).
(2) Left polynomial fraction: Ĝ(s) = D̄⁻¹(s)N̄(s).
(3) Right divisor R(s) of D(s) and N(s): there exist D̂(s), N̂(s) such that
D(s) = D̂(s)R(s), N(s) = N̂(s)R(s)
2. Definition: Unimodular matrix: let M(s) be a square polynomial matrix. If det M(s) = const ≠ 0, M(s) is unimodular.
If M(s) is unimodular, then M⁻¹(s) is also unimodular (M⁻¹(s) = Adj M(s)/det M(s)).
3. Greatest common right divisor (gcrd) of D(s) and N(s): R(s)
(i) R(s) is a common right divisor of D(s) and N(s).
(ii) R(s) is a left multiple of every common right divisor of D(s) and N(s): let R̄(s) be a common right divisor of N(s) and D(s); then there exists a polynomial matrix M(s) such that R(s) = M(s)R̄(s).
(iii) If R(s) is unimodular, D(s) and N(s) are right coprime.
Similar definitions work for the left coprime concepts.
4. Degree of a transfer matrix Ĝ(s)
(1) Ĝ(s) is a proper rational matrix.
(2) A right coprime fraction and a left coprime fraction are given as
Ĝ(s) = N(s)D⁻¹(s), Ĝ(s) = D̄⁻¹(s)N̄(s)
(3) The characteristic polynomial of Ĝ(s) is defined as det D(s) or det D̄(s).
(4) Degree of Ĝ(s): δĜ(s) = deg det D(s) = deg det D̄(s).
5. Equivalence of the two definitions of the degree of Ĝ(s):
(1) the degree of the least common denominator of all minors;
(2) deg det D(s), deg det D̄(s).
Outline: Ĝ(s) = L(s)·diag(g₁₁(s), g₂₂(s), ..., g_rr(s), 0, ..., 0)·R(s), where L(s), R(s) are unimodular (the Smith-McMillan form).
7.8.1 Column and row reducedness
We frequently see D⁻¹(s). Is D(s) invertible? When D(s) is column or row reduced, it is.
1. (1) Degree of a polynomial vector: the highest power among all entries of the vector.
(2) Column degree δ_ci M(s): the degree of the i-th column of M(s). Row degree δ_ri M(s): the degree of the i-th row of M(s).
(3) M(s) is column reduced if deg det M(s) = Σᵢ δ_ci M(s).
M(s) is row reduced if deg det M(s) = Σᵢ δ_ri M(s).
Example: M(s) = [ 3s²+2s  2s+1 ; s²+s−3  s ]
Column degrees: (2,1). Row degrees: (2,2). deg det M(s) = 3.
M(s) is column reduced, but not row reduced.
By pre- or post-multiplying by a unimodular matrix, any nonsingular polynomial matrix can be changed into a column- or row-reduced polynomial matrix.
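For the example above, the degrees can be checked numerically. A sketch (not part of the original notes, assuming numpy; polynomials are stored highest power first):

```python
import numpy as np

# M(s) = [[3s^2+2s, 2s+1], [s^2+s-3, s]], entries as coefficient lists
M = [[[3, 2, 0], [2, 1]],
     [[1, 1, -3], [1, 0]]]

def deg(p):
    p = np.trim_zeros(np.asarray(p, float), 'f')
    return len(p) - 1

# 2x2 determinant: M11*M22 - M12*M21
det = np.polysub(np.polymul(M[0][0], M[1][1]), np.polymul(M[0][1], M[1][0]))
col_deg = [max(deg(M[i][j]) for i in range(2)) for j in range(2)]
row_deg = [max(deg(M[i][j]) for j in range(2)) for i in range(2)]
print(det)                          # s^3 - s^2 + 5s + 3 -> [1, -1, 5, 3]
print(col_deg, row_deg, deg(det))   # [2, 1] [2, 2] 3
# column reduced: 3 == 2+1; not row reduced: 3 != 2+2
```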
2. Decomposition of a polynomial matrix M(s)
(1) Let δ_ci M(s) = k_ci. Define H_c(s) = diag(s^{k_c1}, s^{k_c2}, ...). Then M(s) can be expressed as
M(s) = M_hc H_c(s) + M_lc(s)
M_hc: the column-degree coefficient matrix, constant; M_lc(s) collects the lower-column-degree terms.
For the example above,
M(s) = [3 2; 1 1]·[s² 0; 0 s] + [2s 1; s−3 0]
(2) Let δ_ri M(s) = k_ri. Define H_r(s) = diag(s^{k_r1}, s^{k_r2}, ...). Then M(s) can be expressed as
M(s) = H_r(s) M_hr + M_lr(s)
M_hr: the row-degree coefficient matrix, constant.
(3) For coprime fractions Ĝ(s) = N(s)D⁻¹(s) = D̄⁻¹(s)N̄(s):
If D(s) is column reduced, deg det D(s) = Σᵢ δ_ci D(s).
If D̄(s) is row reduced, deg det D̄(s) = Σᵢ δ_ri D̄(s).
(4) Theorem 7.8 (page 213). Let N(s) be q×p and D(s) be p×p with D(s) column reduced. Then N(s)D⁻¹(s) is proper (strictly proper) iff δ_ci N(s) ≤ δ_ci D(s) (δ_ci N(s) < δ_ci D(s)) for every i.
7.8.2 Computing a matrix coprime fraction
1. Indirect (complicated) method:
(1) Find any right fraction Ĝ(s) = N(s)D⁻¹(s).
(2) Compute a gcrd R(s) of N(s) and D(s) (complicated): D(s) = D̂(s)R(s), N(s) = N̂(s)R(s).
(3) A right coprime fraction is Ĝ(s) = N̂(s)D̂⁻¹(s).
2. Direct method.
For a given left fraction Ĝ(s) = D̄⁻¹(s)N̄(s), D̄(s) is only required to be invertible, but it is better to require a row-reduced D̄(s) to guarantee some properties. We directly compute a right coprime fraction Ĝ(s) = N(s)D⁻¹(s).
(1) Polynomial matrix equation:
D̄(s)N(s) − N̄(s)D(s) = 0
(2) Corresponding linear algebraic equation.
Assume the highest degree of D̄(s) is 4; the highest degree of D(s) is then also 4. In general deg det D(s) ≤ deg det D̄(s), with equality iff Ĝ(s) = D̄⁻¹(s)N̄(s) is coprime; even when D̄(s) and D(s) have the same highest degree, it is still possible that deg det D(s) < deg det D̄(s). Write
D̄(s) = D̄₀ + D̄₁s + D̄₂s² + D̄₃s³ + D̄₄s⁴
N̄(s) = N̄₀ + N̄₁s + N̄₂s² + N̄₃s³ + N̄₄s⁴
D(s) = D₀ + D₁s + D₂s² + D₃s³ + D₄s⁴
N(s) = N₀ + N₁s + N₂s² + N₃s³ + N₄s⁴
Matching coefficients of like powers of s, the polynomial equation becomes SM = 0 with

S = [ D̄₀ N̄₀  0   0   0   0   0   0   0   0
      D̄₁ N̄₁  D̄₀ N̄₀  0   0   0   0   0   0
      D̄₂ N̄₂  D̄₁ N̄₁  D̄₀ N̄₀  0   0   0   0
      D̄₃ N̄₃  D̄₂ N̄₂  D̄₁ N̄₁  D̄₀ N̄₀  0   0
      D̄₄ N̄₄  D̄₃ N̄₃  D̄₂ N̄₂  D̄₁ N̄₁  D̄₀ N̄₀
       0   0  D̄₄ N̄₄  D̄₃ N̄₃  D̄₂ N̄₂  D̄₁ N̄₁
       0   0   0   0  D̄₄ N̄₄  D̄₃ N̄₃  D̄₂ N̄₂
       0   0   0   0   0   0  D̄₄ N̄₄  D̄₃ N̄₃
       0   0   0   0   0   0   0   0  D̄₄ N̄₄ ]

M = [ −N₀; D₀; −N₁; D₁; −N₂; D₂; −N₃; D₃; −N₄; D₄ ]

(i) Every D-column of S is linearly independent of its LHS D-columns and N-columns.
(ii) Every N-block column consists of p N-columns, denoted Nᵢ-columns (i = 1, ..., p).
(a) If one Nᵢ-column is linearly dependent on its LHS columns, then all subsequent Nᵢ-columns are also linearly dependent on their LHS columns.
(b) Definition: μᵢ (i = 1, ..., p) is the number of linearly independent Nᵢ-columns; μ₁, μ₂, ..., μ_p are the column indices of Ĝ(s). The first Nᵢ-column that becomes linearly dependent on its LHS columns is called the primary dependent Nᵢ-column.
(c) Corresponding to each primary dependent Nᵢ-column, we compute the monic null vector (with 1 as the last entry, which exists and is unique) vᵢ. Putting all vᵢ together, we get a right coprime fraction Ĝ(s) = N(s)D⁻¹(s), with D(s) column reduced.
Example:
Ĝ(s) = [ (4s−10)/(2s+1)    3/(s+2)
         1/((2s+1)(s+2))   (s+1)/(s+2)² ] = D̄⁻¹(s)N̄(s)
with
D̄(s) = [ 2+5s+2s²   0
          0          4+12s+9s²+2s³ ]
N̄(s) = [ −20−2s+4s²  3+6s
          2+s         1+3s+2s² ]
i.e.
D̄₀ = [2 0; 0 4], D̄₁ = [5 0; 0 12], D̄₂ = [2 0; 0 9], D̄₃ = [0 0; 0 2], D̄₄ = 0
N̄₀ = [−20 3; 2 1], N̄₁ = [−2 6; 1 3], N̄₂ = [4 0; 0 2], N̄₃ = N̄₄ = 0
Form S as above and compute its QR decomposition QR = S. [The displayed numerical factors are garbled in this copy.] From the zero pivots of R we read off the column indices μ₂ = 1 and μ₁ = 2. The monic null vectors at the two primary dependent N-columns are
S(:, 1:8)v₂ = 0 ⇒ v₂ = [−7 −1 1 2 −4 0 2 1]ᵀ
S(:, [1:7, 9:11])v₁ = 0 ⇒ v₁ = [10 −0.5 1 0 1 0 2.5 −2 0 1]ᵀ
Arranging the entries of v₁ and v₂ (in the order −N₀, D₀, −N₁, D₁, −N₂, D₂) into coefficient matrices yields a right coprime fraction Ĝ(s) = N(s)D⁻¹(s) with D(s) column reduced (column degrees 2 and 1). [The displayed D(s) and N(s) are garbled in this copy.]
3. Theorem 7.M4 (page 219)
Let Ĝ(s) = D̄⁻¹(s)N̄(s). By solving SM = 0 as above, we get a right coprime fraction Ĝ(s) = N(s)D⁻¹(s). By introducing a permutation, e.g.,
P = [0 0 1; 1 0 0; 0 1 0]
we can always guarantee that μ₁ ≤ μ₂ ≤ ... ≤ μ_p and that the column-degree coefficient matrix
D̂_hc = [ 1 * ... * ; 0 1 ... * ; ... ; 0 0 ... 1 ]
is a unit upper triangular matrix. D̂(s) is then said to be in column echelon form.
4. Similar methods give a left coprime fraction.
7.9 Realization from matrix coprime fractions
1. Model: a 2×2 strictly proper rational matrix Ĝ(s).
(1) Ĝ(s) = N(s)D⁻¹(s) (a right coprime fraction).
(2) Assume the column degrees of D(s) are μ₁ = 4, μ₂ = 2. Define
H(s) = diag(s^{μ₁}, s^{μ₂}) = diag(s⁴, s²)
L(s) = [ s³ 0; s² 0; s 0; 1 0; 0 s; 0 1 ]
(3) Let v̂(s) = D⁻¹(s)û(s). Then ŷ(s) = Ĝ(s)û(s) = N(s)D⁻¹(s)û(s) leads to
D(s)v̂(s) = û(s), ŷ(s) = N(s)v̂(s)
(4) Define the state variables
x₁ = v₁⁽³⁾, x₂ = v₁⁽²⁾, x₃ = v₁⁽¹⁾, x₄ = v₁, x₅ = v₂⁽¹⁾, x₆ = v₂
so that ẋ₂ = x₁, ẋ₃ = x₂, ẋ₄ = x₃, ẋ₆ = x₅, and L(s)v̂(s) = x̂(s).
(5) D(s) = D_hc H(s) + D_lc L(s), where the column-degree coefficient matrix D_hc is unit upper triangular, D_hc = [1 b₁₂; 0 1]. From D(s)v̂(s) = û(s),
(D_hc H(s) + D_lc L(s)) v̂(s) = û(s)
H(s)v̂(s) = −D_hc⁻¹ D_lc L(s)v̂(s) + D_hc⁻¹ û(s)
Note that H(s)v̂(s) = [s⁴v̂₁; s²v̂₂] = [s·x̂₁(s); s·x̂₅(s)], and write
D_hc⁻¹ D_lc = [ α₁₁₁ α₁₁₂ α₁₁₃ α₁₁₄ α₁₂₁ α₁₂₂ ; α₂₁₁ α₂₁₂ α₂₁₃ α₂₁₄ α₂₂₁ α₂₂₂ ]
so that
[ẋ₁; ẋ₅] = −D_hc⁻¹ D_lc x + D_hc⁻¹ u, D_hc⁻¹ = [1 −b₁₂; 0 1]
(6) Since Ĝ(s) = N(s)D⁻¹(s) is strictly proper, the column degrees of N(s) are less than the corresponding column degrees of D(s), so
N(s) = [ β₁₁₁ β₁₁₂ β₁₁₃ β₁₁₄ β₁₂₁ β₁₂₂ ; β₂₁₁ β₂₁₂ β₂₁₃ β₂₁₄ β₂₂₁ β₂₂₂ ] L(s)
ŷ(s) = N(s)v̂(s) = [β] x̂(s)
(7) Collecting everything:

ẋ = [ −α₁₁₁ −α₁₁₂ −α₁₁₃ −α₁₁₄ −α₁₂₁ −α₁₂₂
        1     0     0     0     0     0
        0     1     0     0     0     0
        0     0     1     0     0     0
      −α₂₁₁ −α₂₁₂ −α₂₁₃ −α₂₁₄ −α₂₂₁ −α₂₂₂
        0     0     0     0     1     0 ] x + [ 1 −b₁₂; 0 0; 0 0; 0 0; 0 1; 0 0 ] u

y = [ β₁₁₁ β₁₁₂ β₁₁₃ β₁₁₄ β₁₂₁ β₁₂₂ ; β₂₁₁ β₂₁₂ β₂₁₃ β₂₁₄ β₂₂₁ β₂₂₂ ] x
Remarks:
(1) It is a controllable-form realization; the controllability indices are μ₁, μ₂.
(2) It is observable and controllable ⇔ N(s) and D(s) are coprime.
(3) δĜ(s) = dim A ⇔ observable & controllable.
(4) Ĝ(s) = D̄⁻¹(s)N̄(s) = N(s)D⁻¹(s) are coprime fractions, and the characteristic polynomial of Ĝ(s) = k₁ det D(s) = k₂ det D̄(s) = k₃ det(sI − A) for nonzero constants k₁, k₂, k₃.
(5) The set of column degrees of D(s) equals the set of controllability indices of (A,B).
(6) The set of row degrees of D̄(s) equals the set of observability indices of (C,A).
7.10 Realization from matrix Markov parameters
1. Ĝ(s) = H(1)s⁻¹ + H(2)s⁻² + H(3)s⁻³ + ...
T(r,r) = [ H(1)   H(2)   ...  H(r)
           H(2)   H(3)   ...  H(r+1)
           ...
           H(r)   H(r+1) ...  H(2r−1) ]
T̃(r,r) = [ H(2)   H(3)   ...  H(r+1)
            H(3)   H(4)   ...  H(r+2)
            ...
            H(r+1) H(r+2) ...  H(2r) ],  r ≥ n
T(n,n) = OC, T̃(n,n) = OAC
2. Theorem 7.M7 (page 226). δĜ(s) = n ⇔ ρT(r,r) = n for all r ≥ n.
3. Balanced realization: as in the scalar case, via the SVD of T(r,r).
Homework: 7.1, 7.6, 7.7, 7.11, 7.12, 7.13, 7.14, 7.16, 7.17.
Chapter 9 Pole placement and model matching
9.1 Introduction
1. Problem formulation:
[Block diagram: a controller generates u from r and y; the plant is ĝ(s) with input u and output y.]
Task: design u (the controller) from y and r so that the desired transfer function
ĝ₀(s) = ŷ(s)/r̂(s)
is achieved. Our task can be decomposed into 2 steps:
(i) Pole placement (which determines stability and affects performance).
(ii) Model matching (both pole and zero placement), which determines performance.
2. Our options:
(a) open-loop; (b) 1-DOF closed-loop; (c), (d) 2-DOF closed-loop (two configurations).
3. Model requirements
(i) For (A,B,C): B has full column rank and C has full row rank.
(ii) ĝ(s) = N(s)/D(s); N(s) determines the zeros, D(s) the poles.
Minimum-phase zeros: zeros with negative real parts.
9.1.1 Compensator equations
A(s)D(s) + B(s)N(s) = F(s), to be solved for A(s), B(s).
1. Existence (Theorem 9.1, page 272)
For every F(s), solutions A(s) and B(s) exist ⇔ D(s) and N(s) are coprime.
Proof: (i) Suppose D(s) and N(s) are coprime. Then there exist Ā(s), B̄(s) with
Ā(s)D(s) + B̄(s)N(s) = 1 (by long division; see below).
Let A(s) = F(s)Ā(s), B(s) = F(s)B̄(s).
(ii) Conversely, suppose a solution exists for every F(s). By contradiction: if D(s) and N(s) are NOT coprime, their common factor must divide F(s) = A(s)D(s) + B(s)N(s), so F(s) cannot be an arbitrary polynomial.
The long division in (i) (the Euclidean algorithm):
(a) D(s) = R₁(s)N(s) + D₁(s), deg D₁(s) < deg N(s)
(b) N(s) = L₁(s)D₁(s) + N₁(s), deg N₁(s) < deg D₁(s)
(c) D₁(s) = R₂(s)N₁(s) + D₂(s), deg D₂(s) < deg N₁(s)
If D₂(s) = 0, then N₁(s) is a common factor of D(s) and N(s); the process continues until a constant remainder appears, and coprimeness means that constant is nonzero.
If D₂(s) = 1, then
D₁(s) = D(s) − R₁(s)N(s)
N₁(s) = N(s) − L₁(s)D₁(s) = −α(s)D(s) + β(s)N(s), with α = L₁, β = 1 + L₁R₁
By D₁(s) − R₂(s)N₁(s) = 1, we get
D(s) − R₁(s)N(s) − R₂(s)(−α(s)D(s) + β(s)N(s)) = 1
Merging all D(s) and N(s) terms together, we get the expected equation Ā(s)D(s) + B̄(s)N(s) = 1.
2. Uniqueness
(i) Solutions are NOT unique: for any polynomial Q(s),
A(s) = F(s)Ā(s) + Q(s)N(s), B(s) = F(s)B̄(s) − Q(s)D(s)
are also solutions.
(ii) Space of solutions: solution = a particular solution + any solution of A(s)D(s) + B(s)N(s) = 0.
(iii) More requirements on the solutions? To be considered later.
9.2 Unity-feedback configuration – pole placement
1. Architecture: unity feedback with compensator C(s) and gain p:
ĝ(s) = N(s)/D(s), C(s) = B(s)/A(s)
deg N(s) ≤ deg D(s) = n, deg B(s) ≤ deg A(s) = m
The overall system has n+m poles; we try to assign them arbitrarily.
ĝ₀(s) = p·C(s)ĝ(s)/(1 + C(s)ĝ(s)) = p·B(s)N(s)/(A(s)D(s) + B(s)N(s)) = p·B(s)N(s)/F(s)
F(s) = A(s)D(s) + B(s)N(s), deg F(s) = deg D(s) + deg A(s) = n + m
In the closed-loop system,
(1) old zeros (roots of N(s)) are retained;
(2) new zeros (roots of B(s)) are added;
(3) poles may be arbitrarily changed.
2. Pole placement
D(s) = D₀ + D₁s + ... + Dₙsⁿ, Dₙ ≠ 0
N(s) = N₀ + N₁s + ... + Nₙsⁿ
A(s) = A₀ + A₁s + ... + Aₘsᵐ
B(s) = B₀ + B₁s + ... + Bₘsᵐ
F(s) = F₀ + F₁s + ... + F_{n+m}s^{n+m}
Matching the coefficients of A(s)D(s) + B(s)N(s) = F(s):
[A₀ B₀ A₁ B₁ ... Aₘ Bₘ]·Sₘ = [F₀ F₁ ... F_{n+m}]
where Sₘ is 2(m+1) × (n+m+1); its rows are the coefficients of sⁱD(s) and sⁱN(s), i = 0, ..., m.
(i) For arbitrary [F₀ F₁ ... F_{n+m}]ᵀ ∈ R^{n+m+1}, a solution exists iff ρ(Sₘ) = n+m+1 (full column rank). But ρ(Sₘ) ≤ 2(m+1), so we need m ≥ n−1.
(ii) When m = n−1, Sₘ is 2n × 2n, and
Sₘ is nonsingular ⇔ D(s) and N(s) are coprime ⇔ a solution exists and is unique.
(iii) When m ≥ n−1 and D(s) and N(s) are coprime, a proper compensator exists to achieve any desired F(s). Theorem 9.2 (page 275).
Note that deg N(s) < deg D(s) ⇒ Nₙ = 0, and matching the leading coefficient,
AₘDₙ + BₘNₙ = F_{n+m} ≠ 0 ⇒ Aₘ ≠ 0
So C(s) = B(s)/A(s) is proper.
9.2.1 Regulation and tracking
Regulation (response to an initial state): need ĝ₀(s) BIBO stable.
Tracking a step reference input: need ĝ₀(s) BIBO stable & ĝ₀(0) = 1 (adjusted by the gain p).
9.2.2 Robust tracking and disturbance rejection
r̂(s) = L[r(t)] = N_r(s)/D_r(s), ŵ(s) = L[w(t)] = N_w(s)/D_w(s)
D_r(s), D_w(s) are known; N_r(s), N_w(s) may be unknown.
φ(s) is the least common denominator of the unstable poles of D_r(s) and D_w(s).
(1) ĝ(s) = N(s)/D(s); N(s) and φ(s) have no common roots.
[Block diagram: r̂ → + → B(s)/A(s) → 1/φ(s) → ĝ(s) → ŷ, with unity feedback from ŷ.]
Design B(s) and A(s) to stabilize the loop around (1/φ(s))ĝ(s) = N(s)/(φ(s)D(s)):
A(s)φ(s)D(s) + B(s)N(s) = F(s), F(s) stable.
Then
ŷ_w(s) = ĝ_yw(s)ŵ(s) = [N(s)A(s)φ(s)/F(s)]·[N_w(s)/D_w(s)], and y_w(t) → 0 as t → ∞
ŷ_r(s) − r̂(s) = −[A(s)D(s)φ(s)/F(s)]·[N_r(s)/D_r(s)], and y_r(t) − r(t) → 0 as t → ∞
(the internal model φ(s) cancels all unstable poles of D_w(s) and D_r(s), so both signals are inverse transforms of stable functions).
9.3 Implementable transfer functions
1. What is the best closed-loop transfer function?
Tracking: ĝ₀(s) = 1 would give y(t) = r(t). Is ĝ₀(s) = 1 achievable? Some real constraints:
(i) All compensators are proper rational functions.
(ii) No plant leakage.
(iii) Every possible input-output pair is proper (well-posed) and BIBO stable (totally stable).
Implementable = (i) + (ii) + (iii)
(a) Totally stable: no cancellation of unstable poles.
(b) Well-posed: for a unity-feedback system, C(∞)ĝ(∞) ≠ −1.
2. Theorem 9.4 (page 285). ĝ₀(s) is implementable iff ĝ₀(s) and t̂(s) = ĝ₀(s)/ĝ(s) are proper and BIBO stable.
Proof: Necessity: ĝ₀(s) is implementable ⇒
(i) ĝ₀(s) is proper and BIBO stable (it is the transfer function from r to y);
(ii) û(s)/r̂(s) = [ŷ(s)/r̂(s)] / [ŷ(s)/û(s)] = ĝ₀(s)/ĝ(s) = t̂(s) is proper and BIBO stable (it is the transfer function from r to u).
Sufficiency: by the construction below.
Corollary 9.4 (page 285).
3. Realize an implementable ĝ₀(s)
ĝ(s) = N(s)/D(s), deg D(s) = n, deg N(s) ≤ n−1
(i) Write ĝ₀(s) = E(s)/F(s).
[Block diagram: the two-parameter configuration — r̂ → L(s)/A(s) → + → ĝ(s) = N(s)/D(s) → ŷ, with feedback −M(s)/A(s) from ŷ to the summing junction.]
ĝ₀(s) = [L(s)/A(s)]·[N(s)/D(s)] / (1 + [N(s)/D(s)]·[M(s)/A(s)]) = L(s)N(s)/(A(s)D(s) + M(s)N(s))
With deg A(s) ≥ deg D(s) − 1 = n−1, the roots of A(s)D(s) + M(s)N(s) can be arbitrarily assigned.
(ii) Compute
ĝ₀(s)/N(s) = E(s)/(F(s)N(s)) =: Ē(s)/F̄(s) (a coprime fraction)
so that ĝ₀(s) = Ē(s)N(s)/F̄(s). We need
L(s)N(s)/(A(s)D(s) + M(s)N(s)) = Ē(s)N(s)/F̄(s)
If deg F̄(s) < 2n−1, solutions to F̄(s) = A(s)D(s) + M(s)N(s) may not exist. Introduce a new Hurwitz polynomial F̂(s) such that deg(F̂(s)F̄(s)) ≥ 2n−1. Then
ĝ₀(s) = L(s)N(s)/(A(s)D(s) + M(s)N(s)) = Ē(s)F̂(s)N(s)/(F̂(s)F̄(s))
so take
L(s) = Ē(s)F̂(s), A(s)D(s) + M(s)N(s) = F̂(s)F̄(s)
which has at least one solution (A(s), M(s)) with
deg A(s) = deg(F̂(s)F̄(s)) − deg D(s) ≥ n−1, deg M(s) ≤ deg A(s)
Is C₁(s) = L(s)/A(s) proper? By Corollary 9.4, t̂(s) = ĝ₀(s)/ĝ(s) = Ē(s)D(s)/F̄(s) is proper, so
deg Ē(s) ≤ deg F̄(s) − deg D(s) = deg F̄(s) − n
⇒ deg L(s) = deg Ē(s) + deg F̂(s) ≤ deg F̄(s) + deg F̂(s) − n = deg A(s)
A solution exists: C₁(s) = L(s)/A(s), C₂(s) = M(s)/A(s).
Example 9.7 (page 289)
(iii) Implementation: configurations (a), (b), (c) can yield cancellations of possibly unstable poles.
(d) Implement the two compensators with a common denominator, C′(s) = [−M(s)/A(s)  L(s)/A(s)], driven by [y; r]:
−M(s)/A(s): observable realization ẋ_y = A_c x_y + b_M y, u_y = [1 0 ... 0]x_y + d_y y
L(s)/A(s): observable realization ẋ_r = A_c x_r + b_L r, u_r = [1 0 ... 0]x_r + d_r r
Since both share the same A_c (the same denominator A(s)), they merge into a single realization
C′(s): ẋ_c = A_c x_c + [b_M  b_L][y; r], u = [1 0 ... 0]x_c + [d_y  d_r][y; r]
so that û(s) = −[M(s)/A(s)]ŷ(s) + [L(s)/A(s)]r̂(s), with deg A(s) = deg D(s) − 1.
Is the new system internally (state) stable?
Controller: ẋ_c = A_c x_c + B_r r + B_y y, u = C_c x_c + D_r r + D_y y
with B_r = b_L, B_y = b_M, C_c = [1 0 ... 0], D_r = d_r, D_y = d_y.
Plant: ẋ = Ax + Bu, y = Cx.
The overall system:
[ẋ; ẋ_c] = [ A + B D_y C   B C_c ; B_y C   A_c ][x; x_c] + [ B D_r ; B_r ] r
y = [C 0][x; x_c]
Write A₀ = [ A + B D_y C   B C_c ; B_y C   A_c ]. What is det(sI − A₀)?
sI − A₀ = [ sI − A − B D_y C   −B C_c ; −B_y C   sI − A_c ]
Eliminating the off-diagonal block,
det(sI − A₀) = det(sI − A_c) · det( sI − A − B D_y C − B C_c (sI − A_c)⁻¹ B_y C ) = A(s)Δ(s)
where det(sI − A_c) = A(s) (A_c realizes the denominator A(s)) and
Δ(s) = det( sI − A − B [D_y + C_c(sI − A_c)⁻¹ B_y] C )
     = det( sI − A + B [M(s)/A(s)] C )   (since D_y + C_c(sI − A_c)⁻¹B_y = −M(s)/A(s))
     = det(sI − A) · det( I + (sI − A)⁻¹ B [M(s)/A(s)] C )
     = D(s) · ( 1 + C(sI − A)⁻¹B · M(s)/A(s) )   (det(I + XY) = det(I + YX))
     = D(s) · ( 1 + [N(s)/D(s)]·[M(s)/A(s)] )
So
det(sI − A₀) = A(s)Δ(s) = A(s)D(s)·( 1 + N(s)M(s)/(D(s)A(s)) ) = D(s)A(s) + N(s)M(s)
We get the desired characteristic polynomial.
Homework: 9.1, 9.2, 9.4, 9.6, 9.8, 9.9, 9.10, 9.12, 9.13, 9.15