Lecture 4 and 5
Controllability and Observability: Kalman decompositions
Spring 2013 - EE 194, Advanced Control (Prof. Khan)
January 30 (Wed.) and Feb. 04 (Mon.), 2013

These lecture notes are largely based on: John S. Bay, Fundamentals of Linear State Space Systems, McGraw-Hill Series in Electrical Engineering, 1998.
I. OBSERVABILITY OF DT LTI SYSTEMS
Consider the DT LTI dynamics again with the observation model. We are interested in deriving the necessary conditions for optimal estimation, so the additional noise terms can be removed. The system to be considered is the following:
\begin{align}
x_{k+1} &= A x_k + B u_k, \tag{1} \\
y_k &= C x_k, \tag{2}
\end{align}
and, as before, we consider the worst case of one observation per time-step k, i.e., p = 1.
Definition 1 (Observability). A DT LTI system is said to be observable in n time-steps when the initial condition, x_0, can be recovered from a sequence of observations, y_0, ..., y_{n-1}, and inputs, u_0, ..., u_{n-1}.
The observability problem is to find x_0 from a sequence of observations, y_0, ..., y_{k-1}, and inputs, u_0, ..., u_{k-1}. To this end, note that
\begin{align*}
y_0 &= C x_0, \\
y_1 &= C x_1 = C A x_0 + C B u_0, \\
y_2 &= C x_2 = C A^2 x_0 + C A B u_0 + C B u_1, \\
&\;\;\vdots \\
y_{k-1} &= C A^{k-1} x_0 + C A^{k-2} B u_0 + \ldots + C B u_{k-2}.
\end{align*}
In matrix form, the above is given by
\[
\begin{bmatrix} y_0 \\ y_1 \\ y_2 \\ \vdots \\ y_{k-1} \end{bmatrix}
=
\begin{bmatrix} C \\ CA \\ CA^2 \\ \vdots \\ CA^{k-1} \end{bmatrix} x_0
+ \Upsilon
\begin{bmatrix} u_0 \\ u_1 \\ u_2 \\ \vdots \\ u_{k-1} \end{bmatrix}.
\]
The above is a linear system of equations; subtracting the known input contribution, Υ times the stacked inputs, from the stacked observations and denoting the result by Ψ_{0:k-1}, it is compactly written as
\[
\Psi_{0:k-1} = \underbrace{\mathcal{O}_{0:k-1}}_{\in\,\mathbb{R}^{k\times n}} x_0, \tag{3}
\]
and hence, from the known inputs and known observations, the initial condition, x_0, can be recovered if and only if
\[
\operatorname{rank}(\mathcal{O}_{0:k-1}) = n. \tag{4}
\]
Using the same arguments as in controllability, we note that rank(O_{0:k-1}) < n for k ≤ n − 1, as the observability matrix then has at most n − 1 rows. Similarly, adding another observation after the nth observation does not help, by Cayley–Hamilton arguments, as
\[
\operatorname{rank}(\mathcal{O}_{0:n-1}) = \operatorname{rank}(\mathcal{O}_{0:n}) = \operatorname{rank}(\mathcal{O}_{0:n+1}) = \cdots. \tag{5}
\]
Hence, the observability matrix is defined as
\[
\mathcal{O} = \begin{bmatrix} C \\ CA \\ CA^2 \\ \vdots \\ CA^{n-1} \end{bmatrix}, \tag{6}
\]
without subscripts.
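As a quick numerical illustration, the rank test in Eq. (4) can be scripted directly. The following is a minimal Python/numpy sketch that builds O for a given pair (A, C) and checks observability; the 3-state system used here is a hypothetical example, not one from the lectures.

```python
import numpy as np

def obsv(A, C):
    """Observability matrix O = [C; CA; ...; CA^{n-1}] of Eq. (6)."""
    n = A.shape[0]
    rows = [C]
    for _ in range(n - 1):
        rows.append(rows[-1] @ A)   # next block row: (C A^{i-1}) A = C A^i
    return np.vstack(rows)

# Hypothetical 3-state system observed through its first state.
A = np.array([[ 0.,   1.,  0.],
              [ 0.,   0.,  1.],
              [-6., -11., -6.]])
C = np.array([[1., 0., 0.]])

O = obsv(A, C)
print(np.linalg.matrix_rank(O) == A.shape[0])   # True: (A, C) is observable
```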
Remark 1. In the literature, rank(C) = n is often referred to as the system being (A, B)–controllable. Similarly, rank(O) = n is often referred to as the system being (A, C)–observable.
Remark 2. Show that
\begin{align}
\operatorname{rank}(\mathcal{C}) = n &\Leftrightarrow \mathcal{C}\mathcal{C}^T \succ 0 \Leftrightarrow (\mathcal{C}\mathcal{C}^T)^{-1} \text{ exists}, \tag{7} \\
\operatorname{rank}(\mathcal{O}) = n &\Leftrightarrow \mathcal{O}^T\mathcal{O} \succ 0 \Leftrightarrow (\mathcal{O}^T\mathcal{O})^{-1} \text{ exists}. \tag{8}
\end{align}
Remark 3. Show that
\begin{align}
(A, C)\text{–observable} &\Leftrightarrow (A^T, C^T)\text{–controllable}, \tag{9} \\
(A, B)\text{–controllable} &\Leftrightarrow (A^T, B^T)\text{–observable}. \tag{10}
\end{align}
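The duality in Remark 3 is also easy to verify numerically: the observability matrix of (A, C) is exactly the transpose of the controllability matrix of (Aᵀ, Cᵀ). A minimal sketch, reusing the obsv helper and the matrices A, C from the previous snippet, with an analogous ctrb helper:

```python
import numpy as np

def ctrb(A, B):
    """Controllability matrix C = [B, AB, ..., A^{n-1} B]."""
    n = A.shape[0]
    cols = [B]
    for _ in range(n - 1):
        cols.append(A @ cols[-1])
    return np.hstack(cols)

# Duality: obsv(A, C) is ctrb(A^T, C^T) transposed, so the ranks agree.
print(np.allclose(obsv(A, C), ctrb(A.T, C.T).T))        # True
print(np.linalg.matrix_rank(obsv(A, C))
      == np.linalg.matrix_rank(ctrb(A.T, C.T)))         # True
```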
II. KALMAN DECOMPOSITIONS
Suppose it is known that a SISO (single-input single-output) system has rank(C) = n_c < n and rank(O) = n_o < n. In other words, the SISO system in question is neither controllable nor observable. We are interested in a transformation that rearranges the system modes (eigenvalues) into:
• modes that are both controllable and observable;
• modes that are neither controllable nor observable;
• modes that are controllable but not observable; and
• modes that are observable but not controllable.
Such a transformation is called a Kalman decomposition.
A. Decomposing controllable modes
First, consider separating only the controllable subspace from the rest of the state-space. To this end, consider a state transformation matrix, V, whose first n_c columns are a maximal set of linearly independent columns from the controllability matrix, C. The remaining n − n_c columns are chosen to be linearly independent from the first n_c columns (and linearly independent among themselves). The transformation operator (matrix) is n × n, defined as
\[
V = \begin{bmatrix} v_1 & \ldots & v_{n_c} \,\big|\, v_{n_c+1} & \ldots & v_n \end{bmatrix} \triangleq \begin{bmatrix} V_1 & V_2 \end{bmatrix}. \tag{11}
\]
It is straightforward to show that the transformed system (x_k = V x̄_k) is
\begin{align}
\bar{x}_{k+1} &= V^{-1} A V \bar{x}_k + V^{-1} B u_k, \tag{12} \\
y_k &= C V \bar{x}_k. \tag{13}
\end{align}
Partition V^{-1} as
\[
V^{-1} = \begin{bmatrix} \bar{V}_1 \\ \bar{V}_2 \end{bmatrix}, \tag{14}
\]
where V̄_1 is n_c × n and V̄_2 is (n − n_c) × n. Now note that
\[
I_n = V^{-1} V = \begin{bmatrix} \bar{V}_1 \\ \bar{V}_2 \end{bmatrix} \begin{bmatrix} V_1 & V_2 \end{bmatrix}
= \begin{bmatrix} \bar{V}_1 V_1 & \bar{V}_1 V_2 \\ \bar{V}_2 V_1 & \bar{V}_2 V_2 \end{bmatrix}
= \begin{bmatrix} I_{n_c} & 0 \\ 0 & I_{n-n_c} \end{bmatrix}. \tag{15}
\]
The above leads to the following (this is because all of the freedom in the linear independence of C is already captured in V_1):
\[
\bar{V}_2 V_1 = 0 \;\Rightarrow\; \mathcal{C} \text{ lies (column-wise) in the null space of } \bar{V}_2, \tag{16}
\]
further leading to
\[
\bar{V}_2 B = 0, \quad \text{and} \quad \bar{V}_2 A V_1 = 0. \tag{17}
\]
The transformed system in Eq. (12) is thus given by
\[
\bar{A} \triangleq V^{-1} A V
= \begin{bmatrix} \bar{V}_1 \\ \bar{V}_2 \end{bmatrix} A \begin{bmatrix} V_1 & V_2 \end{bmatrix}
= \begin{bmatrix} \bar{V}_1 A V_1 & \bar{V}_1 A V_2 \\ \bar{V}_2 A V_1 & \bar{V}_2 A V_2 \end{bmatrix}
\triangleq \begin{bmatrix} A_c & A_{c\bar{c}} \\ 0 & A_{\bar{c}} \end{bmatrix}, \tag{18}
\]
\[
\bar{B} \triangleq V^{-1} B
= \begin{bmatrix} \bar{V}_1 \\ \bar{V}_2 \end{bmatrix} B
= \begin{bmatrix} \bar{V}_1 B \\ \bar{V}_2 B \end{bmatrix}
\triangleq \begin{bmatrix} B_c \\ 0 \end{bmatrix}. \tag{19}
\]
Now recall that a similarity transformation does not change the rank of the controllability matrix; denote by C̄ the controllability matrix of the transformed system. Mathematically,
\begin{align*}
\operatorname{rank}(\bar{\mathcal{C}}) &= \operatorname{rank} \begin{bmatrix} V^{-1}B & (V^{-1}AV)V^{-1}B & \ldots & (V^{-1}AV)^{n-1}V^{-1}B \end{bmatrix} \\
&= \operatorname{rank} \begin{bmatrix} V^{-1}B & V^{-1}AB & \ldots & V^{-1}A^{n-1}B \end{bmatrix} \\
&= \operatorname{rank}\left( V^{-1} \begin{bmatrix} B & AB & \ldots & A^{n-1}B \end{bmatrix} \right) \\
&= \operatorname{rank} \begin{bmatrix} B & AB & \ldots & A^{n-1}B \end{bmatrix} \\
&= \operatorname{rank}(\mathcal{C}) = n_c.
\end{align*}
Furthermore,
\begin{align*}
\operatorname{rank}(\bar{\mathcal{C}}) &= \operatorname{rank} \begin{bmatrix} \bar{B} & \bar{A}\bar{B} & \ldots & \bar{A}^{n_c-1}\bar{B} & \ldots & \bar{A}^{n-1}\bar{B} \end{bmatrix} \\
&= \operatorname{rank} \begin{bmatrix} B_c & A_c B_c & \ldots & A_c^{n_c-1}B_c & \ldots & A_c^{n-1}B_c \\ 0 & 0 & \ldots & 0 & \ldots & 0 \end{bmatrix} \\
&= \operatorname{rank} \begin{bmatrix} B_c & A_c B_c & \ldots & A_c^{n_c-1}B_c \\ 0 & 0 & \ldots & 0 \end{bmatrix} \\
&= n_c,
\end{align*}
where, to go from the second equality to the third, we use the Cayley–Hamilton theorem. All of this shows that the subsystem (A_c, B_c) is controllable.
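The rank invariance under similarity is also easy to sanity-check numerically. A minimal sketch, reusing the hypothetical ctrb helper from the earlier snippet on a randomly generated system and transformation:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 1))
V = rng.standard_normal((4, 4))      # invertible with probability one
Vinv = np.linalg.inv(V)

r_original    = np.linalg.matrix_rank(ctrb(A, B))
r_transformed = np.linalg.matrix_rank(ctrb(Vinv @ A @ V, Vinv @ B))
print(r_original == r_transformed)   # True: similarity preserves rank(C)
```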
Conclusions: Using the transformation matrix, V, defined in Eq. (11), we have transformed the original system, x_{k+1} = Ax_k + Bu_k, into the following system:
\[
\bar{x}_{k+1} = \bar{A}\bar{x}_k + \bar{B}u_k, \tag{20}
\]
\[
\begin{bmatrix} x_c(k+1) \\ x_{\bar{c}}(k+1) \end{bmatrix}
= \begin{bmatrix} A_c & A_{c\bar{c}} \\ 0 & A_{\bar{c}} \end{bmatrix}
\begin{bmatrix} x_c(k) \\ x_{\bar{c}}(k) \end{bmatrix}
+ \begin{bmatrix} B_c \\ 0 \end{bmatrix} u_k, \tag{21}
\]
such that the transformed state-space is separated into a controllable subspace and an uncontrollable
subspace.
Consider the transformed system in Eq. (20) and partition the (transformed) state-vector, x̄(k), into an n_c × 1 component, x_c(k), and an (n − n_c) × 1 component, x_c̄(k). We get
\[
x_c(k+1) = A_c x_c(k) + A_{c\bar{c}} x_{\bar{c}}(k) + B_c u_k, \tag{22}
\]
\[
x_{\bar{c}}(k+1) = A_{\bar{c}} x_{\bar{c}}(k). \tag{23}
\]
Clearly, the lower subsystem, x_c̄(k), is not controllable. In the transformed coordinate space, x̄_k, we can only steer a portion, x_c(k), of the transformed states. The subspace of R^n that can be steered is denoted R(C), defined as the column space (range) of C, where R(C) is called the controllable subspace.
Finally, recall that the eigenvalues of similar matrices, A and V^{-1}AV, are the same. Recall the structure of V^{-1}AV from Eq. (18); since V^{-1}AV is block upper-triangular, the eigenvalues of A are given by the eigenvalues of A_c and A_c̄, called the controllable eigenvalues (modes) and uncontrollable eigenvalues (modes), respectively.
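This construction can also be carried out numerically. The sketch below is one possible implementation, not the notes' algorithm verbatim: instead of literal columns of C, it takes an orthonormal basis of the controllable subspace R(C) from the SVD (any basis of R(C) yields the structure of Eqs. (18)–(19), and the orthonormal choice keeps V well-conditioned).

```python
import numpy as np

def controllable_decomposition(A, B, tol=1e-9):
    """Transform (A, B) into the block form of Eqs. (18)-(19).
    Returns (V, Abar, Bbar, nc) with Abar = V^{-1} A V, Bbar = V^{-1} B."""
    n = A.shape[0]
    # Controllability matrix C = [B, AB, ..., A^{n-1} B].
    cols = [B]
    for _ in range(n - 1):
        cols.append(A @ cols[-1])
    Cmat = np.hstack(cols)
    # The first nc left singular vectors span R(C); the rest complete R^n.
    U, s, _ = np.linalg.svd(Cmat)
    nc = int(np.sum(s > tol * s[0]))
    V1, V2 = U[:, :nc], U[:, nc:]
    V = np.hstack([V1, V2])          # orthogonal, so V^{-1} = V^T
    return V, V.T @ A @ V, V.T @ B, nc
```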
Example 1. Consider the CT LTI system:
\[
\dot{x} = \begin{bmatrix} 1 & 1 & 2 \\ 5 & 3 & 6 \\ -5 & -1 & -4 \end{bmatrix} x + \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix} u,
\]
whose controllability matrix is
\[
\mathcal{C} = \begin{bmatrix} 1 & 2 & -4 \\ 0 & 5 & -5 \\ 0 & 5 & -5 \end{bmatrix},
\]
with rank(C) = 2 < 3. Define V as
\[
V = \begin{bmatrix} 1 & 2 & 0 \\ 0 & 5 & 0 \\ 0 & 5 & 1 \end{bmatrix},
\]
and note that the requirements for choosing V are satisfied. Finally, verify that the transformed system matrices are
\[
V^{-1}AV = \begin{bmatrix} 0 & 6 & 1 \\ 1 & -1 & 0 \\ 0 & 0 & 0 \end{bmatrix}, \qquad
V^{-1}B = \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix}.
\]
Verify that the top-left subsystem is controllable. From this structure, we see that the two-dimensional controllable subsystem is grouped into the first two state variables and that the third state variable has neither an input signal nor a coupling from the other subsystem. It is therefore not controllable.
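As a sanity check, the controllable_decomposition sketch from above can be run on the matrices of Example 1 as printed. Its basis differs from the hand-picked V, so the individual entries of V^{-1}AV come out differently, but the count n_c = 2 and the zero lower-left block are basis-independent:

```python
import numpy as np

# Matrices as printed in Example 1 (CT case; the decomposition
# acts on the pair (A, B) in exactly the same way).
A = np.array([[ 1.,  1.,  2.],
              [ 5.,  3.,  6.],
              [-5., -1., -4.]])
B = np.array([[1.], [0.], [0.]])

V, Abar, Bbar, nc = controllable_decomposition(A, B)
print(nc)                            # 2: two controllable modes
print(np.round(Abar[nc:, :nc], 9))   # ~0: uncontrollable block gets no coupling
print(np.round(Bbar[nc:], 9))        # ~0: and no input
```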
B. Decomposing observable modes
Now we can easily follow the same drill and come up with another transformation, x̄_k = W x_k, where W contains n_o linearly independent rows of O and the remaining rows are chosen to be linearly independent of the first n_o rows (and linearly independent among themselves). Finally, the transformed system,
\[
\bar{x}_{k+1} = \bar{A}\bar{x}_k + \bar{B}u_k
= \begin{bmatrix} A_o & 0 \\ A_{\bar{o}o} & A_{\bar{o}} \end{bmatrix} \bar{x}_k
+ \begin{bmatrix} B_o \\ B_{\bar{o}} \end{bmatrix} u_k, \tag{24}
\]
\[
y_k = \begin{bmatrix} C_o & 0 \end{bmatrix} \bar{x}_k, \tag{25}
\]
is separated into an observable subspace and an unobservable subspace. In fact, we can now
write this result as a theorem.
Theorem 1. Consider the DT LTI system,
\[
x_{k+1} = A x_k, \qquad y_k = C x_k,
\]
where A ∈ R^{n×n} and C ∈ R^{p×n}. Let O be the observability matrix as defined in Eq. (6). If rank(O) = n_o < n, then there exists a non-singular matrix, W ∈ R^{n×n}, such that
\[
W^{-1} A W = \begin{bmatrix} A_o & 0 \\ A_{\bar{o}o} & A_{\bar{o}} \end{bmatrix}, \qquad
C W = \begin{bmatrix} C_o & 0 \end{bmatrix},
\]
where A_o ∈ R^{n_o×n_o}, C_o ∈ R^{p×n_o}, and the pair (A_o, C_o) is observable.
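Numerically, a W for Theorem 1 can be obtained from the duality of Remark 3: controllably decompose the dual pair (Aᵀ, Cᵀ) and transpose. A minimal sketch, reusing controllable_decomposition from the earlier snippet:

```python
import numpy as np

def observable_decomposition(A, C, tol=1e-9):
    """W for Theorem 1 via duality: controllably decompose (A^T, C^T),
    then transpose. Returns (W, W^{-1} A W, C W, no)."""
    V, _, _, no = controllable_decomposition(A.T, C.T, tol)
    W = np.linalg.inv(V).T    # transposing V^{-1} A^T V yields W^{-1} A W
    Winv = np.linalg.inv(W)
    return W, Winv @ A @ W, C @ W, no
```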
By carefully applying a sequence of such controllability and observability transformations, the complete Kalman decomposition results in a system of the following form:
\[
\begin{bmatrix} x_{co}(k+1) \\ x_{c\bar{o}}(k+1) \\ x_{\bar{c}o}(k+1) \\ x_{\bar{c}\bar{o}}(k+1) \end{bmatrix}
=
\begin{bmatrix}
A_{co} & 0 & A_{13} & 0 \\
A_{21} & A_{c\bar{o}} & A_{23} & A_{24} \\
0 & 0 & A_{\bar{c}o} & 0 \\
0 & 0 & A_{43} & A_{\bar{c}\bar{o}}
\end{bmatrix}
\begin{bmatrix} x_{co}(k) \\ x_{c\bar{o}}(k) \\ x_{\bar{c}o}(k) \\ x_{\bar{c}\bar{o}}(k) \end{bmatrix}
+
\begin{bmatrix} B_{co} \\ B_{c\bar{o}} \\ 0 \\ 0 \end{bmatrix} u(k), \tag{26}
\]
\[
y(k) = \begin{bmatrix} C_{co} & 0 & C_{\bar{c}o} & 0 \end{bmatrix}
\begin{bmatrix} x_{co}(k) \\ x_{c\bar{o}}(k) \\ x_{\bar{c}o}(k) \\ x_{\bar{c}\bar{o}}(k) \end{bmatrix}. \tag{27}
\]
C. Kalman decomposition theorem
Theorem 2. Given a DT (or CT) LTI system that is neither controllable nor observable, there exists a non-singular transformation, x = T x̃, such that the system dynamics can be transformed into the structure given by Eqs. (26)–(27).
Proof: The proof is left as an exercise and constitutes Homework 1.
APPENDIX A
SIMILARITY TRANSFORMS
Consider any invertible matrix, V ∈ R^{n×n}, and a state transformation, x_k = V z_k. The transformed DT-LTI system is given by
\begin{align*}
x_{k+1} = A x_k + B u_k \;&\Rightarrow\; V z_{k+1} = A V z_k + B u_k \\
&\Rightarrow\; z_{k+1} = \underbrace{V^{-1} A V}_{\triangleq\, \bar{A}} z_k + \underbrace{V^{-1} B}_{\triangleq\, \bar{B}} u_k, \qquad
y_k = \underbrace{C V}_{\triangleq\, \bar{C}} z_k.
\end{align*}
The new (transformed) system is now defined by the system matrices, Ā, B̄, and C̄. The two systems, x_k and z_k, are called similar systems.
Remark 4. Each of the following can be proved.
• Given any DT-LTI system, there are infinitely many possible transformations.
• Similar systems have the same eigenvalues; the eigenvectors of the corresponding system matrices are different.
• Similar systems have the same transfer functions; in other words, the transfer function of the DT-LTI dynamics is unique, with infinitely many possible state-space realizations.
• The ranks of the controllability and observability matrices are invariant under a similarity transformation (see the sketch below).
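A minimal numerical sketch of the second and third bullets, on a hypothetical random system (the transfer function C(zI − A)^{-1}B is compared at an arbitrary test point; eigenvalue comparison after sorting may be sensitive to roundoff for near-coincident eigenvalues):

```python
import numpy as np

rng = np.random.default_rng(7)
n = 3
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, 1))
C = rng.standard_normal((1, n))
V = rng.standard_normal((n, n))            # invertible with probability one
Ab = np.linalg.inv(V) @ A @ V
Bb = np.linalg.inv(V) @ B
Cb = C @ V

# Same eigenvalues (sorted for comparison)...
print(np.allclose(np.sort_complex(np.linalg.eigvals(A)),
                  np.sort_complex(np.linalg.eigvals(Ab))))
# ...and the same transfer function C (zI - A)^{-1} B at a test point z.
z = 0.5 + 0.3j
H  = C  @ np.linalg.solve(z * np.eye(n) - A,  B)
Hb = Cb @ np.linalg.solve(z * np.eye(n) - Ab, Bb)
print(np.allclose(H, Hb))
```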
APPENDIX B
TRIVIA: NULL SPACES
Consider a set of m vectors, x_i ∈ R^n, i = 1, ..., m.
Definition 2 (Span). The span of m vectors, x_i ∈ R^n, i = 1, ..., m, is defined as the set of all vectors that can be written as a linear combination of the x_i's, i.e.,
\[
\operatorname{span}(x_1, \ldots, x_m) = \left\{ y \;\middle|\; y = \sum_{i=1}^{m} \alpha_i x_i \right\}, \tag{28}
\]
for α_i ∈ R.
Definition 3 (Null space). The null space, N(A) ⊆ R^n, of an n × n matrix, A, is defined as the set of all vectors, y, such that
\[
A y = 0. \tag{29}
\]
Consider a real-valued matrix, A ∈ R^{n×n}, and let x_1 ≠ 0 and x_2 ≠ 0 be the maximal set of linearly independent vectors (in other words, there does not exist any other vector, x_3, that is linearly independent of x_1 and x_2 such that Ax_3 = 0) for which
\[
A x_1 = 0, \qquad A x_2 = 0, \tag{30}
\]
which leads to
\[
A \sum_{i=1}^{2} \alpha_i x_i = \sum_{i=1}^{2} \alpha_i A x_i = 0. \tag{31}
\]
Following the definitions of span and null space, we can say that span(x_1, x_2) lies in the null space of A, i.e.,
\[
N(A) = \operatorname{span}(x_1, x_2). \tag{32}
\]
In this particular case, the dimension of N(A) is 2 and rank(A) = n − 2. (This further leads to an alternate definition of rank as n − n_c, where n_c ≜ dim(N(A)).)
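Numerically, a basis for N(A) can be read off the SVD: the right singular vectors associated with (numerically) zero singular values. A minimal sketch with a hypothetical rank-1 matrix:

```python
import numpy as np

def null_space_basis(A, tol=1e-9):
    """Columns form an orthonormal basis of N(A)."""
    _, s, Vt = np.linalg.svd(A)
    r = int(np.sum(s > tol * s[0]))        # numerical rank
    return Vt[r:].T

A = np.array([[1., 2., 3.],
              [2., 4., 6.],                # = 2 x row 1
              [1., 2., 3.]])               # = row 1  ->  rank(A) = 1
N = null_space_basis(A)
print(N.shape[1])                          # dim N(A) = 3 - 1 = 2
print(np.allclose(A @ N, 0.0))             # True
```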
Let us recall the uncontrollable case with rank(C) = n_c. Clearly, C has n_c linearly independent (l.i.) columns (vectors in R^n); let the matrix V_1 ∈ R^{n×n_c} consist of these n_c l.i. columns. On the other hand, let the matrix Ṽ_1 ∈ R^{n×(n−n_c)} consist of the remaining columns of C and note that
every column of Ṽ_1 ∈ span(columns of V_1),
or,
span(columns of V_1) contains every column of Ṽ_1.
The above leads to the following:
Remark 5. If there exists a matrix, V̄_2, such that V̄_2 V_1 = 0, then
every column of V_1 ∈ N(V̄_2),
⇒ span(columns of V_1) ⊆ N(V̄_2),
⇒ every column of Ṽ_1 ∈ N(V̄_2).
A. Kalman decomposition, revisited
Construct a transformation matrix, V ∈ R^{n×n}, whose first n × n_c block is V_1. Choose the remaining n − n_c columns to be linearly independent from the first n_c columns (and linearly independent among themselves); call the second n × (n − n_c) block V_2:
\[
V \triangleq \begin{bmatrix} V_1 & V_2 \end{bmatrix}, \qquad
V^{-1} \triangleq \begin{bmatrix} \bar{V}_1 \\ \bar{V}_2 \end{bmatrix},
\]
\[
I_n = V^{-1} V = \begin{bmatrix} \bar{V}_1 \\ \bar{V}_2 \end{bmatrix} \begin{bmatrix} V_1 & V_2 \end{bmatrix}
= \begin{bmatrix} \bar{V}_1 V_1 & \bar{V}_1 V_2 \\ \bar{V}_2 V_1 & \bar{V}_2 V_2 \end{bmatrix}
= \begin{bmatrix} I_{n_c} & 0 \\ 0 & I_{n-n_c} \end{bmatrix}.
\]
From the above, note that V̄_2 V_1 = 0. Now consider the transformed system matrices, Ā and B̄:
\[
\bar{A} \triangleq V^{-1} A V
= \begin{bmatrix} \bar{V}_1 \\ \bar{V}_2 \end{bmatrix} A \begin{bmatrix} V_1 & V_2 \end{bmatrix}
= \begin{bmatrix} \bar{V}_1 A V_1 & \bar{V}_1 A V_2 \\ \bar{V}_2 A V_1 & \bar{V}_2 A V_2 \end{bmatrix},
\]
\[
\bar{B} \triangleq V^{-1} B
= \begin{bmatrix} \bar{V}_1 \\ \bar{V}_2 \end{bmatrix} B
= \begin{bmatrix} \bar{V}_1 B \\ \bar{V}_2 B \end{bmatrix}.
\]
Note that
every column of AV_1 ∈ span(columns of V_1) ⊆ N(V̄_2), and B ∈ N(V̄_2),
so that V̄_2 A V_1 = 0 and V̄_2 B = 0, which is exactly the structure used in Eqs. (18)–(19).