Solutions Manual
ESSENTIALS OF
ROBUST CONTROL
Kemin Zhou
January 9, 1998
Preface
This solutions manual contains two parts. The first part contains some intuitive derivations of H2 and H∞ control. The derivations given here are not strictly rigorous, but I feel that they are helpful (at least to me) in understanding H2 and H∞ control theory. The second part contains the solutions to problems in the book. Most problems are solved in detail, but not all of them. It should also be noted that many problems do not have unique solutions, so the manual should only be used as a reference. It is also possible that there are errors in the manual. I would very much appreciate your comments and suggestions.
Kemin Zhou
Contents

Preface

I  Understanding H2/LQG/H∞ Control

1  Understanding H2/LQG Control
   1.1  H2 and LQG Problems
   1.2  A New H2 Formulation
   1.3  Traditional Stochastic LQG Formulation

2  Understanding H∞ Control
   2.1  Problem Formulation and Solutions
   2.2  An Intuitive Derivation

II  Solutions Manual

1  Introduction
2  Linear Algebra
3  Linear Dynamical Systems
4  H2 and H∞ Spaces
5  Internal Stability
6  Performance Specifications and Limitations
7  Balanced Model Reduction
8  Model Uncertainty and Robustness
9  Linear Fractional Transformation
10  Structured Singular Value
11  Controller Parameterization
12  Riccati Equations
13  H2 Optimal Control
14  H∞ Control
15  Controller Reduction
16  H∞ Loop Shaping
17  Gap Metric and ν-Gap Metric
Part I

Understanding H2/LQG/H∞ Control
Chapter 1

Understanding H2/LQG Control

We present a natural and intuitive approach to H2/LQG control theory from the state feedback and state estimation point of view.
1.1 H2 and LQG Problems
Consider the following dynamical system

    \dot{x} = Ax + B_1 w + B_2 u, \qquad x(0) = 0        (1.1)
    z = C_1 x + D_{12} u                                 (1.2)
    y = C_2 x + D_{21} w                                 (1.3)

We shall make the following assumptions:

(i) (A, B_2) is stabilizable and (C_2, A) is detectable;

(ii) R_1 = D_{12}^* D_{12} > 0 and R_2 = D_{21} D_{21}^* > 0;

(iii) \begin{bmatrix} A - j\omega I & B_2 \\ C_1 & D_{12} \end{bmatrix} has full column rank for all \omega;

(iv) \begin{bmatrix} A - j\omega I & B_1 \\ C_2 & D_{21} \end{bmatrix} has full row rank for all \omega.

Let T_{zw} denote the transfer matrix from w to z.

H2 Control Problem: find a control law

    u = K(s) y

that stabilizes the closed-loop system and minimizes \|T_{zw}\|_2, where

    \|T_{zw}\|_2 := \sqrt{ \frac{1}{2\pi} \int_{-\infty}^{\infty} \operatorname{trace}\,[T_{zw}^*(j\omega)\, T_{zw}(j\omega)]\, d\omega }.
LQG Control Problem: Assume w(t) is a zero-mean, unit-variance, white Gaussian noise:

    E\{w(t)\} = 0, \qquad E\{w(t) w^*(\tau)\} = I\delta(t - \tau).

Find a control law

    u = K(s) y

that stabilizes the closed-loop system and minimizes

    J = E\left\{ \lim_{T\to\infty} \frac{1}{T} \int_0^T \|z\|^2\, dt \right\}.
Define

    A_x = A - B_2 R_1^{-1} D_{12}^* C_1, \qquad A_y = A - B_1 D_{21}^* R_2^{-1} C_2.

Then assumptions (iii) and (iv) guarantee that the following algebraic Riccati equations have, respectively, stabilizing solutions X_2 \ge 0 and Y_2 \ge 0:

    X_2 A_x + A_x^* X_2 - X_2 B_2 R_1^{-1} B_2^* X_2 + C_1^* (I - D_{12} R_1^{-1} D_{12}^*) C_1 = 0
    Y_2 A_y^* + A_y Y_2 - Y_2 C_2^* R_2^{-1} C_2 Y_2 + B_1 (I - D_{21}^* R_2^{-1} D_{21}) B_1^* = 0

Define

    F_2 := -R_1^{-1}(D_{12}^* C_1 + B_2^* X_2), \qquad L_2 := -(B_1 D_{21}^* + Y_2 C_2^*) R_2^{-1}.

It is well known that the H2 and LQG problems are equivalent and the optimal controller is given by

    K_2(s) := \left[ \begin{array}{c|c} A + B_2 F_2 + L_2 C_2 & -L_2 \\ \hline F_2 & 0 \end{array} \right]

and

    \min J = \min \|T_{zw}\|_2^2 = \operatorname{trace}(B_1^* X_2 B_1) + \operatorname{trace}(F_2 Y_2 F_2^* R_1).
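These formulas can be exercised directly with SciPy, which solves AREs with cross terms through the `s` argument of `solve_continuous_are`. The plant below is an illustrative example of mine satisfying assumptions (i)-(iv), not a system taken from the book:

```python
# Hedged sketch: computing X2, Y2, F2, L2 and the optimal H2/LQG cost.
# Plant data are an arbitrary example, not from the text.
import numpy as np
from scipy.linalg import solve_continuous_are

A   = np.array([[0.0, 1.0], [-1.0, -1.0]])
B1  = np.array([[1.0, 0.0], [1.0, 0.0]])
B2  = np.array([[0.0], [1.0]])
C1  = np.array([[1.0, 0.0], [0.0, 0.0]])
D12 = np.array([[0.0], [1.0]])
C2  = np.array([[1.0, 0.0]])
D21 = np.array([[0.0, 1.0]])

R1 = D12.T @ D12                 # R1 = D12* D12 > 0
R2 = D21 @ D21.T                 # R2 = D21 D21* > 0

# X2 equation, written as a standard ARE with cross term S = C1* D12
X2 = solve_continuous_are(A, B2, C1.T @ C1, R1, s=C1.T @ D12)
# Y2 equation is the dual ARE with cross term S = B1 D21*
Y2 = solve_continuous_are(A.T, C2.T, B1 @ B1.T, R2, s=B1 @ D21.T)

F2 = -np.linalg.solve(R1, D12.T @ C1 + B2.T @ X2)
L2 = -(B1 @ D21.T + Y2 @ C2.T) @ np.linalg.inv(R2)

# Optimal cost: trace(B1* X2 B1) + trace(F2 Y2 F2* R1)
cost = np.trace(B1.T @ X2 @ B1) + np.trace(F2 @ Y2 @ F2.T @ R1)
print("eig(A + B2 F2):", np.linalg.eigvals(A + B2 @ F2))
print("eig(A + L2 C2):", np.linalg.eigvals(A + L2 @ C2))
print("min ||Tzw||_2^2 =", cost)
```

Both A + B_2 F_2 and A + L_2 C_2 come out stable, as the stabilizing ARE solutions guarantee.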
1.2 A New H2 Formulation
In this section, we shall look at the H2 problem from a time domain point of view,
which will lead to a simple proof of the result. The following lemma gives a time
domain characterization of the H2 norm of a stable transfer matrix.
Lemma 1.1 Consider the stable dynamical system

    \dot{x} = Ax + Bw
    z = Cx

Let T_{zw} be the transfer matrix from w to z. Let x(0) = 0 and w(t) = w_0 \delta(t) with a random direction w_0 such that

    E\{w_0\} = 0, \qquad E\{w_0 w_0^*\} = I.

Then

    \|T_{zw}\|_2^2 = E \int_0^\infty \|z(t)\|^2\, dt.
Proof. Let the impulse response of the system be denoted by g(t) = C e^{At} B. It is well known, by Parseval's theorem, that

    \|T_{zw}\|_2^2 = \int_0^\infty \operatorname{trace}\,[g^*(t) g(t)]\, dt = \operatorname{trace}(B^* Q B)

where

    Q = \int_0^\infty e^{A^* t} C^* C e^{At}\, dt \ge 0

is the solution of the following Lyapunov equation:

    A^* Q + Q A + C^* C = 0.

Next, note that

    z(t) = C e^{At} B w_0

and

    E \int_0^\infty \|z(t)\|^2\, dt = E \int_0^\infty w_0^* B^* e^{A^* t} C^* C e^{At} B w_0\, dt
        = \int_0^\infty \operatorname{trace}\left[ B^* e^{A^* t} C^* C e^{At} B\, E\{w_0 w_0^*\} \right] dt
        = \int_0^\infty \operatorname{trace}\left[ B^* e^{A^* t} C^* C e^{At} B \right] dt = \operatorname{trace}(B^* Q B).

This completes the proof. □
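The two characterizations in Lemma 1.1 are easy to compare numerically: the Lyapunov-equation value trace(B*QB) should agree with a quadrature of the impulse-response energy. The stable system below is an arbitrary example of mine, not one from the text:

```python
# Sketch: ||Tzw||_2^2 = trace(B* Q B) with A* Q + Q A + C* C = 0, checked
# against direct quadrature of int_0^inf trace(g*(t) g(t)) dt, g = C e^{At} B.
# A, B, C are an arbitrary stable example, not from the text.
import numpy as np
from scipy.linalg import expm, solve_continuous_lyapunov

A = np.array([[-1.0, 2.0], [0.0, -3.0]])
B = np.array([[1.0], [1.0]])
C = np.array([[1.0, 0.0]])

# Observability-Gramian route: A^T Q + Q A = -C^T C
Q = solve_continuous_lyapunov(A.T, -C.T @ C)
h2_sq = float(np.trace(B.T @ Q @ B))

# Impulse-response route via trapezoidal quadrature on [0, T]
dt, T = 0.005, 40.0
Phi = expm(A * dt)                 # one-step transition matrix e^{A dt}
M = np.eye(2)
g2 = []
for _ in range(int(T / dt) + 1):
    g = C @ M @ B                  # g(t) = C e^{At} B
    g2.append(float(np.trace(g.T @ g)))
    M = Phi @ M
h2_sq_quad = dt * (sum(g2) - 0.5 * (g2[0] + g2[-1]))

print(h2_sq, h2_sq_quad)           # the two values agree closely
```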
In view of the above lemma, the H2 control problem can be regarded as the problem of finding a controller K(s) for the system described in equations (1.1)-(1.3), with w = w_0 \delta(t) and x(0) = 0, that minimizes

    J_1 := E \int_0^\infty \|z\|^2\, dt.
Now we are ready to consider the output feedback H2 problem. We shall need the
following fact.
Lemma 1.2 Suppose K(s) is strictly proper. Then

    x(0^+) = B_1 w_0.

Proof. Let K(s) be described by

    \dot{\xi} = \hat{A}\xi + \hat{B} y, \qquad u = \hat{C}\xi.

Then the closed-loop system becomes

    \begin{bmatrix} \dot{x} \\ \dot{\xi} \end{bmatrix}
      = \begin{bmatrix} A & B_2 \hat{C} \\ \hat{B} C_2 & \hat{A} \end{bmatrix}
        \begin{bmatrix} x \\ \xi \end{bmatrix}
      + \begin{bmatrix} B_1 \\ \hat{B} D_{21} \end{bmatrix} w =: \bar{A}\bar{x} + \bar{B} w

and

    \bar{x}(t) = e^{\bar{A} t} \bar{B} w_0\, 1(t)

which gives x(0^+) = B_1 w_0. □
Suppose that there exists an output feedback controller such that the closed-loop system is stable. Then x(\infty) = 0, so the term d/dt(x^* X_2 x) added below integrates out. Note that

    J_1 = E \int_0^\infty \|z(t)\|^2\, dt
        = E \int_0^\infty \left( \|z\|^2 + \frac{d}{dt}\, x^*(t) X_2 x(t) \right) dt
        = E \int_0^\infty \left( \|z\|^2 + 2 x^* X_2 \dot{x} \right) dt
        = E \int_0^\infty \left( \|C_1 x + D_{12} u\|^2 + 2 x^* X_2 (Ax + B_1 w + B_2 u) \right) dt
        = E \int_0^\infty \left( (u - F_2 x)^* R_1 (u - F_2 x) + 2 x^*(t) X_2 B_1 w(t) \right) dt
        = E \int_0^\infty \left( (u - F_2 x)^* R_1 (u - F_2 x) + 2 x^*(t) X_2 B_1 w_0 \delta(t) \right) dt
        = E \int_0^\infty (u - F_2 x)^* R_1 (u - F_2 x)\, dt + E\{ x^*(0^+) X_2 B_1 w_0 \}
        = E \int_0^\infty (u - F_2 x)^* R_1 (u - F_2 x)\, dt + E\{ w_0^* B_1^* X_2 B_1 w_0 \}
        = E \int_0^\infty (u - F_2 x)^* R_1 (u - F_2 x)\, dt + \operatorname{trace}(B_1^* X_2 B_1)

where the fourth equality follows by completing the square using the X_2 Riccati equation, and the second-to-last equality uses Lemma 1.2.
Obviously, an optimal control law would be

    u = F_2 x

if the full state were available for feedback. Since it is not, we have to implement the control law using the estimated state:

    u = F_2 \hat{x}                                      (1.4)

where \hat{x} is the estimate of x. A standard observer can be constructed from the system equations (1.1) and (1.3) as

    \dot{\hat{x}} = A\hat{x} + B_2 u + L(C_2 \hat{x} - y)        (1.5)

where L is the observer gain, to be determined such that A + LC_2 is stable and J_1 is minimized.
Let

    e := x - \hat{x}.

Then

    \dot{e} = (A + LC_2) e + (B_1 + L D_{21}) w =: A_L e + B_L w
    u - F_2 x = -F_2 e
    e(t) = e^{A_L t} B_L w_0

and

    J_2 := E \int_0^\infty (u - F_2 x)^* R_1 (u - F_2 x)\, dt
         = E \int_0^\infty e^* F_2^* R_1 F_2 e\, dt
         = E \int_0^\infty w_0^* B_L^* e^{A_L^* t} F_2^* R_1 F_2 e^{A_L t} B_L w_0\, dt
         = \int_0^\infty \operatorname{trace}\left[ F_2^* R_1 F_2\, e^{A_L t} B_L E\{w_0 w_0^*\} B_L^* e^{A_L^* t} \right] dt
         = \int_0^\infty \operatorname{trace}\left[ F_2^* R_1 F_2\, e^{A_L t} B_L B_L^* e^{A_L^* t} \right] dt
         = \operatorname{trace}\{ F_2^* R_1 F_2 Y \}

where

    Y = \int_0^\infty e^{(A + LC_2)t} (B_1 + L D_{21})(B_1 + L D_{21})^* e^{(A + LC_2)^* t}\, dt

or

    (A + LC_2) Y + Y (A + LC_2)^* + (B_1 + L D_{21})(B_1 + L D_{21})^* = 0.

Subtracting the equation for Y_2 from the above equation gives

    (A + LC_2)(Y - Y_2) + (Y - Y_2)(A + LC_2)^* + (L - L_2) R_2 (L - L_2)^* = 0.

It is then clear that Y \ge Y_2, with equality if L = L_2. Hence J_2 is minimized by L = L_2.

In summary, the optimal H2 controller can be written as

    \dot{\hat{x}} = A\hat{x} + B_2 u + L_2 (C_2 \hat{x} - y)
    u = F_2 \hat{x}

i.e.,

    K_2(s) := \left[ \begin{array}{c|c} A + B_2 F_2 + L_2 C_2 & -L_2 \\ \hline F_2 & 0 \end{array} \right]

and

    \min \|T_{zw}\|_2 = \sqrt{ \operatorname{trace}(B_1^* X_2 B_1) + \operatorname{trace}(F_2 Y_2 F_2^* R_1) }.
This is exactly the H2 controller formula we are familiar with.
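The comparison Y ≥ Y_2 above can be exercised numerically: for any stabilizing observer gain L, the Lyapunov solution Y is no smaller than Y_2, with equality at L = L_2. The plant data below are an illustrative choice of mine, not from the book:

```python
# Sketch: for any stabilizing observer gain L, the solution Y of
#   (A + L C2) Y + Y (A + L C2)* + (B1 + L D21)(B1 + L D21)* = 0
# satisfies Y >= Y2, with equality at L = L2.  Example data are mine.
import numpy as np
from scipy.linalg import solve_continuous_are, solve_continuous_lyapunov

A   = np.array([[0.0, 1.0], [-2.0, -1.0]])
B1  = np.array([[1.0, 0.0], [0.5, 0.0]])
C2  = np.array([[1.0, 0.0]])
D21 = np.array([[0.0, 1.0]])
R2  = D21 @ D21.T

Y2 = solve_continuous_are(A.T, C2.T, B1 @ B1.T, R2, s=B1 @ D21.T)
L2 = -(B1 @ D21.T + Y2 @ C2.T) @ np.linalg.inv(R2)

def gramian(L):
    AL = A + L @ C2
    BL = B1 + L @ D21
    # AL Y + Y AL* = -BL BL*
    return solve_continuous_lyapunov(AL, -BL @ BL.T)

Y_opt = gramian(L2)                              # recovers Y2
Y_bad = gramian(L2 + np.array([[0.3], [-0.2]]))  # another stabilizing gain
print(np.linalg.eigvalsh(Y_opt - Y2))            # essentially zero
print(np.linalg.eigvalsh(Y_bad - Y2))            # nonnegative
```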
1.3 Traditional Stochastic LQG Formulation
The traditional stochastic LQG formulation assumes that w(t) is a zero mean, unit
variance, white Gaussian stochastic process. That is,
    E\{w(t)\} = 0, \qquad E\{w(t) w^*(\tau)\} = I\delta(t - \tau).

In this case, we have the following relationship.

Lemma 1.3 Consider the stable dynamical system

    \dot{x} = Ax + Bw, \qquad x(0) = 0
    z = Cx

Then

    \|T_{zw}\|_2^2 = E\left\{ \lim_{T\to\infty} \frac{1}{T} \int_0^T \|z(t)\|^2\, dt \right\}.
Proof. Note that

    z(t) = \int_0^t C e^{A(t-\tau)} B w(\tau)\, d\tau

and

    E\left\{ \lim_{T\to\infty} \frac{1}{T} \int_0^T \|z(t)\|^2\, dt \right\}
      = E\left\{ \lim_{T\to\infty} \frac{1}{T} \int_0^T \int_0^t \int_0^t w^*(\tau) B^* e^{A^*(t-\tau)} C^* C e^{A(t-s)} B w(s)\, d\tau\, ds\, dt \right\}
      = \lim_{T\to\infty} \frac{1}{T} \int_0^T \int_0^t \int_0^t \operatorname{trace}\left[ B^* e^{A^*(t-\tau)} C^* C e^{A(t-s)} B\, E\{w(s) w^*(\tau)\} \right] d\tau\, ds\, dt
      = \lim_{T\to\infty} \frac{1}{T} \int_0^T \int_0^t \int_0^t \operatorname{trace}\left[ B^* e^{A^*(t-\tau)} C^* C e^{A(t-s)} B \right] \delta(\tau - s)\, d\tau\, ds\, dt
      = \lim_{T\to\infty} \frac{1}{T} \int_0^T \int_0^t \operatorname{trace}\left[ B^* e^{A^*(t-s)} C^* C e^{A(t-s)} B \right] ds\, dt
      = \operatorname{trace}\left( B^* \left\{ \lim_{T\to\infty} \frac{1}{T} \int_0^T \int_0^t e^{A^* s} C^* C e^{A s}\, ds\, dt \right\} B \right)
      = \operatorname{trace}(B^* Q B) = \|T_{zw}\|_2^2. □
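The last step of the proof is a Cesàro limit: the inner integral converges to Q, so its running time average does too. A small numeric check, with an arbitrary stable example of mine:

```python
# Sketch of the final step of Lemma 1.3's proof:
#   (1/T) int_0^T trace(B* Q_t B) dt  ->  trace(B* Q B)   as T -> inf,
# where Q_t = int_0^t e^{A*s} C*C e^{As} ds.  Example data are mine.
import numpy as np
from scipy.linalg import expm, solve_continuous_lyapunov

A = np.array([[-1.0, 0.0], [1.0, -2.0]])
B = np.array([[1.0], [0.0]])
C = np.array([[0.0, 1.0]])

Q = solve_continuous_lyapunov(A.T, -C.T @ C)
exact = float(np.trace(B.T @ Q @ B))

dt, T = 0.01, 60.0
Phi = expm(A * dt)
M = np.eye(2)
Qt = np.zeros((2, 2))
outer = 0.0
for _ in range(int(T / dt)):
    Qt += (M.T @ C.T @ C @ M) * dt            # running Q_t
    outer += float(np.trace(B.T @ Qt @ B)) * dt
    M = Phi @ M
avg = outer / T                                # Cesaro average up to time T

print(exact, avg)  # avg approaches exact as T grows
```

For this example g(t) = e^{-t} - e^{-2t}, so the exact value is 1/2 - 2/3 + 1/4 = 1/12.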
Now consider the LQG control problem. Suppose that there exists an output feedback controller such that the closed-loop system is stable. Then x(\infty) = 0.

Lemma 1.4 Suppose that K(s) is a strictly proper stabilizing controller. Then

    E\{x(t) w^*(t)\} = B_1/2.
Proof. Let K(s) be described by

    \dot{\xi} = \hat{A}\xi + \hat{B} y, \qquad u = \hat{C}\xi.

Then the closed-loop system becomes

    \begin{bmatrix} \dot{x} \\ \dot{\xi} \end{bmatrix}
      = \begin{bmatrix} A & B_2 \hat{C} \\ \hat{B} C_2 & \hat{A} \end{bmatrix}
        \begin{bmatrix} x \\ \xi \end{bmatrix}
      + \begin{bmatrix} B_1 \\ \hat{B} D_{21} \end{bmatrix} w =: \bar{A}\bar{x} + \bar{B} w

so that

    \bar{x}(t) = \int_0^t e^{\bar{A}(t-\tau)} \bar{B} w(\tau)\, d\tau.

Hence

    E\{\bar{x}(t) w^*(t)\} = E \int_0^t e^{\bar{A}(t-\tau)} \bar{B} w(\tau) w^*(t)\, d\tau
        = \int_0^t e^{\bar{A}(t-\tau)} \bar{B}\, E\{w(\tau) w^*(t)\}\, d\tau
        = \int_0^t e^{\bar{A}(t-\tau)} \bar{B}\, \delta(t - \tau)\, d\tau
        = \int_0^t e^{\bar{A}\sigma} \bar{B}\, \delta(\sigma)\, d\sigma = \bar{B}/2,

which gives E\{x(t) w^*(t)\} = B_1/2. □
Now note that

    J := E\left\{ \lim_{T\to\infty} \frac{1}{T} \int_0^T \|z(t)\|^2\, dt \right\}
       = E\left\{ \lim_{T\to\infty} \frac{1}{T} \int_0^T \left( \|z\|^2 + \frac{d}{dt}\, x^*(t) X_2 x(t) \right) dt \right\}
       = E\left\{ \lim_{T\to\infty} \frac{1}{T} \int_0^T \left( \|z\|^2 + 2 x^* X_2 \dot{x} \right) dt \right\}
       = E\left\{ \lim_{T\to\infty} \frac{1}{T} \int_0^T \left( \|C_1 x + D_{12} u\|^2 + 2 x^* X_2 (Ax + B_1 w + B_2 u) \right) dt \right\}
       = E\left\{ \lim_{T\to\infty} \frac{1}{T} \int_0^T \left( (u - F_2 x)^* R_1 (u - F_2 x) + 2 x^*(t) X_2 B_1 w(t) \right) dt \right\}
       = E\left\{ \lim_{T\to\infty} \frac{1}{T} \int_0^T (u - F_2 x)^* R_1 (u - F_2 x)\, dt \right\}
           + \lim_{T\to\infty} \frac{1}{T} \int_0^T \operatorname{trace}\left\{ 2 X_2 B_1 E\{w(t) x^*(t)\} \right\} dt
       = E\left\{ \lim_{T\to\infty} \frac{1}{T} \int_0^T (u - F_2 x)^* R_1 (u - F_2 x)\, dt \right\} + \operatorname{trace}(B_1^* X_2 B_1)

where the last equality follows from Lemma 1.4.
Again, an optimal control law would be

    u = F_2 x

if the full state were available for feedback. Since it is not, we have to implement the control law using the estimated state:

    u = F_2 \hat{x}                                      (1.6)

where \hat{x} is the estimate of x. A standard observer can be constructed from the system equations as

    \dot{\hat{x}} = A\hat{x} + B_2 u + L(C_2 \hat{x} - y)        (1.7)

where L is the observer gain, to be determined such that A + LC_2 is stable and J is minimized.
Let

    e := x - \hat{x}.

Then

    \dot{e} = (A + LC_2) e + (B_1 + L D_{21}) w =: A_L e + B_L w
    u - F_2 x = -F_2 e
    e(t) = \int_0^t e^{A_L(t-\tau)} B_L w(\tau)\, d\tau

and

    J_3 := E\left\{ \lim_{T\to\infty} \frac{1}{T} \int_0^T (u - F_2 x)^* R_1 (u - F_2 x)\, dt \right\}
        = E\left\{ \lim_{T\to\infty} \frac{1}{T} \int_0^T e^* F_2^* R_1 F_2 e\, dt \right\}
        = E\left\{ \lim_{T\to\infty} \frac{1}{T} \int_0^T \int_0^t \int_0^t w^*(\tau) B_L^* e^{A_L^*(t-\tau)} F_2^* R_1 F_2 e^{A_L(t-s)} B_L w(s)\, d\tau\, ds\, dt \right\}
        = \operatorname{trace}\left\{ \lim_{T\to\infty} \frac{1}{T} \int_0^T \int_0^t \int_0^t F_2^* R_1 F_2\, e^{A_L(t-s)} B_L E\{w(s) w^*(\tau)\} B_L^* e^{A_L^*(t-\tau)}\, d\tau\, ds\, dt \right\}
        = \operatorname{trace}\left\{ \lim_{T\to\infty} \frac{1}{T} \int_0^T \int_0^t \int_0^t F_2^* R_1 F_2\, e^{A_L(t-s)} B_L \delta(\tau - s) B_L^* e^{A_L^*(t-\tau)}\, d\tau\, ds\, dt \right\}
        = \operatorname{trace}\left\{ F_2^* R_1 F_2 \lim_{T\to\infty} \frac{1}{T} \int_0^T \int_0^t e^{A_L s} B_L B_L^* e^{A_L^* s}\, ds\, dt \right\}
        = \operatorname{trace}\{ F_2^* R_1 F_2 Y \}

where

    Y = \int_0^\infty e^{A_L t} B_L B_L^* e^{A_L^* t}\, dt

or

    (A + LC_2) Y + Y (A + LC_2)^* + (B_1 + L D_{21})(B_1 + L D_{21})^* = 0.

Subtracting the equation for Y_2 from the above equation gives

    (A + LC_2)(Y - Y_2) + (Y - Y_2)(A + LC_2)^* + (L - L_2) R_2 (L - L_2)^* = 0.

It is then clear that Y \ge Y_2, with equality if L = L_2. Hence J_3 is minimized by L = L_2.
Chapter 2

Understanding H∞ Control

We give an intuitive derivation of the H∞ controller.

2.1 Problem Formulation and Solutions
Consider again the standard feedback configuration, in which the generalized plant G is connected to the controller K, with exogenous input w, control input u, controlled output z, and measured output y, where

    G(s) = \left[ \begin{array}{c|cc} A & B_1 & B_2 \\ \hline C_1 & 0 & D_{12} \\ C_2 & D_{21} & 0 \end{array} \right].

The following assumptions are made:

(i) (A, B_1) is controllable and (C_1, A) is observable;

(ii) (A, B_2) is stabilizable and (C_2, A) is detectable;

(iii) D_{12}^* \begin{bmatrix} C_1 & D_{12} \end{bmatrix} = \begin{bmatrix} 0 & I \end{bmatrix};

(iv) \begin{bmatrix} B_1 \\ D_{21} \end{bmatrix} D_{21}^* = \begin{bmatrix} 0 \\ I \end{bmatrix}.
Theorem 2.1 There exists an admissible controller such that \|T_{zw}\|_\infty < \gamma if and only if the following three conditions hold:

(i) There exists a stabilizing solution X_\infty > 0 to

    X_\infty A + A^* X_\infty + X_\infty (B_1 B_1^*/\gamma^2 - B_2 B_2^*) X_\infty + C_1^* C_1 = 0.        (2.1)

(ii) There exists a stabilizing solution Y_\infty > 0 to

    A Y_\infty + Y_\infty A^* + Y_\infty (C_1^* C_1/\gamma^2 - C_2^* C_2) Y_\infty + B_1 B_1^* = 0.        (2.2)

(iii) \rho(X_\infty Y_\infty) < \gamma^2.

Moreover, when these conditions hold, one central controller is

    K_{sub}(s) := \left[ \begin{array}{c|c} \hat{A}_\infty & -Z_\infty L_\infty \\ \hline F_\infty & 0 \end{array} \right]

where

    \hat{A}_\infty := A + B_1 B_1^* X_\infty/\gamma^2 + B_2 F_\infty + Z_\infty L_\infty C_2
    F_\infty := -B_2^* X_\infty, \qquad L_\infty := -Y_\infty C_2^*, \qquad Z_\infty := (I - Y_\infty X_\infty/\gamma^2)^{-1}.
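Conditions (i)-(iii) and the central controller can be checked numerically by extracting the stabilizing Riccati solutions from the stable invariant subspace of the associated Hamiltonian matrices. The plant below and the value γ = 5 are illustrative choices of mine (γ is assumed to exceed the optimal value for this plant), not data from the text:

```python
# Sketch: X_inf, Y_inf and the central controller of Theorem 2.1 via
# Hamiltonian matrices.  Plant data and gamma are illustrative choices.
import numpy as np
from scipy.linalg import schur

def stabilizing_solution(A, R, Q):
    """Stabilizing solution of X A + A* X + X R X + Q = 0 (R, Q symmetric),
    from the stable invariant subspace of H = [[A, R], [-Q, -A*]]."""
    n = A.shape[0]
    H = np.block([[A, R], [-Q, -A.T]])
    _, Z, sdim = schur(H, output="real", sort="lhp")
    assert sdim == n, "no stabilizing solution for this gamma"
    return Z[n:, :n] @ np.linalg.inv(Z[:n, :n])

A   = np.array([[0.0, 1.0], [-1.0, -1.0]])
B1  = np.array([[1.0, 0.0], [0.5, 0.0]])
B2  = np.array([[0.0], [1.0]])
C1  = np.array([[1.0, 0.0], [0.0, 0.0]])
C2  = np.array([[1.0, 0.0]])
g   = 5.0   # gamma, assumed larger than the optimal gamma for this plant

Xinf = stabilizing_solution(A, B1 @ B1.T / g**2 - B2 @ B2.T, C1.T @ C1)
Yinf = stabilizing_solution(A.T, C1.T @ C1 / g**2 - C2.T @ C2, B1 @ B1.T)
# condition (iii): spectral radius of Xinf Yinf below gamma^2
assert np.abs(np.linalg.eigvals(Xinf @ Yinf)).max() < g**2

Finf = -B2.T @ Xinf
Linf = -Yinf @ C2.T
Zinf = np.linalg.inv(np.eye(2) - Yinf @ Xinf / g**2)

Ak = A + B1 @ B1.T @ Xinf / g**2 + B2 @ Finf + Zinf @ Linf @ C2
Bk = -Zinf @ Linf
Ck = Finf
print("controller A-matrix eigenvalues:", np.linalg.eigvals(Ak))
```

A closed-loop stability check (plant in feedback with (Ak, Bk, Ck)) confirms the controller is admissible.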
2.2 An Intuitive Derivation
Most existing derivations and proofs of the H∞ control results given in Theorem 2.1 are mathematically quite complex. Some algebraic derivations (such as the one given in the book) are simple, but they provide no insight into the theory for control engineers. Here we shall present an intuitive but nonrigorous derivation of the H∞ results, using only basic system-theoretic concepts such as state feedback and state estimation. In fact, we shall construct intuitively the output feedback H∞ central controller by combining an H∞ state feedback and an observer.
A key fact we shall use is the so-called bounded real lemma, which states that, for a stable system z = G(s)w with state space realization G(s) = C(sI - A)^{-1} B \in H_\infty, the condition \|G\|_\infty < \gamma, which is essentially equivalent to

    \int_0^\infty \left( \|z\|^2 - \gamma^2 \|w\|^2 \right) dt < 0, \qquad \forall w \ne 0,

holds if and only if there is an X = X^* \ge 0 such that

    XA + A^* X + X B B^* X/\gamma^2 + C^* C = 0

and A + B B^* X/\gamma^2 is stable. Dually, there is a Y = Y^* \ge 0 such that

    Y A^* + A Y + Y C^* C Y/\gamma^2 + B B^* = 0

and A + Y C^* C/\gamma^2 is stable.
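The bounded real lemma has a convenient computational counterpart: for stable A, \|G\|_\infty < \gamma exactly when the Hamiltonian [[A, BB*/γ²], [-C*C, -A*]] has no eigenvalues on the imaginary axis. A small check on a SISO example of mine, where G(s) = 4/((s+1)(s+2)) so that \|G\|_\infty = |G(0)| = 2:

```python
# Sketch: bounded real lemma via the Hamiltonian eigenvalue test.
# G(s) = C (sI - A)^{-1} B = 4 / ((s+1)(s+2));  ||G||_inf = |G(0)| = 2.
# Example system chosen by hand, not from the text.
import numpy as np

A = np.array([[-1.0, 4.0], [0.0, -2.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])

# ||G||_inf by frequency gridding (SISO; the peak is at omega = 0 here)
w = np.linspace(0.0, 20.0, 2001)
gain = np.abs(4.0 / ((1j * w + 1.0) * (1j * w + 2.0)))
hinf = gain.max()

def imag_axis_eig(gamma, tol=1e-7):
    """True iff some eigenvalue of the BRL Hamiltonian lies on the
    imaginary axis, which happens when gamma does not exceed ||G||_inf."""
    H = np.block([[A, B @ B.T / gamma**2], [-C.T @ C, -A.T]])
    return bool(np.abs(np.linalg.eigvals(H).real).min() < tol)

print(hinf)                # about 2.0
print(imag_axis_eig(2.5))  # False: 2.5 > ||G||_inf
print(imag_axis_eig(1.5))  # True:  1.5 < ||G||_inf
```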
Note that the system has the following state space realization:

    \dot{x} = Ax + B_1 w + B_2 u, \qquad z = C_1 x + D_{12} u, \qquad y = C_2 x + D_{21} w.

We shall first consider the state feedback u = Fx. Then the closed-loop system becomes

    \dot{x} = (A + B_2 F)x + B_1 w, \qquad z = (C_1 + D_{12} F)x.

By the bounded real lemma, \|T_{zw}\|_\infty < \gamma implies that there exists an X = X^* \ge 0 such that

    X(A + B_2 F) + (A + B_2 F)^* X + X B_1 B_1^* X/\gamma^2 + (C_1 + D_{12} F)^* (C_1 + D_{12} F) = 0

which is equivalent, by completing the square with respect to F (using assumption (iii)), to

    XA + A^* X + X B_1 B_1^* X/\gamma^2 - X B_2 B_2^* X + C_1^* C_1 + (F + B_2^* X)^* (F + B_2^* X) = 0.

Intuition suggests that we can take

    F = -B_2^* X

which gives

    XA + A^* X + X B_1 B_1^* X/\gamma^2 - X B_2 B_2^* X + C_1^* C_1 = 0.

This is exactly the X_\infty Riccati equation under the preceding simplified conditions. Hence, we can take F = F_\infty and X = X_\infty.
Next, suppose that there is an output feedback stabilizing controller such that \|T_{zw}\|_\infty < \gamma. Then x(\infty) = 0 because the closed-loop system is stable. Consequently, we have

    \int_0^\infty \left( \|z\|^2 - \gamma^2 \|w\|^2 \right) dt
      = \int_0^\infty \left( \|z\|^2 - \gamma^2 \|w\|^2 + \frac{d}{dt}(x^* X_\infty x) \right) dt
      = \int_0^\infty \left( \|z\|^2 - \gamma^2 \|w\|^2 + \dot{x}^* X_\infty x + x^* X_\infty \dot{x} \right) dt.

Substituting \dot{x} = Ax + B_1 w + B_2 u and z = C_1 x + D_{12} u into the above integral, using the X_\infty equation, and finally completing the squares with respect to u and w, we get

    \int_0^\infty \left( \|z\|^2 - \gamma^2 \|w\|^2 \right) dt = \int_0^\infty \left( \|v\|^2 - \gamma^2 \|r\|^2 \right) dt

where v = u + B_2^* X_\infty x = u - F_\infty x and r = w - B_1^* X_\infty x/\gamma^2. Substituting w into the system equations, we have the new system equations

    \dot{x} = (A + B_1 B_1^* X_\infty/\gamma^2)x + B_1 r + B_2 u
    v = -F_\infty x + u
    y = C_2 x + D_{21} r.
Hence the original H∞ control problem is equivalent to finding a controller so that \|T_{vr}\|_\infty < \gamma, or

    \int_0^\infty \left( \|u - F_\infty x\|^2 - \gamma^2 \|r\|^2 \right) dt < 0.

Obviously, this also suggests intuitively that the state feedback control can be u = F_\infty x, and a worst state feedback disturbance would be w = B_1^* X_\infty x/\gamma^2. Since the full state is not available for feedback, we have to implement the control law using the estimated state:

    u = F_\infty \hat{x}

where \hat{x} is the estimate of x. A standard observer can be constructed from the new system equations as

    \dot{\hat{x}} = (A + B_1 B_1^* X_\infty/\gamma^2)\hat{x} + B_2 u + L(C_2 \hat{x} - y)

where L is the observer gain to be determined. Let e := x - \hat{x}. Then

    \dot{e} = (A + B_1 B_1^* X_\infty/\gamma^2 + LC_2)e + (B_1 + L D_{21}) r
    v = -F_\infty e.
Since it is assumed that \|T_{vr}\|_\infty < \gamma, it follows from the dual version of the bounded real lemma that there exists a Y \ge 0 such that

    Y(A + B_1 B_1^* X_\infty/\gamma^2 + LC_2)^* + (A + B_1 B_1^* X_\infty/\gamma^2 + LC_2)Y
      + Y F_\infty^* F_\infty Y/\gamma^2 + (B_1 + L D_{21})(B_1 + L D_{21})^* = 0.

The above equation can be written as

    Y(A + B_1 B_1^* X_\infty/\gamma^2)^* + (A + B_1 B_1^* X_\infty/\gamma^2)Y + Y F_\infty^* F_\infty Y/\gamma^2
      + B_1 B_1^* - Y C_2^* C_2 Y + (L + Y C_2^*)(L + Y C_2^*)^* = 0.

Again, intuition suggests that we can take

    L = -Y C_2^*

which gives

    Y(A + B_1 B_1^* X_\infty/\gamma^2)^* + (A + B_1 B_1^* X_\infty/\gamma^2)Y + Y F_\infty^* F_\infty Y/\gamma^2 - Y C_2^* C_2 Y + B_1 B_1^* = 0.

It is easy to verify that

    Y = Y_\infty (I - X_\infty Y_\infty/\gamma^2)^{-1}

where Y_\infty is as given in Theorem 2.1. Since we need Y \ge 0, we must have

    \rho(X_\infty Y_\infty) < \gamma^2.
Hence L = Z_\infty L_\infty and the controller is given by

    \dot{\hat{x}} = (A + B_1 B_1^* X_\infty/\gamma^2)\hat{x} + B_2 u + Z_\infty L_\infty (C_2 \hat{x} - y)
    u = F_\infty \hat{x}

which is exactly the H∞ central controller given in Theorem 2.1.

We can see that the H∞ central controller can be obtained by combining an H∞ state feedback with a state estimator operating under the worst state feedback disturbance.
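The step "it is easy to verify that Y = Y_\infty (I - X_\infty Y_\infty/\gamma^2)^{-1}" can be confirmed numerically: under the simplified assumptions, this Y satisfies the observer Riccati equation obtained after taking L = -YC_2^*. Plant data and γ below are illustrative choices of mine:

```python
# Sketch: numeric check that Y = Yinf (I - Xinf Yinf / g^2)^{-1} solves
#   Y Abar* + Abar Y + Y Finf* Finf Y / g^2 - Y C2* C2 Y + B1 B1* = 0,
# with Abar = A + B1 B1* Xinf / g^2 and Finf = -B2* Xinf.
# Plant data and gamma are illustrative, not from the text.
import numpy as np
from scipy.linalg import schur

def stabilizing_solution(A, R, Q):
    """Stabilizing solution of X A + A* X + X R X + Q = 0 via the stable
    invariant subspace of the Hamiltonian [[A, R], [-Q, -A*]]."""
    n = A.shape[0]
    H = np.block([[A, R], [-Q, -A.T]])
    _, Z, sdim = schur(H, output="real", sort="lhp")
    assert sdim == n
    return Z[n:, :n] @ np.linalg.inv(Z[:n, :n])

A   = np.array([[0.0, 1.0], [-1.0, -1.0]])
B1  = np.array([[1.0, 0.0], [0.5, 0.0]])
B2  = np.array([[0.0], [1.0]])
C1  = np.array([[1.0, 0.0], [0.0, 0.0]])
C2  = np.array([[1.0, 0.0]])
g   = 5.0

Xinf = stabilizing_solution(A, B1 @ B1.T / g**2 - B2 @ B2.T, C1.T @ C1)
Yinf = stabilizing_solution(A.T, C1.T @ C1 / g**2 - C2.T @ C2, B1 @ B1.T)

Y    = Yinf @ np.linalg.inv(np.eye(2) - Xinf @ Yinf / g**2)
Abar = A + B1 @ B1.T @ Xinf / g**2
Finf = -B2.T @ Xinf
residual = (Y @ Abar.T + Abar @ Y + Y @ Finf.T @ Finf @ Y / g**2
            - Y @ C2.T @ C2 @ Y + B1 @ B1.T)
print(np.abs(residual).max())  # essentially zero
```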
Part II

Solutions Manual