SPRINGER ADVANCED TEXTBOOKS IN CONTROL AND SIGNAL PROCESSING
Robust and Adaptive Control with
Aerospace Applications
The Solutions Manual
Eugene Lavretsky, Ph.D., Kevin A. Wise, Ph.D.
2/9/2014
The solutions manual covers all theoretical problems from the textbook
and discusses several simulation-oriented exercises.
Chapter 1
Exercise 1.1. Detailed derivations of the aircraft dynamic modes can be found in
many standard flight dynamics textbooks, such as [1]. Further insights can be obtained through approximations of these modes and their dynamics using time-scale
separation properties of the equations that govern flight dynamics [3].
The time-scale separation concept will now be illustrated. Set the control inputs to zero and change the order of the states in (1.7) to \(x=\begin{bmatrix}\alpha & q & v_T & \theta\end{bmatrix}^T\). Then the longitudinal dynamics can be partitioned into two subsystems:

\[
\begin{bmatrix}\dot{\alpha}\\ \dot{q}\\ \dot{v}_T\\ \dot{\theta}\end{bmatrix}=
\begin{bmatrix}
\frac{Z_\alpha}{V_0} & 1+\frac{Z_q}{V_0} & \frac{Z_V}{V_0} & -\frac{g\sin\theta_0}{V_0}\\
M_\alpha & M_q & M_V & 0\\
X_\alpha & 0 & X_V & -g\cos\theta_0\\
0 & 1 & 0 & 0
\end{bmatrix}
\begin{bmatrix}\alpha\\ q\\ v_T\\ \theta\end{bmatrix}
\]

or equivalently,

\[
\begin{bmatrix}\dot{x}_1\\ \dot{x}_2\end{bmatrix}=
\begin{bmatrix}A_{11} & A_{12}\\ A_{21} & A_{22}\end{bmatrix}
\begin{bmatrix}x_1\\ x_2\end{bmatrix}
\]
For most aircraft, the flight speed is much greater than the vertical force and pitching moment sensitivity derivatives with respect to speed, and so \(\frac{Z_V}{V_0}\approx0\) and \(M_V\approx0\). Also \(\frac{g\sin\theta_0}{V_0}\approx0\), and so \(A_{12}\approx0_{2\times2}\). These assumptions are at the core of the time-scale separation principle: the aircraft fast dynamics (the short-period) are almost independent of and decoupled from the vehicle slow dynamics (the phugoid). With \(A_{12}\approx0_{2\times2}\), the short-period dynamics are:

\[
\begin{bmatrix}\dot{\alpha}\\ \dot{q}\end{bmatrix}=
\begin{bmatrix}\frac{Z_\alpha}{V_0} & 1+\frac{Z_q}{V_0}\\ M_\alpha & M_q\end{bmatrix}
\begin{bmatrix}\alpha\\ q\end{bmatrix}
\]

The short-period characteristic equation is:

\[
\det\begin{bmatrix}\lambda-\frac{Z_\alpha}{V_0} & -\left(1+\frac{Z_q}{V_0}\right)\\ -M_\alpha & \lambda-M_q\end{bmatrix}
=\lambda^2-\left(\frac{Z_\alpha}{V_0}+M_q\right)\lambda+\left(\frac{Z_\alpha}{V_0}M_q-M_\alpha\left(1+\frac{Z_q}{V_0}\right)\right)=0
\]

Comparing this to the second-order polynomial \(\lambda^2+2\zeta_{sp}\omega_{sp}\lambda+\omega_{sp}^2=0\) gives the short-period natural frequency and damping ratio:

\[
\omega_{sp}=\sqrt{\frac{Z_\alpha}{V_0}M_q-M_\alpha\left(1+\frac{Z_q}{V_0}\right)},\qquad
\zeta_{sp}=\frac{-\left(\frac{Z_\alpha}{V_0}+M_q\right)}{2\,\omega_{sp}}
\]

For open-loop stable aircraft, \(\frac{Z_q}{V_0}\approx0\), \(M_\alpha<0\), \(M_q<0\), and \(\frac{Z_\alpha}{V_0}<0\), which allows for further simplification of the vehicle short-period modes:

\[
\omega_{sp}\approx\sqrt{-M_\alpha},\qquad
\zeta_{sp}\approx\frac{-\left(M_q+\frac{Z_\alpha}{V_0}\right)}{2\sqrt{-M_\alpha}}
\]
Approximations for the phugoid dynamics can be derived by setting the short-period dynamics to zero and computing the remaining modes. This is often referred to as the "residualization of fast dynamics":

\[
\begin{bmatrix}0\\ \dot{x}_2\end{bmatrix}=
\begin{bmatrix}A_{11} & A_{12}\\ A_{21} & A_{22}\end{bmatrix}
\begin{bmatrix}x_1\\ x_2\end{bmatrix}
\;\Rightarrow\;
x_1=-A_{11}^{-1}A_{12}\,x_2,\qquad
\dot{x}_2=\left(A_{22}-A_{21}A_{11}^{-1}A_{12}\right)x_2
\]

The phugoid dynamics approximation takes the form \(\dot{x}_2=\left(A_{22}-A_{21}A_{11}^{-1}A_{12}\right)x_2\), or equivalently

\[
\begin{bmatrix}\dot{v}_T\\ \dot{\theta}\end{bmatrix}=
\left(
\begin{bmatrix}X_V & -g\cos\theta_0\\ 0 & 0\end{bmatrix}-
\begin{bmatrix}X_\alpha & 0\\ 0 & 1\end{bmatrix}
\begin{bmatrix}\frac{Z_\alpha}{V_0} & 1+\frac{Z_q}{V_0}\\ M_\alpha & M_q\end{bmatrix}^{-1}
\begin{bmatrix}\frac{Z_V}{V_0} & -\frac{g\sin\theta_0}{V_0}\\ M_V & 0\end{bmatrix}
\right)
\begin{bmatrix}v_T\\ \theta\end{bmatrix}
\]
Explicit derivations for the phugoid mode natural frequency and damping ratio
can be found in [3].
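As a quick numerical check of the residualization procedure, the following MATLAB sketch (not from the textbook; the numeric longitudinal matrix is made up purely for illustration) compares the exact eigenvalues of a four-state longitudinal model with the short-period and phugoid approximations derived above:

% Hypothetical longitudinal matrix in the reordered state x = [alpha; q; vT; theta]
A = [ -1.0   0.95  -0.002  -0.001 ;
      -4.0  -1.2   -0.010   0.0   ;
      -8.0   0.0   -0.020  -32.2  ;
       0.0   1.0    0.0     0.0   ];
A11 = A(1:2,1:2);  A12 = A(1:2,3:4);
A21 = A(3:4,1:2);  A22 = A(3:4,3:4);
exact_modes  = eig(A)                    % all four longitudinal eigenvalues
short_period = eig(A11)                  % fast approximation (A12 neglected)
phugoid      = eig(A22 - A21*(A11\A12))  % residualized slow approximation

For realistic aerodynamic data the two approximate pairs land close to the exact short-period and phugoid eigenvalues, which is the time-scale separation argument in numerical form.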
Exercise 1.3. An approach similar to that of Exercise 1.1, based on the time-scale separation principle, can also be applied to the lateral-directional dynamics (1.29) to show that the dynamics (1.30) represent the "fast" motion and are comprised of the vehicle roll subsidence and Dutch-roll modes. The slow mode (called the "spiral") has its root at the origin. The reader is referred to [1], [2], and [3] for details.
Chapter 2
Exercise 2.1. Same as Example 2.1.
Exercise 2.2. Scalar system is controllable. Use (2.38).
A   1; B  1; Q  1; R  1; QT  1
The Riccati equation is  P  PA  AT P  Q  PBR1BT P with boundary condition P 1  1 .

dP
 P 2  2P  1,
dt
From integral table P  t  

P 1 1

Pt 
dP
dP 
P  2P  1
 
2 1
1
2 2 2
e
2 2
2 1 
 22 
2 2
e
2
2  t 1
u*  R1 BT P t  x  P t  x
The closed loop block diagram is
T 1
2
 dt  1  t
t
2  t 1
Exercise 2.3. For this problem we use the algebraic Riccati equation (2.45) to
solve for the constant feedback gain matrix. From the problem set up
0 1 
0
1 0 
A
; B    ;Q  

 ; R  1
1 0 
1 
0 0 
0 1
.
AB  

1 0 RK 2
Check observability of the unstable modes through the penalty matrix (2.53) and
(2.54) for the system. Factor Q into square roots (2.53): Check controllability
Check controllability (2.52) for the system: Pc   B
1 
1 0
(2.52) for the system: Q  Q1T Q1    1 0  
 . Using (2.54), gives
0
0 0
 Q1  1 0
. Substituting the problem data into the ARE (2.45), yields:
Q A  0 1
 RK  2
 1  
 l m
P
 , and therefore:
m n 
0  PA  AT P  Q  PBR 1 B T P
 l m   0 1   0 1   l m   1 0  l m   0
 l m
0



10 1 












 m n   1 0  1 0  m n   0 0  m n   1
m n 

2m  m 2  1  0; m  1  2, l  n  mn  0, 2m  n 2  0; n  2 1  2
 l m
u   R 1 B T Px   0 1 
x   m n  x   1  2


m n 



2 1 2  x
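The hand-derived ARE solution can be checked against the Control System Toolbox, assuming lqr is available:

A = [0 1; 1 0];  B = [0; 1];  Q = [1 0; 0 0];  R = 1;
[K, P] = lqr(A, B, Q, R);
m = 1 + sqrt(2);  n = sqrt(2*(1 + sqrt(2)));  l = sqrt(2)*n;
disp(norm(P - [l m; m n]));   % ~0: P matches the hand solution
disp(norm(K - [m n]));        % ~0: u = -[m n]x matches the LQR gain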

Exercise 2.4. Use Matlab.
Exercise 2.5. For this problem we use the algebraic Riccati equation (2.45) to
solve for the constant feedback gain matrix. From the problem set up
0 1
0
 4 0
A 
 ; B  1 ; Q   0 0 ; R  1
0
0


 


.
Check controllability (2.52) for the system: Pc   B
0 1
.
AB  

1 0 RK 2
6
Robust and Adaptive Control
Check observability of the unstable modes through the penalty matrix (2.53) and
(2.54) for the system. Factor Q into square roots (2.53): Check controllability
2
 4 0
(2.52) for the system: Q  Q1T Q1     2 0  
.
0
 
 0 0
 Q  2 0
Using (2.54):  1   
.

Q1 A  0 2  RK  2
0  PA  AT P  Q  PBR 1B T P
 l m  0
0

 m n  0
0 l  0
0

0 m   l
1   0 0   l m   4 0  l m   0
 l m



10 1 










0 1 0  m n   0 0  m n   1
m n 
0   4 0  l m   0 0  l m 


m   0 0  m n   0 1  m n 
0 l  0 0   4 0  m 2 mn 
0



2 
0 m   l m   0 0  mn n 
m 2  4; m  2
l  mn  0
2m  n 2  0; n  2
 l m
u   R 1 B T Px   0 1 
 x   m n  x   2 2 x
m n 
K lqr  k1 k2    2 2
The closed loop system matrix is
0 1
Acl  A  BK lqr  
 ;   Acl    1  j  1  j 
 2 2 



Exercise 2.6.
\[
J=\frac{1}{2}\int_{0}^{\infty}\left(q\,x^{T}Qx+\rho\,u^{T}Ru\right)dt
\]

We answer this problem using (2.85). When well posed, the LQR problem guarantees a stable closed-loop system. When q goes to zero, the overall penalty on the states goes to zero. From (2.85), \(\Delta_{cl}\left(s\right)\Delta_{cl}\left(-s\right)=\Delta_{ol}\left(s\right)\Delta_{ol}\left(-s\right)\), so the closed-loop poles will be the stable zeros of \(\Delta_{ol}\left(s\right)\Delta_{ol}\left(-s\right)\), where \(\Delta_{ol}\left(s\right)=\det\left(sI_{n_x}-A\right)\) is the open-loop characteristic polynomial. If any open-loop poles in \(\Delta_{ol}\left(s\right)\) are unstable, their mirror images in \(\Delta_{ol}\left(-s\right)\) are stable. When \(\rho\) goes to zero, the overall penalty on the control goes to zero. This creates a high-gain situation. From (2.89), some of the roots will go to infinity along asymptotes. Those that stay finite will approach the stable transmission zeros of the weighted transfer function matrix \(H\left(sI-A\right)^{-1}B\) (with \(Q=H^{T}H\)). These finite zeros, shaped by the Q matrix, control the dynamic response of the optimal regulator.
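The high-gain (rho to zero) behavior can be visualized with a short MATLAB sketch on a made-up SISO example (the plant below is hypothetical and chosen only so that it has one finite transmission zero):

A = [0 1; -2 -3];  B = [0; 1];  C = [3 1];   % open-loop zero at s = -3
Q = C'*C;
for rho = [10 1 0.1 0.01 1e-4]
    K = lqr(A, B, Q, rho);
    fprintf('rho = %-8g  closed-loop poles: %s\n', rho, mat2str(eig(A-B*K), 3));
end
% As rho shrinks, one pole approaches the stable zero at s = -3 while the
% other moves out along the negative real axis, as described above.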
Exercise 2.7. 1) The HJB equation is \(-\frac{\partial J^{*}}{\partial t}=\min_{u}H=H^{*}\left(x,\frac{\partial J^{*}}{\partial x},t\right)\). Substituting the system data into the equation gives

\[
H=l+\frac{\partial J^{*}}{\partial x}f=\frac{1}{2}\left(x-r\right)^{2}+\frac{1}{2}u^{2}+\frac{\partial J^{*}}{\partial x}\,u
\]
\[
\frac{\partial H}{\partial u}=0\;\Rightarrow\;u^{*}=-\frac{\partial J^{*}}{\partial x}
\]

Substitute this back into H to form H*:

\[
-\frac{\partial J^{*}}{\partial t}=\frac{1}{2}\left(x-r\right)^{2}+\frac{1}{2}\left(\frac{\partial J^{*}}{\partial x}\right)^{2}-\left(\frac{\partial J^{*}}{\partial x}\right)^{2}
=\frac{1}{2}\left(x-r\right)^{2}-\frac{1}{2}\left(\frac{\partial J^{*}}{\partial x}\right)^{2}
\]

with boundary condition \(J^{*}\left(x\left(T\right),T\right)=\frac{1}{2}q_{T}\left(x\left(T\right)-r\left(T\right)\right)^{2}\).

2) If we try \(J^{*}\left(x,t\right)=\frac{1}{2}P\left(t\right)x^{2}+g\left(t\right)x+w\left(t\right)\), then

\[
\frac{\partial J^{*}}{\partial t}=\frac{1}{2}\dot{P}x^{2}+\dot{g}x+\dot{w},\qquad
\frac{\partial J^{*}}{\partial x}=Px+g
\]

Substitute back into the HJB equation:

\[
-\frac{1}{2}\dot{P}x^{2}-\dot{g}x-\dot{w}=\frac{1}{2}\left(x-r\right)^{2}-\frac{1}{2}\left(Px+g\right)^{2}
=\frac{1}{2}x^{2}-xr+\frac{1}{2}r^{2}-\frac{1}{2}P^{2}x^{2}-Pgx-\frac{1}{2}g^{2}
\]

Group terms:

\[
\frac{1}{2}\left(\dot{P}+1-P^{2}\right)x^{2}+\left(\dot{g}-Pg-r\right)x+\left(\dot{w}+\frac{1}{2}r^{2}-\frac{1}{2}g^{2}\right)=0
\]

We want this to hold for all x, which gives three equations, one per coefficient in parentheses:

\[
\dot{P}=P^{2}-1,\qquad \dot{g}=Pg+r,\qquad \dot{w}=\frac{1}{2}g^{2}-\frac{1}{2}r^{2}
\]

To solve for the boundary conditions, expand \(J^{*}\left(x\left(T\right),T\right)=\frac{1}{2}q_{T}\left(x\left(T\right)-r\left(T\right)\right)^{2}\) and equate coefficients:

\[
\frac{1}{2}P\left(T\right)x^{2}\left(T\right)+g\left(T\right)x\left(T\right)+w\left(T\right)=\frac{1}{2}q_{T}\left(x^{2}\left(T\right)-2x\left(T\right)r\left(T\right)+r^{2}\left(T\right)\right)
\]
\[
P\left(T\right)=q_{T},\qquad g\left(T\right)=-q_{T}r\left(T\right),\qquad w\left(T\right)=\frac{1}{2}q_{T}r^{2}\left(T\right)
\]
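The three ODEs and their terminal conditions can be integrated backward numerically; the sketch below uses a made-up command r(t) = 1, horizon T = 1, and terminal weight qT = 2 purely for illustration:

T = 1;  qT = 2;  r = @(t) 1 + 0*t;           % hypothetical problem data
rhs = @(t, s) [ s(1)^2 - 1;                   % dP/dt = P^2 - 1
                s(1)*s(2) + r(t);             % dg/dt = P*g + r
                0.5*s(2)^2 - 0.5*r(t)^2 ];    % dw/dt = g^2/2 - r^2/2
sT = [qT; -qT*r(T); 0.5*qT*r(T)^2];           % P(T), g(T), w(T)
[t, s] = ode45(rhs, [T 0], sT);               % integrate backward to t = 0
plot(t, s); legend('P','g','w');
% The optimal control is then u*(t) = -dJ*/dx = -(P(t)*x + g(t)).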


Chapter 3
0 1
0
x    u . The
Exercise 3.1. a) The suspended ball plant model is x  

1 0
1
2
open loop eigenvalues are: det  sI  A  s  1 , i  1 Since the model is in
controllable canonic form, we see that it is controllable. The state feedback control law is u   Kx which has two elements in the gain matrix. The closed loop
system using the state feedback control is x   A  BK  x  Acl x .
1 
 0
 0 1   0
Acl  
   k1 k2   


1 0 1 
1  k1 k2 
1  2
 s
det  sI  Acl   det 
  s  k2 s  k1  1  cl  s 
 1  k1 s  k2 
The poles are to be placed at -1/2, -1, which gives a desired closed loop character1
3
1

istic polynomial cl  s    s  1  s    s 2  s  . Equate the coefficients
2
2
2


with the gains:
3
2
cl  s   s 2  s 
3
K
2
1
 s 2  k2 s  k1  1
2
3
2 
b) Here we need to design a full-order observer that has poles at -4 and -5. The output is defined to be \(y=x_{1}\), \(C=\begin{bmatrix}1&0\end{bmatrix}\). An observability test shows the system is observable with this measurement:

\[
Q_{o}=\begin{bmatrix}C\\CA\end{bmatrix}=\begin{bmatrix}1&0\\0&1\end{bmatrix},\quad\text{rank }2
\]

A full-order observer has the form

\[
\dot{\hat{x}}=A\hat{x}+Bu+K_{o}\left(y-\hat{y}\right)=\left(A-K_{o}C\right)\hat{x}+Bu+K_{o}y
\]

The poles of the observer, \(\lambda\left(A-K_{o}C\right)\), are to be placed at -4 and -5.

\[
A-K_{o}C=\begin{bmatrix}0&1\\1&0\end{bmatrix}-\begin{bmatrix}k_{o1}\\k_{o2}\end{bmatrix}\begin{bmatrix}1&0\end{bmatrix}=\begin{bmatrix}-k_{o1}&1\\1-k_{o2}&0\end{bmatrix}
\]
\[
\det\left(sI-A+K_{o}C\right)=s^{2}+k_{o1}s+k_{o2}-1=\left(s+4\right)\left(s+5\right)=s^{2}+9s+20
\;\Rightarrow\;
K_{o}=\begin{bmatrix}9\\21\end{bmatrix}
\]

Using observer feedback for the control \(u=-K\hat{x}\) yields

\[
\dot{\hat{x}}=\left(A-BK-K_{o}C\right)\hat{x}+K_{o}y,\qquad u=-K\hat{x}
\]

where \(K=\begin{bmatrix}\tfrac{3}{2}&\tfrac{3}{2}\end{bmatrix}\) and \(K_{o}=\begin{bmatrix}9\\21\end{bmatrix}\). Connecting the plant and controller to form the extended closed-loop system gives

\[
\begin{bmatrix}\dot{x}\\\dot{\hat{x}}\end{bmatrix}=
\underbrace{\begin{bmatrix}A&-BK\\K_{o}C&A-BK-K_{o}C\end{bmatrix}}_{A_{cl}}
\begin{bmatrix}x\\\hat{x}\end{bmatrix}
\]

Substituting the matrices to form \(A_{cl}\) results in

\[
\det\left(sI-A_{cl}\right)=\left(s+1\right)\left(s+\tfrac{1}{2}\right)\left(s+4\right)\left(s+5\right)
\]
c) For the reduced-order observer, we will implement a Luenberger design that has the form \(\dot{q}=G_{1}q+G_{2}y+G_{3}u\). Choose the matrix C' to make the matrix \(\begin{bmatrix}C\\C'\end{bmatrix}\) nonsingular and define \(\begin{bmatrix}C\\C'\end{bmatrix}^{-1}=\begin{bmatrix}L_{1}&L_{2}\end{bmatrix}\). Then the reduced-order observer matrices and control law are given by:

\[
G_{1}=C'AL_{2}-K_{r}CAL_{2},\qquad
G_{2}=C'AL_{2}K_{r}+C'AL_{1}-K_{r}CAL_{1}-K_{r}CAL_{2}K_{r}
\]
\[
G_{3}=C'B-K_{r}CB,\qquad
\hat{x}=L_{2}q+\left(L_{1}+L_{2}K_{r}\right)y
\]

where \(K_{r}\) is the reduced-order observer gain matrix. First we need C':

\[
C=\begin{bmatrix}1&0\end{bmatrix},\quad C'=\begin{bmatrix}0&1\end{bmatrix},\quad
\begin{bmatrix}C\\C'\end{bmatrix}^{-1}=\begin{bmatrix}L_{1}&L_{2}\end{bmatrix}=I_{2},\quad
L_{1}=\begin{bmatrix}1\\0\end{bmatrix},\;L_{2}=\begin{bmatrix}0\\1\end{bmatrix}
\]

Substituting into the observer matrices yields

\[
G_{1}=C'AL_{2}-K_{r}CAL_{2}=-K_{r},\qquad
G_{2}=C'AL_{2}K_{r}+C'AL_{1}-K_{r}CAL_{1}-K_{r}CAL_{2}K_{r}=1-K_{r}^{2},\qquad
G_{3}=C'B-K_{r}CB=1
\]

The problem asks to place the observer pole at -6. Thus \(G_{1}=-6\), which gives \(K_{r}=6\) and \(G_{2}=-35\). So \(\dot{q}=-6q-35y+u\). To form the state estimate \(\hat{x}\) we use \(\hat{x}=L_{2}q+\left(L_{1}+L_{2}K_{r}\right)y=\begin{bmatrix}0\\1\end{bmatrix}q+\begin{bmatrix}1\\6\end{bmatrix}y\). The control is \(u=-K\hat{x}\), where K is the gain computed in part a):

\[
u=-K\left(L_{2}q+\left(L_{1}+L_{2}K_{r}\right)y\right)=-\begin{bmatrix}\tfrac{3}{2}&\tfrac{3}{2}\end{bmatrix}\left(\begin{bmatrix}0\\1\end{bmatrix}q+\begin{bmatrix}1\\6\end{bmatrix}y\right)=-\frac{3}{2}q-\frac{21}{2}y
\]
\[
\dot{q}=-6q-35y+u=-6q-35y-\frac{3}{2}q-\frac{21}{2}y=-\frac{15}{2}q-\frac{91}{2}y
\]

To form the controller as a single transfer function, take the Laplace transform and combine the above expressions, which gives

\[
u\left(s\right)=-\frac{21\left(s+1\right)}{2s+15}\,y\left(s\right)
\]

[Block diagram: the plant \(u\to B,\;\left(sI-A\right)^{-1},\;C\to y=x_{1}\), with the controller \(\frac{21\left(s+1\right)}{2s+15}\) closing the loop from y back to u through a sign inversion.]
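The pole-placement gains and the final controller can be verified with a few MATLAB lines (assuming place and the Control System Toolbox are available):

A = [0 1; 1 0];  B = [0; 1];  C = [1 0];
K  = place(A, B, [-0.5 -1])        % expect [1.5 1.5]
Ko = place(A', C', [-4 -5])'       % expect [9; 21]
Kr = 6;  G1 = -Kr;  G2 = 1 - Kr^2;  G3 = 1;   % reduced-order observer: -6, -35, 1
s  = tf('s');
Kc = -21*(s + 1)/(2*s + 15);       % controller u = Kc*y from part c)
G  = tf(ss(A, B, C, 0));           % plant y/u = 1/(s^2 - 1)
pole(feedback(G, -Kc))             % closed-loop poles: -0.5, -1, -6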


Exercise 3.2. Use file Chapter3_Example3p5.m to solve this problem.
Exercise 3.3. Use file Chapter3_Example3p5.m to solve this problem.
Exercise 3.4. Use file Chapter3_Example3p5.m to solve this problem.
Chapter 4
Exercise 4.1. Use file Chapter4_Example4p5.m to solve this problem.
Exercise 4.2. Use file Chapter4_Example4p5.m to solve this problem.
Exercise 4.3. Use file Chapter4_Example4p5.m to solve this problem.
Chapter 5
Exercise 5.1. The following code will complete the problem:
% Set up frequency vector
w = logspace(-3,3,500);
% Loop through each frequency and compute the det[I+KH]
% The controller is a constant matrix so keep it outside the loop
K = [ 5. 0.; 0. -10. ];
x_mnt = zeros(size(w));% pre-allocate for speed
x_rd = zeros(size(w));
x_sr = zeros(size(w));
% The plant is in transfer function form.
% Evaluate the plant at each frequency
for ii=1:numel(w),
s = sqrt(-1)*w(ii);
d_h = s*(s+6.02)*(s+30.3);
H = [ 3.04/s -278/d_h ;
0.052/s -206.6/d_h ];
L = K*H;
rd = eye(2)+L;
sr = eye(2)+inv(L);
x_mnt(ii) = det(rd);
x_rd(ii) = min(svd(rd)); % this is sigma_min(I+L) since a scalar
x_sr(ii) = min(svd(sr)); % this is sigma_min(I+invL) since a scalar
end
% Compute the min(min(singular values))
rd_min = min(x_rd);
sr_min = min(x_sr);
% For part b) we need to compute the singular value stability margins
neg_gm = 20*log10( min([ (1/(1+rd_min)) (1-sr_min)])); % in dB
pos_gm = 20*log10( max([ (1/(1-rd_min)) (1+sr_min)])); % in dB
pm = 180*(min([2*asin(rd_min/2) 2*asin(sr_min/2)]))/pi; % in deg
To plot the multivariable Nyquist results, plot the real and imaginary components
of x_mnt and count encirclements.
[Figure: "MNT Problem 5.1a" — multivariable Nyquist plot of det(I+KH), real vs. imaginary parts.]
We see no encirclements.
b) Plot the minimum singular values of \(I+L\) and \(I+L^{-1}\) versus frequency, converting to dB.
[Figure: "Problem 5.1b" — minimum singular value of I+L versus frequency (dB); min = 0.38867.]
[Figure: "Problem 5.1b" — minimum singular value of I+inv(L) versus frequency (dB); min = 0.45266.]
The singular value stability margins are computed using Eq. (5.53) and (5.54): \(\underline{\sigma}\left(I+L\right)=0.3887\), \(\underline{\sigma}\left(I+L^{-1}\right)=0.4527\). The singular value stability margins are GM = [-5.2348, 4.2746] dB, PM = +/- 44.4 deg.
Exercise 5.2. a) Neglect r. From the figure, write the loop equations and form M:

\[
z=-Kx,\qquad x=s^{-1}\left(z+w\right),\qquad z=-Ks^{-1}\left(z+w\right)
\;\Rightarrow\;
\frac{z}{w}=M\left(s\right)=-\frac{K}{s+K}
\]

where M should be the negative of the closed-loop transfer function at the plant-input loop break point. The state-space model is not unique. Here we choose

\[
A_{M}=-K,\quad B_{M}=1,\quad C_{M}=-K,\quad D_{M}=0,\qquad
M\left(s\right)=C_{M}\left(sI-A_{M}\right)^{-1}B_{M}+D_{M}=-\frac{K}{s+K}
\]

b) For stability we need the pole of the transfer function to be stable. Thus K > 0.

c) From equation (5.92) we plot \(\frac{1}{\left|M\right|}\):

[Sketch: magnitude plot of 1/|M| versus frequency; it sits at 0 dB at low frequency and breaks upward at w = 1 rad/s.]

d) The small gain theorem (5.92) says that \(\frac{1}{\left|M\right|}>\left|\Delta\left(j\omega\right)\right|\;\forall\omega\). The above plot shows that as long as \(\left|\Delta\right|<1\) this will be satisfied. If we compute the closed-loop transfer function from command to output in problem figure i), we have

\[
T=\frac{K\left(1+\Delta\right)}{s+K\left(1+\Delta\right)}
\]

As long as \(\left|\Delta\right|<1\), the uncertainty cannot create a negative coefficient on the gain in the denominator.
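A short MATLAB sketch of parts a) through d), using K = 1 (consistent with the break frequency noted in the plot above):

K = 1;
M = tf(-K, [1 K]);                          % M(s) = -K/(s+K) from part a)
w = logspace(-2, 2, 200);
invM = 1 ./ squeeze(abs(freqresp(M, w)));   % 1/|M(jw)|
semilogx(w, 20*log10(invM));                % 0 dB at low frequency, rising above w = K
min(invM)                                    % ~1, so any |Delta| < 1 satisfies (5.92)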
Exercise 5.3. a) Populate the problem data into the plant (5.8) and controller
(5.18). Then use (1.42) to form the closed loop system.
% Pitch Dynamics
Vm = 886.78;
ZapV = -1.3046; ZdpV = -0.2142;
Ma = 47.7109; Md = -104.8346;
Ka = -0.0015; Kq = -0.32;
az = 2.0; aq = 6.0;
w = logspace(-3,3,500);
t = 0.:0.01:2.;
% Plant states are AOA (rad) q (rps)
% Input is fin deflection (rad)
% Output is Accel Az (fps) and pitch rate q (rps)
A = [ ZapV 1.; Ma 0.];
B = [ZdpV; Md];
C = [Vm*ZapV 0.; 0. 1.];
D = [Vm*ZdpV; 0.];
% Controller uses PI elements to close the pitch rate loop and the Accel loop
Ac = [ 0. 0.; Kq*aq 0.];
Bc = [ Ka*az 0.; Ka*Kq*aq Kq*aq];
Cc = [ Kq 1.];
Dc = [ Ka*Kq Kq];
% Form closed loop system
[kl,kl]=size(D*Dc);
Z=inv(eye(kl)+D*Dc);
[kl,kl]=size(Dc*Z*D);EE=eye(kl)-Dc*Z*D;
[kl,kl]=size(Z*D*Dc);FF=eye(kl)-Z*D*Dc;
Acl=[(A-B*Dc*Z*C) (B*EE*Cc);
(-Bc*Z*C) (Ac-Bc*Z*D*Cc)];
Bcl=[(B*EE*Dc);
(Bc*FF)];
Ccl = [(Z*C) (Z*D*Cc)];
Dcl=(Z*D*Dc);
% Step Response
y=step(Acl,Bcl,Ccl,Dcl,1,t);
az = y(:,1);
q = y(:,2);
[Figure: "Problem 5.3 Accel Step Response" — acceleration Az (fps) step response; rise time = 0.39 s, settling time = 0.95 s.]
[Figure: "Problem 5.3 Accel Step Response" — pitch rate q (rps) response to the acceleration step command.]
b) L  KG, K  Cc  sI  Ac  Bc  Dc , G  C  sI  A  B  D
1
1
% Break the loop at the plant input
A_L = [ A 0.*B*Cc; -Bc*C Ac];
B_L = [ B; Bc*D];
C_L = -[ -Dc*C Cc]; % change sign for loop gain
D_L = -[ -Dc*D];
theta = pi:.01:3*pi/2;
x1=cos(theta);y1=sin(theta);
[re,im]=nyquist(A_L,B_L,C_L,D_L,1,w);
L = re + sqrt(-1)*im;
rd_min = min(abs(ones(size(L))+L));
sr_min = min(abs(ones(size(L))+ones(size(L))./L));
[Figure: "Problem 5.3 Nyquist" — Nyquist plot of the loop gain at the plant input (zoomed view).]
[Figure: "Problem 5.3 Nyquist" — Nyquist plot, wider view.]
[Figure: "Problem 5.3 Return Difference I+L" — magnitude (dB) versus frequency; minimum = 0.90921.]
[Figure: "Problem 5.3 Stability Robustness I+INV(L)" — magnitude (dB) versus frequency; minimum = 0.75411.]
The singular value stability margins are computed using Eq. (5.53) and (5.54).
% Compute singular value margins
neg_gm = 20*log10( min([ (1/(1+rd_min)) (1-sr_min)])); % in dB
pos_gm = 20*log10( max([ (1/(1-rd_min)) (1+sr_min)])); % in dB
pm = 180*(min([2*asin(rd_min/2) 2*asin(sr_min/2)]))/pi; % in deg
Min singular value of I+L = 0.90921. Min singular value of I+inv(L) = 0.75411. Singular value gain margins = [-12.1852 dB, +20.8393 dB]. Singular value phase margins = +/-44.3027 deg.
c) From Figure 5.23, we want to model the actuator using an uncertainty model at the plant input: \(\left(I+\Delta_{1}\right)\). Note that we have a negative sign on the summer, which is different from Figure 5.23. In the modified block diagram, the uncertainty block \(\Delta_{1}\) sits between the controller K and the plant G, with its output \(z_{1}\) re-entering the loop through the input \(w_{1}\) at the summing junction. The problem requires us to model the actuator using a first-order transfer function \(\frac{1}{\tau s+1}\). We equate these and solve for the uncertainty:

\[
I+\Delta_{1}=\frac{1}{\tau s+1}
\;\Rightarrow\;
\Delta_{1}=\frac{1}{\tau s+1}-1=\frac{-\tau s}{\tau s+1}
\]
d) Form the M analysis model where the uncertainty is from part c). The M
matrix from (5.70) can be easily modified here (due to the negative feedback). It
is derived as follows:
z1   KG  z1  w1  ,
 I  KG  z1   KGw1 ,
z1    I  KG  KG w1
1
M
M    I  KG  KG
1
e) Here we pick a time constant for the actuator transfer function in part c) and
     . For this problem  I  L1  1
plot 1
, since L is a
 M 
 M 


scalar, and we can reuse the result from b).
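Before producing the figure below, the small gain condition can be checked directly by reusing the loop-gain frequency response L computed in part b); the uncertainty magnitudes are those of Delta1 from part c) (this is a sketch, not part of the original listing):

sr_mag = abs(1 + 1./L);                        % 1/|M(jw)| = |1 + 1/L|
figure; semilogx(w, 20*log10(sr_mag)); hold on;
for tau = [0.005 0.02 0.08]
    d1 = abs((-tau*1i*w) ./ (tau*1i*w + 1));   % |Delta1(jw)| for this time constant
    semilogx(w, 20*log10(d1));
end
legend('1/|M|', 'tau = 0.005', 'tau = 0.02', 'tau = 0.08');
% The condition fails wherever a Delta1 curve rises above the 1/|M| curve.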
[Figure: "Problem 5.3 SGT Actuator Analysis" — 1/|M| (with min |I+inv(L)| = 0.75411) plotted against |Delta1(jw)| for tau = 0.005 s, 0.02 s, and 0.08 s.]
From the figure we see that an actuator with a time constant of 0.08 sec overlaps the 1/|M| bound (red curve), so the small gain condition is no longer satisfied for that case.
Exercise 5.4. Matlab code from example 5.2 can be used here. The Bode plots:
[Figure: "Problem 5.4 Bode" — loop gain magnitude (dB) versus frequency.]
[Figure: "Problem 5.4 Bode" — loop gain phase (deg) versus frequency.]
The Nyquist curve:
[Figure: "Problem 5.4 Nyquist" — Nyquist plot of the loop gain.]
[Figure: "Problem 5.4 Return Difference I+L" — magnitude (dB) versus frequency.]
[Figure: "Problem 5.4 Stability Robustness I+INV(L)" — magnitude (dB) versus frequency; minimum = 0.61774.]
and singular value plots.
Exercise 5.5. Form the closed loop system model Acl using (1.42).
\[
A_{cl}=\begin{bmatrix}
-1.4355 & 0.9246 & 0.0754 & -0.2357\\
-16.3450 & -36.9126 & 36.9126 & -115.3518\\
-3.8189 & -0.2006 & 0.2006 & -0.6270\\
3.6661 & 2.1126 & -2.1126 & 0.6019
\end{bmatrix}
\]
Use (5.72) through (5.81) to isolate the uncertain parameters in \(A_{cl}\) and form the matrices \(E_{i}\) to build \(\left(A_{M},B_{M},C_{M}\right)\).
% Build State space matrices for M(s)
[nxcl,~]=size(Acl);
E1 = 0.*ones(nxcl,nxcl);
E1(:,1) = Acl(:,1);
E1(2,1) = 0.;
E1(3,1) = 0.;
[U1,P1,V1]=svd(E1);
b1=sqrt(P1(1,1))*U1(:,1);
a1=sqrt(P1(1,1))*V1(:,1).';
E2 = 0.*ones(nxcl,nxcl);
E2(:,3) = Acl(:,3);
E2(2,3) = 0.;
E2(3,3) = 0.;
[U2,P2,V2]=svd(E2);
b2=sqrt(P2(1,1))*U2(:,1);
a2=sqrt(P2(1,1))*V2(:,1).';
E3 = 0.*ones(nxcl,nxcl);
E3(2,1) = Acl(2,1);
[U3,P3,V3]=svd(E3);
b3=sqrt(P3(1,1))*U3(:,1);
a3=sqrt(P3(1,1))*V3(:,1).';
E4 = 0.*ones(nxcl,nxcl);
E4(2,3) = Acl(2,3);
[U4,P4,V4]=svd(E4);
b4=sqrt(P4(1,1))*U4(:,1);
a4=sqrt(P4(1,1))*V4(:,1).';
% M(s) = Cm inv(sI - Am) Bm
Am=Acl;
Bm=[b1 b2 b3 b4];
Cm=[a1;a2;a3;a4];
\[
B_{M}=\begin{bmatrix}
-0.7234 & 0.0519 & 0 & 0\\
0 & 0 & 4.0429 & 6.0756\\
0 & 0 & 0 & 0\\
1.8476 & -1.4530 & 0 & 0
\end{bmatrix},\qquad
C_{M}=\begin{bmatrix}
1.9842 & 0 & 0 & 0\\
0 & 0 & 1.4539 & 0\\
-4.0429 & 0 & 0 & 0\\
0 & 0 & 6.0756 & 0
\end{bmatrix}
\]

(The signs of each \(b_{i}\), \(a_{i}\) pair may flip together depending on the SVD sign convention; only the products \(b_{i}a_{i}=E_{i}\) matter.)
 0
Using the model for M  s  compute the structure singular value  and the small
gain theorem bound. Extract the minimum value of 1/  and 1/   M  .
[Figure: "Delta = Diagonal Real Matrix" — 1/mu and 1/sigma(M) in dB versus frequency (rps); min(1/mu) = 0.65242, min(1/sigma(M)) = 0.085765.]
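A possible way to generate the plot above, assuming the Robust Control Toolbox function mussv is available, is to sweep frequency and evaluate both the structured and unstructured bounds:

blk = [-1 0; -1 0; -1 0; -1 0];          % four real scalar uncertainty blocks
wv  = logspace(-1, 2, 200);
mu_ub = zeros(size(wv));  sig = zeros(size(wv));
for ii = 1:numel(wv)
    Mjw = Cm/(1i*wv(ii)*eye(size(Am)) - Am)*Bm;
    b   = mussv(Mjw, blk);               % [upper, lower] bounds on mu
    mu_ub(ii) = b(1);
    sig(ii)   = max(svd(Mjw));
end
semilogx(wv, 20*log10(1./mu_ub), wv, 20*log10(1./sig));
legend('1/\mu (upper bound)', '1/\sigma_{max}(M)');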
Chapter 6
Exercise 6.1. Use file Chapter6_Example6p1_rev.m to solve this problem.
Exercise 6.2. Use file Chapter6_Example6p2.m to solve this problem.
Exercise 6.3. Use file Chapter6_Example6p3.m to solve this problem.
Chapter 7
Exercise 7.2. Substituting the adaptive controller
\[
\delta_{a}=\hat{k}_{p}\,p+\hat{k}_{p_{cmd}}\,p_{cmd}
\]
\[
\dot{\hat{k}}_{p}=-\gamma_{p}\,p\left(p-p_{ref}\right)\mathrm{sgn}\,L_{\delta_{a}},\qquad
\dot{\hat{k}}_{p_{cmd}}=-\gamma_{p_{cmd}}\,p_{cmd}\left(p-p_{ref}\right)\mathrm{sgn}\,L_{\delta_{a}}
\]

and the reference model \(\dot{p}_{ref}=a_{ref}\,p_{ref}+b_{ref}\,p_{cmd}\) into the open-loop roll dynamics \(\dot{p}=L_{p}\,p+L_{\delta_{a}}\delta_{a}\) gives the closed-loop system (7.28). To derive (7.29), use \(p=p_{ref}+e\) in the error dynamics (7.18) and in the adaptive laws (7.24). Since the system unknown parameters are assumed to be constant, the second and third equations in (7.29) immediately follow. The stabilization solution (7.30) is a special case of (7.28) when the external command is set to zero.
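A minimal MATLAB simulation of the resulting closed-loop system (7.28), with made-up values for the plant, reference model, and adaptation rates, illustrates the tracking behavior:

Lp = -1;  Lda = 2;  aref = -4;  bref = 4;     % hypothetical roll dynamics / reference model
gp = 10;  gpc = 10;                            % adaptation rates gamma_p, gamma_pcmd
pcmd = @(t) sin(0.5*t);                        % external roll-rate command
dyn = @(t, x) [ ...
    Lp*x(1) + Lda*(x(3)*x(1) + x(4)*pcmd(t));  % p-dot with delta_a = khat_p*p + khat_pcmd*pcmd
    aref*x(2) + bref*pcmd(t);                  % reference model
   -gp *x(1)    *(x(1) - x(2))*sign(Lda);      % khat_p adaptive law
   -gpc*pcmd(t) *(x(1) - x(2))*sign(Lda) ];    % khat_pcmd adaptive law
[t, x] = ode45(dyn, [0 40], zeros(4,1));
plot(t, x(:,1), t, x(:,2), '--'); legend('p', 'p_{ref}');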
Chapter 8
Exercise 8.2. The trajectories of \(\dot{x}=-\mathrm{sgn}\,x\), starting at \(x\left(t_{0}\right)=x_{0}\), can be derived by direct integration of the system dynamics on time intervals where the sign of x(t) remains constant: \(x\left(t\right)=x_{0}-\int_{t_{0}}^{t}\mathrm{sgn}\,x\left(\tau\right)d\tau=x_{0}-\left(t-t_{0}\right)\mathrm{sgn}\,x\left(t\right)\). Suppose that \(x\left(T_{0}\right)=0\) for some \(T_{0}\). Then equating the left-hand side of the trajectory equation to zero gives \(x_{0}=\left(T_{0}-t_{0}\right)\mathrm{sgn}\,x\left(T_{0}\right)\). Multiplying both sides by \(\mathrm{sgn}\,x\left(T_{0}\right)\) results in \(\left|x_{0}\right|=T_{0}-t_{0}\), and so \(T_{0}=t_{0}+\left|x_{0}\right|\), which implies that any two solutions with the same value of \(\left|x_{0}\right|\) reach zero at the same time \(T_{0}\). Finally, the derivative \(\dot{x}\) is discontinuous at any time instant T when \(x\left(T\right)=0\), and thus the solutions are not continuously differentiable.
Exercise 8.3. Rewrite the system dynamics \(\dot{x}=-x^{3}\) as \(-\int_{0}^{t}\frac{\dot{x}}{x^{3}}d\tau=\int_{0}^{t}d\tau\). Since

\[
-\int_{0}^{t}\frac{\dot{x}}{x^{3}}d\tau=-\int_{x_{0}}^{x\left(t\right)}\frac{dx}{x^{3}}=\frac{1}{2x^{2}\left(t\right)}-\frac{1}{2x_{0}^{2}}=t
\]

then

\[
x\left(t\right)=\frac{x_{0}}{\sqrt{2x_{0}^{2}t+1}}
\]

is the system's unique trajectory with the initial condition \(x\left(0\right)=x_{0}\). The system phase portrait is easy to draw: it is the graph of the monotonically strictly decreasing cubic polynomial \(\dot{x}=f\left(x\right)=-x^{3}\). These dynamics are locally Lipschitz. Indeed, since

\[
\left|f\left(x\right)-f\left(y\right)\right|=\left|x^{3}-y^{3}\right|=\left|x^{2}+xy+y^{2}\right|\left|x-y\right|
\]

then \(\left|f\left(x\right)-f\left(y\right)\right|\le L\left|x-y\right|\) for any \(\left(x,y\right)\) with \(\left|x^{2}+xy+y^{2}\right|\le L\), where L > 0 is a finite positive constant; that is, the system dynamics are locally Lipschitz. However, the dynamics are not globally Lipschitz since \(f'\left(x\right)=-3x^{2}\) is unbounded.
Exercise 8.4. Suppose that the scalar autonomous system \(\dot{x}=f\left(x\right)\) has a unique non-monotonic trajectory x(t). Then there must exist a finite time instant \(t_{1}\) where \(\dot{x}\left(t_{1}\right)=f\left(x\left(t_{1}\right)\right)=0\). Therefore, for all \(t\ge t_{1}\), x(t) is constant, which in turn contradicts the assumption of the trajectory being non-monotonic.
Exercise 8.5. If \(\det A\ne0\) then the system \(\dot{x}=Ax\) has the unique isolated equilibrium \(\left(Ax=0\Leftrightarrow x=0\right)\). Otherwise, the equilibrium set of the system is defined by the linear manifold \(Ax=0\), implying an infinite number of equilibria.
Exercise 8.6. Trajectories of the scalar non-autonomous system \(\dot{x}=a\left(t\right)x\) with the initial condition \(x\left(t_{0}\right)=x_{0}\) can be written as \(x\left(t\right)=\exp\left(\int_{t_{0}}^{t}a\left(\tau\right)d\tau\right)x_{0}\). The system equilibrium point is 0. Suppose that a(t) is continuous in time. In order to prove stability in the sense of Lyapunov, one needs to show that for any \(\varepsilon>0\) there exists \(\delta\left(\varepsilon,t_{0}\right)\) such that \(\left|x\left(t_{0}\right)\right|<\delta\Rightarrow\left|x\left(t\right)\right|<\varepsilon\) for all \(t\ge t_{0}\). Based on the explicit form of the solution, it is easy to see that boundedness of the system trajectories is assured if and only if

\[
M\left(t_{0}\right)=\sup_{t\ge t_{0}}\exp\left(\int_{t_{0}}^{t}a\left(\tau\right)d\tau\right)<\infty
\]

In this case, it is sufficient to select \(\delta\left(\varepsilon,t_{0}\right)=\frac{\varepsilon}{M\left(t_{0}\right)}\). If \(\lim_{t\to\infty}\int_{t_{0}}^{t}a\left(\tau\right)d\tau=-\infty\) then asymptotic stability takes place. If, in addition, \(M\left(t_{0}\right)\) is bounded as a function of \(t_{0}\), then \(\delta\left(\varepsilon,t_{0}\right)=\delta\left(\varepsilon\right)\) can be chosen independent of \(t_{0}\) and the asymptotic stability becomes uniform.
Exercise 8.7. Since \(g\left(y\right)\ge1\), then

\[
V\left(x\right)=\int_{0}^{x_{1}}y\,g\left(y\right)dy+x_{1}x_{2}+x_{2}^{2}
\ge\int_{0}^{x_{1}}y\,dy+x_{1}x_{2}+x_{2}^{2}
=\frac{x_{1}^{2}}{2}+x_{1}x_{2}+x_{2}^{2}
=\frac{\left(x_{1}+x_{2}\right)^{2}}{2}+\frac{x_{2}^{2}}{2}>0,\quad\forall\left(x_{1},x_{2}\right)\ne0
\]

This proves global positive definiteness of V(x). Clearly \(\lim_{\left\|x\right\|\to\infty}V\left(x\right)=\infty\), and so V(x) is also radially unbounded. Differentiating V(x) along the system trajectories (\(\dot{x}_{1}=x_{2}\), \(\dot{x}_{2}=-g\left(x_{1}\right)\left(x_{1}+x_{2}\right)\)) gives:

\[
\dot{V}\left(x\right)=\left(x_{1}g\left(x_{1}\right)+x_{2}\right)\dot{x}_{1}+\left(x_{1}+2x_{2}\right)\dot{x}_{2}
=\left(x_{1}g\left(x_{1}\right)+x_{2}\right)x_{2}-\left(x_{1}+2x_{2}\right)g\left(x_{1}\right)\left(x_{1}+x_{2}\right)
\]
\[
=x_{1}x_{2}g\left(x_{1}\right)+x_{2}^{2}-x_{1}^{2}g\left(x_{1}\right)-x_{1}x_{2}g\left(x_{1}\right)-2x_{1}x_{2}g\left(x_{1}\right)-2x_{2}^{2}g\left(x_{1}\right)
\]
\[
=x_{2}^{2}-x_{2}^{2}g\left(x_{1}\right)-\left(x_{1}^{2}+2x_{1}x_{2}+x_{2}^{2}\right)g\left(x_{1}\right)
\le-\left(x_{1}+x_{2}\right)^{2}\le0
\]

using \(g\left(x_{1}\right)\ge1\). So V(x) is a Lyapunov function and the origin is globally uniformly stable. To prove asymptotic stability, consider the set of points \(E=\left\{x_{1}=-x_{2}\right\}\) where \(\dot{V}\left(x\right)=0\) is possible. On this set \(\dot{x}_{1}=x_{2}=-x_{1}\). If a trajectory gets "trapped" in E then x tends to 0. Outside of the set, \(\dot{V}\left(x\right)<0\), and consequently trajectories approach E and then go to the origin. These dynamics can also be explained using LaSalle's Invariance Principle [2] for autonomous systems. The principle states that the system trajectories will approach the largest invariant set in E. Here, this set contains only one point: the origin. The overall argument proves global uniform asymptotic stability of the system equilibrium.
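The argument can be visualized with a short simulation of the dynamics used in the derivation above, taking a hypothetical g(x1) = 1 + x1^2 >= 1:

g   = @(x1) 1 + x1.^2;                              % any g >= 1 works
dyn = @(t, x) [x(2); -g(x(1))*(x(1) + x(2))];
V   = @(x1, x2) x1.^2/2 + x1.^4/4 + x1.*x2 + x2.^2; % int_0^x1 y*g(y)dy + x1*x2 + x2^2
[t, x] = ode45(dyn, [0 20], [2; -1]);
plot(x(:,1), x(:,2));                               % phase portrait: trajectory ends at the origin
all(diff(V(x(:,1), x(:,2))) <= 1e-6)                % V is non-increasing (up to integration error)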
Exercise 8.8. Due to the presence of the sign function, the system dynamics are
discontinuous in x and thus, sufficient conditions for existence and uniqueness do
not apply. However, the system solutions can be defined in the sense of Filippov
[2]. This leads to the notion of "sliding modes" in dynamical systems. Consider the manifold \(s\left(x\right)=x_{1}+x_{2}\), whose dynamics along the system trajectories are:

\[
\dot{s}\left(x\right)=\dot{x}_{1}+\dot{x}_{2}=-\mathrm{sgn}\left(x_{1}+x_{2}\right)=-\mathrm{sgn}\,s\left(x\right)
\]

In Exercise 8.2, the same exact dynamics were considered. It was shown that, starting from any initial conditions, the trajectories \(s\left(x\left(t\right)\right)\) reach zero in finite time. In terms of the original system, this means that the system state x(t) reaches the linear manifold \(s\left(x\left(t\right)\right)=0\) in finite time. On the manifold \(x_{1}\left(t\right)=-x_{2}\left(t\right)\), and so \(\dot{x}_{1}=-x_{1}\), which implies that after reaching the manifold the trajectories "slide" down to the origin.
Exercise 8.9. Let \(f\left(x\right):R\to R\) represent a scalar continuously differentiable function whose derivative is bounded: \(\left|f'\left(x\right)\right|\le C\). Then for any \(x_{1}\) and \(x_{2}\) there exists \(\xi\in\left(x_{1},x_{2}\right)\) such that

\[
\left|f\left(x_{1}\right)-f\left(x_{2}\right)\right|=\left|f'\left(\xi\right)\right|\left|x_{1}-x_{2}\right|\le C\left|x_{1}-x_{2}\right|
\]

and so \(\left|f\left(x_{1}\right)-f\left(x_{2}\right)\right|<\varepsilon\) for any \(\left|x_{1}-x_{2}\right|<\delta=\frac{\varepsilon}{C}\). This proves uniform continuity of f(x) on R.
We now turn our attention to proving Corollary 8.1, where by assumption a scalar function f(t) is twice continuously differentiable with bounded second derivative \(\ddot{f}\left(t\right)\). Therefore, the first derivative \(\dot{f}\left(t\right)\) is uniformly continuous. In addition, it is assumed that f(t) has a finite limit as \(t\to\infty\). Then, due to Barbalat's Lemma, \(\dot{f}\left(t\right)\) tends to zero, which proves the corollary.
Exercise 8.10. Substituting \(u=\hat{K}^{T}\left(t\right)x+K_{r}r\left(t\right)\) into the system dynamics gives

\[
\dot{x}=Ax+b\left(\Delta K^{T}x+K_{r}r\left(t\right)\right)
\]

where A is Hurwitz and \(\Delta K=\hat{K}-K_{x}\) is the feedback gain estimation error. The desired reference dynamics are \(\dot{x}_{ref}=Ax_{ref}+bK_{r}r\). Subtracting the latter from the system dynamics results in \(\dot{e}=Ae+b\Delta K^{T}x\), with the state tracking error \(e=x-x_{ref}\). Also, the feedback gain adaptation can be written as \(\dot{\hat{K}}=-\Gamma_{x}xe^{T}Pb\). For the closed-loop error system

\[
\dot{e}=Ae+b\Delta K^{T}x,\qquad
\Delta\dot{K}=\dot{\hat{K}}=-\Gamma_{x}xe^{T}Pb
\]

consider the Lyapunov function candidate \(V\left(e,\Delta K\right)=e^{T}Pe+\Delta K^{T}\Gamma_{x}^{-1}\Delta K\). Clearly, this function is positive definite and radially unbounded. Its derivative along the closed-loop error dynamics is

\[
\dot{V}\left(e,\Delta K\right)=\dot{e}^{T}Pe+e^{T}P\dot{e}+2\Delta K^{T}\Gamma_{x}^{-1}\Delta\dot{K}
=\left(Ae+b\Delta K^{T}x\right)^{T}Pe+e^{T}P\left(Ae+b\Delta K^{T}x\right)-2\Delta K^{T}xe^{T}Pb
\]
\[
=e^{T}\underbrace{\left(A^{T}P+PA\right)}_{-Q}e=-e^{T}Qe\le0
\]

This proves: a) uniform stability of the system error dynamics; and b) uniform boundedness of the system errors, that is \(\left(e,\Delta K\right)\in L_{\infty}\). Since the external input r(t) is bounded, \(x_{ref}\in L_{\infty}\), and so is \(x=\left(x_{ref}+e\right)\in L_{\infty}\). Then \(u\in L_{\infty}\) and, consequently, the error and state derivatives are uniformly bounded as well, \(\left(\dot{e},\dot{x}\right)\in L_{\infty}\). Since \(\ddot{V}\left(e,\Delta K\right)=-2e^{T}Q\dot{e}\in L_{\infty}\), \(\dot{V}\left(e\left(t\right),\Delta K\left(t\right)\right)\) is a uniformly continuous function of time. Note that V is lower bounded and its time derivative is non-positive; hence V tends to a limit as a function of time. Using Barbalat's Lemma, we conclude that \(\dot{V}\left(e\left(t\right),\Delta K\left(t\right)\right)=-e^{T}Qe\to0\) as \(t\to\infty\), which is equivalent to \(e\left(t\right)\to0\). The latter proves bounded asymptotic command tracking by the controller, that is \(x\left(t\right)\to x_{ref}\left(t\right)\) for any bounded external command r(t). Consequently, \(Cx=y\left(t\right)\to y_{ref}\left(t\right)=Cx_{ref}\left(t\right)\). Finally, if \(K_{r}=-\left(CA^{-1}b\right)^{-1}\) then the DC gain of the reference model is unity. In this case, both the system state x and the reference model state \(x_{ref}\) will asymptotically track any external bounded command r with zero steady-state errors.
Chapter 9
Exercise 9.1. Let some of the diagonal elements \(\lambda_{i}\) of \(\Lambda\) in the system dynamics (9.40) be negative and assume that their signs are known. Consider the modified Lyapunov function (9.43),

\[
V\left(e,\Delta K_{x},\Delta K_{r},\Delta\Theta\right)=e^{T}Pe+\mathrm{tr}\left(\left(\Delta K_{x}^{T}\Gamma_{x}^{-1}\Delta K_{x}+\Delta K_{r}^{T}\Gamma_{r}^{-1}\Delta K_{r}+\Delta\Theta^{T}\Gamma_{\Theta}^{-1}\Delta\Theta\right)\left|\Lambda\right|\right)
\]

where \(\left|\Lambda\right|\) denotes the diagonal matrix with positive elements \(\left|\lambda_{i}\right|\). It is easy to see that this function is positive definite and radially unbounded. Let \(\mathrm{sgn}\,\Lambda=\mathrm{diag}\left(\mathrm{sgn}\,\lambda_{1},\ldots,\mathrm{sgn}\,\lambda_{m}\right)\). Then \(\Lambda=\left|\Lambda\right|\mathrm{sgn}\,\Lambda\). Repeating derivations (9.55) through (9.58) with \(B\,\mathrm{sgn}\,\Lambda\) in place of B gives the modified adaptive laws.
Chapter 10
Exercise 10.1. We need to prove that \(G_{ref}\left(0\right)=-CA_{ref}^{-1}B_{ref}=I_{m\times m}\), with a Hurwitz matrix \(A_{ref}=A-B\Lambda K_{x}^{T}\) and

\[
A=\begin{bmatrix}0_{m\times m}&C_{p}\\0_{n_{p}\times m}&A_{p}\end{bmatrix},\quad
B=\begin{bmatrix}0_{m\times m}\\B_{p}\end{bmatrix},\quad
B_{ref}=\begin{bmatrix}-I_{m\times m}\\0_{n_{p}\times m}\end{bmatrix},\quad
C=\begin{bmatrix}0_{m\times m}&C_{p}\end{bmatrix}
\]

So \(A_{ref}=\begin{bmatrix}0_{m\times m}&C_{p}\\ \times&\times\end{bmatrix}\), where "\(\times\)" indicates "do not care" elements. Denote

\[
M=A_{ref}^{-1}B_{ref}=\begin{bmatrix}M_{1}\\M_{2}\end{bmatrix}
\]

where \(M_{1}\), \(M_{2}\) are of the corresponding dimensions. Then

\[
\begin{bmatrix}-I_{m\times m}\\0_{n_{p}\times m}\end{bmatrix}=B_{ref}=A_{ref}M=
\begin{bmatrix}0_{m\times m}&C_{p}\\ \times&\times\end{bmatrix}\begin{bmatrix}M_{1}\\M_{2}\end{bmatrix}=
\begin{bmatrix}C_{p}M_{2}\\ \times\end{bmatrix}
\]

and thus \(C_{p}M_{2}=-I_{m\times m}\). Finally,

\[
G_{ref}\left(0\right)=-CA_{ref}^{-1}B_{ref}=-\begin{bmatrix}0_{m\times m}&C_{p}\end{bmatrix}\begin{bmatrix}M_{1}\\M_{2}\end{bmatrix}=-C_{p}M_{2}=I_{m\times m}
\]

and so the reference model DC gain is unity.
Exercise 10.2. Starting with the adaptive controller \(u=\hat{K}_{x}^{T}x-\hat{\Theta}^{T}\Phi\left(x_{p}\right)\), the adaptive law \(\dot{\hat{K}}_{x}=-\Gamma_{x}xe^{T}PB\) is initialized at the baseline gain values \(\hat{K}_{x}\left(0\right)=-K_{x}\). Then

\[
\hat{K}_{x}=-K_{x}+\underbrace{\left(-\Gamma_{x}\int_{0}^{t}x\left(\tau\right)e^{T}\left(\tau\right)d\tau\,PB\right)}_{\Delta\hat{K}_{x}}=-K_{x}+\Delta\hat{K}_{x}
\]

where \(\Delta\hat{K}_{x}\) represents an adaptive incremental gain whose adaptive law dynamics are the same as for the original adaptive gain \(\hat{K}_{x}\). The resulting total control input,

\[
u=\hat{K}_{x}^{T}x-\hat{\Theta}^{T}\Phi\left(x_{p}\right)=\left(-K_{x}+\Delta\hat{K}_{x}\right)^{T}x-\hat{\Theta}^{T}\Phi\left(x_{p}\right)
=\underbrace{-K_{x}^{T}x}_{u_{bl}}+\underbrace{\left(\Delta\hat{K}_{x}^{T}x-\hat{\Theta}^{T}\Phi\left(x_{p}\right)\right)}_{u_{ad}}
\]

represents an adaptive augmentation of the baseline linear controller \(u_{bl}=-K_{x}^{T}x\).
Exercise 10.3. Consider the error dynamics \(\dot{e}=A_{ref}e+B\Lambda\Delta\Theta^{T}\Phi\left(u_{bl},x_{p}\right)\) and assume that the extended regressor \(\Phi\left(u_{bl},x_{p}\right)\) is continuously differentiable in its arguments. Then

\[
\ddot{e}=A_{ref}\dot{e}+B\Lambda\left(\Delta\dot{\Theta}^{T}\Phi\left(u_{bl},x_{p}\right)+\Delta\Theta^{T}\frac{\partial\Phi\left(u_{bl},x_{p}\right)}{\partial u_{bl}}\dot{u}_{bl}+\Delta\Theta^{T}\frac{\partial\Phi\left(u_{bl},x_{p}\right)}{\partial x_{p}}\dot{x}_{p}\right)
\]
\[
=A_{ref}\dot{e}+B\Lambda\left(-\left(\Gamma_{\Theta}\Phi e^{T}P_{ref}B\right)^{T}\Phi-\Delta\Theta^{T}\frac{\partial\Phi}{\partial u_{bl}}K_{x}^{T}\dot{x}+\Delta\Theta^{T}\frac{\partial\Phi}{\partial x_{p}}\dot{x}_{p}\right)
\]

The first term on the right-hand side of the equation is uniformly bounded. Also, all the functions in the second term are uniformly bounded. Therefore \(\ddot{e}\left(t\right)\in L_{\infty}\) and \(\dot{e}\left(t\right)\) is a uniformly continuous function of time. Since, in addition, e(t) tends to zero, using Barbalat's Lemma we conclude that \(\dot{e}\left(t\right)\) tends to zero as well. Then the error dynamics imply: \(\lim_{t\to\infty}\Delta\Theta^{T}\left(t\right)\Phi\left(u_{bl}\left(t\right),x\left(t\right)\right)=0\).
Chapter 11
Exercise 11.1. The Projection Operator, as defined in (11.37), is continuous and
differentiable in its arguments. Consequently, it is locally Lipschitz.
Exercise 11.3. Since

\[
\left(\theta-\theta^{*}\right)^{T}\left(\mathrm{Proj}\left(\theta,y\right)-y\right)=\sum_{i}\left(\theta_{i}-\theta_{i}^{*}\right)\left(\mathrm{Proj}_{i}\left(\theta,y\right)-y_{i}\right)
\]

it is sufficient to show that \(\left(\theta_{i}-\theta_{i}^{*}\right)\left(\mathrm{Proj}_{i}\left(\theta,y\right)-y_{i}\right)\le0\). By the assumption, \(\theta_{i}^{\min}\le\theta_{i}^{*}\le\theta_{i}^{\max}\). The component-wise projection modifies \(y_{i}\) only in two cases: when \(\theta_{i}>\theta_{i}^{\max}\) and \(y_{i}>0\), and when \(\theta_{i}<\theta_{i}^{\min}\) and \(y_{i}<0\); otherwise \(\mathrm{Proj}_{i}\left(\theta,y\right)=y_{i}\) and the product is zero. In the first case \(\theta_{i}>\theta_{i}^{\max}\ge\theta_{i}^{*}\), so \(\theta_{i}-\theta_{i}^{*}>0\), while the projection scales the positive \(y_{i}\) down, so \(\mathrm{Proj}_{i}\left(\theta,y\right)-y_{i}\le0\) and the product is non-positive. In the second case \(\theta_{i}<\theta_{i}^{\min}\le\theta_{i}^{*}\), so \(\theta_{i}-\theta_{i}^{*}<0\), while the projection scales the negative \(y_{i}\) toward zero, so \(\mathrm{Proj}_{i}\left(\theta,y\right)-y_{i}\ge0\), and again the product is non-positive. Hence

\[
\left(\theta_{i}-\theta_{i}^{*}\right)\left(\mathrm{Proj}_{i}\left(\theta,y\right)-y_{i}\right)\le0\quad\text{in all cases.}
\]
The above inequality allows one to use the adaptive laws (11.53) and carry out the
proof of the system UUB properties, while repeating the exact same arguments as
in Section 11.5.
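The component-wise inequality can also be spot-checked numerically. The sketch below implements one common component-wise projection with an epsilon boundary layer (this specific formula is an assumption, not necessarily the exact expression (11.53)) and tests the sign condition on random samples:

th_min = -1;  th_max = 1;  eps0 = 0.1;              % hypothetical bounds and boundary layer
proj = @(th, y) y .* (1 - max(0, min(1, ...
        max((th - th_max)/eps0 .* (y > 0), (th_min - th)/eps0 .* (y < 0)))));
ok = true;
for k = 1:1e4
    th  = (th_max + eps0)*(2*rand(3,1) - 1);        % samples inside and slightly outside the set
    ths = th_min + (th_max - th_min)*rand(3,1);     % theta-star inside [th_min, th_max]
    y   = randn(3,1);
    ok  = ok && (th - ths)'*(proj(th, y) - y) <= 1e-12;
end
ok                                                   % true: the inequality holds on all samples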
Chapter 12
Exercise 12.2. There are several ways to embed a baseline linear controller into
the overall design and then turn the adaptive system from Table 12.1 to act as an
augmentation to the selected baseline controller. Our preferred way to perform
such a design starts with the assumed open-loop system linear dynamics without
the uncertainties. A robust linear feedback controller ubl   K x x  Kr r can be
designed for the linear system. Then the corresponding closed-loop linear system
dynamics become the reference model for the adaptive system to achieve and
maintain in the presence of uncertainties. So in (12.14), the reference model matrices  Aref , Bref  are defined as the closed-loop linear system matrices achieved
under the selected linear baseline controller. The total control input is defined similar to (12.16), as the sum of the baseline and adaptive components,

\[
u=u_{bl}+u_{ad}=\underbrace{-K_{x}^{T}x}_{u_{bl}}+\underbrace{u_{x}+u_{r}+u_{sl}}_{u_{ad}}
\]

where \(u_{x}\) is the state-dependent adaptive term (built from \(\hat{\Theta}^{T}\Phi\left(x\right)\)), \(u_{r}=\hat{K}_{r}^{T}r\) is the adaptive feedforward on the external command, and \(u_{sl}\) is the robustifying term from (12.16). Repeating the derivations from Section 10.3, the open-loop system with the embedded baseline controller becomes very similar to (10.47)-(10.48). In this case, the adaptive component will depend on the baseline controller, as shown in (10.51). Starting from (12.24) and repeating the same design steps will result in adaptive laws similar to (12.37). The only difference here will be in the addition of adaptive dynamics on the baseline gains, as shown in (10.66).

Alternatively, one can choose to initialize the adaptive gains \(\left(\hat{K}_{x},\hat{K}_{r}\right)\) at the corresponding gain values \(\left(K_{x},K_{r}\right)\) of the selected baseline controller, and then use
the arguments from Exercise 10.2 to justify the design. However, in practical applications this approach may result in unnecessary numerical complications if the
gains of the baseline controller are scheduled to depend on the system slowly
time-varying parameters.
Exercise 12.4. First, set \(\Lambda=I_{m\times m}\), \(f\left(x_{p}\right)=0_{m\times1}\), and the remaining \(n\times1\) uncertainty term to zero, and design a baseline linear feedback controller in the form \(u_{bl}=-K_{x}x\) for the assumed nominal linear open-loop dynamics. Note that, due to the inclusion of the integrated output tracking error in the design, the baseline linear controller has "classical" (Proportional + Integral) feedback connections:

\[
u_{bl}=K_{I}\frac{y_{cmd}-y}{s}-K_{p}x_{p}
\]

Second, form the reference system to represent the closed-loop baseline linear dynamics with the embedded baseline linear controller:

\[
\dot{x}_{ref}=\underbrace{\left(A-BK_{x}^{T}\right)}_{A_{ref}}x_{ref}+\underbrace{\begin{bmatrix}-I_{m\times m}\\0_{\left(n-m\right)\times m}\end{bmatrix}}_{B_{ref}}y_{cmd}
\]

Third, define the total control input without the adaptive component on the external command:

\[
u=u_{bl}+u_{ad}=\underbrace{-K_{x}^{T}x}_{u_{bl}}+\underbrace{u_{x}+u_{sl}}_{u_{ad}}
\]

where, as in Exercise 12.2, \(u_{x}\) is the state-dependent adaptive term and \(u_{sl}\) is the robustifying term, with no adaptive feedforward on \(y_{cmd}\). Finally, follow the rationale from Exercise 12.2 and repeat the design steps from Section 12.4, starting with the modified controller definition.
Chapter 13
Exercise 13.1. Differentiating the error dynamics (13.5) gives

\[
\ddot{e}=a_{ref}\dot{e}+b\left(\Delta\dot{k}_{x}x+\Delta k_{x}\dot{x}+\Delta\dot{k}_{r}r+\Delta k_{r}\dot{r}\right)
=a_{ref}\dot{e}+b\left(-\gamma_{x}ex^{2}+\Delta k_{x}\dot{x}-\gamma_{r}er^{2}+\Delta k_{r}\dot{r}\right)
\]

If \(\dot{r}\left(t\right)\) is uniformly bounded in time \(\left(\dot{r}\in L_{\infty}\right)\) then \(\ddot{e}\in L_{\infty}\), since it has already been proven that all signals on the right-hand side of the above equation are uniformly bounded. Therefore, \(\dot{e}\left(t\right)\) is a uniformly continuous function of time. Also, it was proven that \(\lim_{t\to\infty}e\left(t\right)=0\). Using Barbalat's Lemma implies \(\lim_{t\to\infty}\dot{e}\left(t\right)=0\), and so the signal \(\varepsilon\left(t\right)\) in the error dynamics (13.5),

\[
\dot{e}=a_{ref}e+\underbrace{b\left(\Delta k_{x}x+\Delta k_{r}r\right)}_{\varepsilon\left(t\right)}
\]

must asymptotically tend to zero.
Exercise 13.2. Consider the system (13.31),

\[
\dot{z}=Az+\delta f\left(z,t,\varepsilon\right)
\]

with Hurwitz A. The system solution is:

\[
z\left(t\right)=e^{At}z\left(0\right)+\int_{0}^{t}e^{A\left(t-\tau\right)}\delta f\left(z\left(\tau\right),\tau,\varepsilon\right)d\tau
\]

From an engineering perspective, these dynamics can be viewed as a linear stable filter with \(\delta f\left(z,t,\varepsilon\right)\) as the input. So, if the latter decays to zero then the state of
the filter will do the same. This argument can also be proven formally using the
control-theoretic arguments from the given reference.
Exercise 13.3. Use the results from Exercise 13.1. In this case, the error dynamics (13.101) do not explicitly depend on the external command \(y_{cmd}\left(t\right)\). Without assuming continuity or even differentiability of the latter, one can differentiate the error dynamics (13.101), show that \(\ddot{e}\in L_{\infty}\), and then repeat the arguments from Exercise 13.1 to prove that \(\lim_{t\to\infty}\varepsilon\left(t\right)=0\).
Chapter 14
Exercise 14.1. Suppose that all states are accessible, i.e., \(C=I_{n\times n}\). Disregarding the Projection Operator, the output feedback (baseline + adaptive) control input, the adaptive laws, and the corresponding reference model (aka the state observer) become (see Table 14.1):

\[
u=u_{bl}-\hat{\Theta}^{T}\Phi\left(\hat{x},u_{bl}\right)
\]
\[
\dot{\hat{\Theta}}=\mathrm{Proj}\left(\hat{\Theta},\;\Gamma_{\Theta}\Phi\left(\hat{x},u_{bl}\right)\left(x-\hat{x}\right)^{T}R_{0}^{-\frac{1}{2}}WS^{T}\right)
\]
\[
\dot{\hat{x}}=A_{ref}\hat{x}+L_{v}\left(x-\hat{x}\right)+B_{ref}z_{cmd}
\]

At the same time, a state feedback MRAC system can be written as (see Table 10.2):

\[
u=u_{bl}-\hat{\Theta}^{T}\Phi\left(x,u_{bl}\right)
\]
\[
\dot{\hat{\Theta}}=\mathrm{Proj}\left(\hat{\Theta},\;\Gamma_{\Theta}\Phi\left(x,u_{bl}\right)\left(x-x_{ref}\right)^{T}PB\right)
\]
\[
\dot{x}_{ref}=A_{ref}x_{ref}+B_{ref}z_{cmd}
\]

Clearly, the main difference between the two systems is the use of the state observer in the former, as opposed to the "ideal" reference model in the latter. Another difference consists of employing the estimated / filtered state feedback in the observer-based adaptive system versus the original system state feedback in MRAC. As was formally shown in Chapters 13 and 14, these two design features give the observer-based adaptive system a distinct advantage over classical MRAC: the ability to suppress undesirable transients.
Exercise 14.2. Instead of using (14.14), consider the original dynamics (14.7),

\[
\dot{x}=Ax+B\Lambda\left(u+\Theta^{T}\Phi\left(x\right)\right)+B_{ref}z_{cmd},\qquad
y=Cx,\qquad z=C_{z}x
\]

and the state observer,

\[
\dot{\hat{x}}=A\hat{x}+B\hat{\Lambda}\left(u+\hat{\Theta}^{T}\Phi\left(\hat{x}\right)\right)+L_{v}\left(y-\hat{y}\right)+B_{ref}z_{cmd},\qquad
\hat{y}=C\hat{x}
\]

where \(\hat{\Lambda}\in R^{m\times m}\) and \(\hat{\Theta}\in R^{N\times m}\) are the parameters to be estimated. Choosing the control input

\[
u=-\hat{\Theta}^{T}\Phi\left(\hat{x}\right)
\]

gives the linear state observer dynamics

\[
\dot{\hat{x}}=A\hat{x}+L_{v}\left(y-\hat{y}\right)+B_{ref}z_{cmd},\qquad
\hat{y}=C\hat{x}
\]

Subtracting the observer from the system results in the observer error dynamics

\[
\dot{e}_{x}=\left(A-L_{v}C\right)e_{x}+B\Lambda\left(\hat{\Theta}^{T}\Phi\left(\hat{x}\right)-\Theta^{T}\Phi\left(x\right)\right)
\]

which are very similar to (14.24). Repeating all the design steps after (14.24) will result in the pure (no baseline included) adaptive laws akin to (14.47):

\[
\dot{\hat{\Theta}}=\mathrm{Proj}\left(\hat{\Theta},\;-\Gamma_{\Theta}\Phi\left(\hat{x}\right)e_{y}^{T}R_{0}^{-\frac{1}{2}}WS^{T}\right)
\]

with \(e_{y}=\hat{y}-y=C\left(\hat{x}-x\right)=Ce_{x}\). The rest of the design steps mirror the ones in Section 14.4.
Exercise 14.3. For the open-loop system,

\[
\dot{x}=Ax+B\Lambda\left(u+K^{T}x\right),\qquad
y=Cx,\qquad z=C_{z}x
\]

with an unknown strictly positive definite diagonal matrix \(\Lambda\ne I_{m\times m}\), there exists a control input in the form

\[
u=-\left(K+K_{x}\right)^{T}x+K_{z}^{T}z
\]

with unknown gains \(\left(K,K_{x}\right)\in R^{n\times m}\) and \(K_{z}\in R^{m\times m}\) that would have resulted in the desired closed-loop stable reference dynamics,

\[
\dot{x}=\underbrace{\left(A-B\Lambda K_{x}^{T}\right)}_{A_{ref}}x+\underbrace{B\Lambda K_{z}^{T}}_{B_{ref}}z
\]

where \(\left(A_{ref},B_{ref}\right)\) are the two known matrices that define the desired closed-loop dynamics. This is an existence-type statement. Its validity is predicated on the fact that the pair \(\left(A,B\right)\) is controllable and the two desired matrices are chosen to satisfy the Matching Conditions (MC): \(A_{ref}=A-B\Lambda K_{x}^{T}\), \(B_{ref}=B\Lambda K_{z}^{T}\). The MC impose restrictions on the selection of achievable dynamics for the original system, which in turn can be rewritten as:

\[
\dot{x}=A_{ref}x+B\Lambda\left(u+\underbrace{\begin{bmatrix}\left(K+K_{x}\right)^{T}&-K_{z}^{T}\end{bmatrix}}_{\Theta^{T}}\underbrace{\begin{bmatrix}x\\z\end{bmatrix}}_{\Phi\left(x,z\right)}\right)+B_{ref}z
=A_{ref}x+B\Lambda\left(u+\Theta^{T}\Phi\left(x,z\right)\right)+B_{ref}z
\]

where \(\Theta=\begin{bmatrix}K+K_{x}\\-K_{z}\end{bmatrix}\in R^{\left(n+m\right)\times m}\) denotes the aggregated set of unknown parameters in the system. With that in mind, consider the state observer

\[
\dot{\hat{x}}=A_{ref}\hat{x}+B\hat{\Lambda}\left(u+\hat{\Theta}^{T}\Phi\left(\hat{x},z\right)\right)+L_{v}\left(y-\hat{y}\right),\qquad
\hat{y}=C\hat{x}
\]

where \(\hat{\Theta}\in R^{\left(n+m\right)\times m}\) is the matrix of to-be-estimated parameters. Choosing

\[
u=-\hat{\Theta}^{T}\Phi\left(\hat{x},z\right)
\]

yields the linear time-invariant state observer dynamics

\[
\dot{\hat{x}}=A_{ref}\hat{x}+L_{v}\left(y-\hat{y}\right)+B_{ref}z,\qquad
\hat{y}=C\hat{x}
\]

With the state and output errors defined as in (14.23), \(e_{x}=\hat{x}-x\) and \(e_{y}=\hat{y}-y=C\left(\hat{x}-x\right)=Ce_{x}\), the observer error dynamics

\[
\dot{e}_{x}=\left(A_{ref}-L_{v}C\right)e_{x}+B\Lambda\left(\hat{\Theta}^{T}\Phi\left(\hat{x},z\right)-\Theta^{T}\Phi\left(x,z\right)\right)
\]

are in the form of (14.24), with z in place of \(u_{bl}\) and \(A_{ref}\) instead of A. Repeating the arguments starting from (14.24) will give stable output feedback observer-based adaptive laws in the form of (14.47),

\[
\dot{\hat{\Theta}}=\mathrm{Proj}\left(\hat{\Theta},\;-\Gamma_{\Theta}\Phi\left(\hat{x},z\right)e_{y}^{T}R_{0}^{-\frac{1}{2}}WS^{T}\right)
\]

along with the associated proofs of stability and bounded tracking performance.