SOLUTIONS MANUAL
for
A LINEAR SYSTEMS PRIMER
P.J. Antsaklis and A.N. Michel
Prepared by
Panos J. Antsaklis
Anthony N. Michel
© 2007 Birkhäuser Boston
CONTENTS
Preface
Chapter 1 Solutions: Exercises 1.1–1.14
Chapter 2 Solutions: Exercises 2.1–2.10
Chapter 3 Solutions: Exercises 3.1–3.38
Chapter 4 Solutions: Exercises 4.1–4.25
Chapter 5 Solutions: Exercises 5.1–5.12
Chapter 6 Solutions: Exercises 6.1–6.13
Chapter 7 Solutions: Exercises 7.1–7.15
Chapter 8 Solutions: Exercises 8.1–8.12
Chapter 9 Solutions: Exercises 9.1–9.15
Chapter 10 Solutions: Exercises 10.1–10.5
Birkhäuser
Solutions Manual to accompany
A LINEAR SYSTEMS PRIMER
This Solutions Manual corresponds to the first printing of the book. Subsequent printings of the main
textbook may include corrections.
©2007 Birkhäuser Boston
All rights reserved. This work may not be translated or copied in whole or in part without the
written permission of the publisher (Birkhäuser Boston, c/o Springer Science+Business Media Inc.,
233 Spring Street, New York, NY 10013, USA), except for brief excerpts in connection with
reviews or scholarly analysis. Use in connection with any form of information storage and retrieval,
electronic adaptation, computer software, or by similar or dissimilar methodology now known or
hereafter developed is forbidden.
The use in this publication of trade names, trademarks, service marks, and similar terms, even if they
are not identified as such, is not to be taken as an expression of opinion as to whether or not they are
subject to proprietary rights.
Printed in the United States of America
www.birkhauser.com
Instructor’s Guide-Exercises
P.J. Antsaklis and A.N. Michel,
A Linear Systems Primer, Birkhäuser Boston, 2007.
A number of exercises are designed so that the student must use a computer software package
such as Matlab. In certain of these exercises the student is explicitly asked to write software code
to implement certain algorithms developed in the book. In other exercises the demands of numerical
calculations encourage students to use software packages to carry out these calculations and plot the
results.
There are several exercises where system descriptions and properties developed in this book are
used to study physically motivated systems from areas such as Electrical, Aero, Mechanical, and
Civil Engineering. These are:
In Chapter 1, Exercises 1.1, 1.2, 1.12c, 1.13, 1.14.
In Chapter 2, Exercises 2.1, 2.2, 2.3, 2.4.
In Chapter 3, Exercises 3.37, 3.38.
In Chapter 5, Exercises 5.2, 5.7.
In Chapter 6, Exercise 6.13.
Several exercises are also used to introduce additional material which, although important, is
not essential to the material in this book. These include:
In Chapter 1, Exercises
1.1 Hamiltonian dynamical system.
1.2 Lagrange’s equation.
1.4, 1.5, 1.6 Numerical solution of ODEs (Euler's, Runge–Kutta, and
predictor–corrector methods).
In Chapter 3, Exercises
3.15, 3.16 Adjoint equation.
3.32, 3.36 Sampled data systems.
3.33 Realization of scalar H(s) (more in Chapter 8).
3.34 Markov parameters (more in Chapter 8).
3.35 Frequency response.
In Chapter 5, Exercises
5.5, 5.6 Output reachability (controllability).
5.8 Output function controllability and right inverse of H(s);
input function observability and left inverse of H(s).
5.12 Determining the state from input and output (more in Chapter 9).
In Chapter 6, Exercise
6.12 Sampled data system controllability and observability.
In Chapter 7, Exercise
7.9 Poles and zeros at infinity.
In Chapter 8, Exercise
8.11 Systems in series, with uncertainty, controllability, stability.
In Chapter 9, Exercises
9.1 Different responses, same eigenvalues.
9.2 Pole assignment; single-input controllability.
9.5 Effect of state feedback on zeros.
9.6 Model matching.
9.10, 9.11 LQR.
9.13 Disturbance rejection.
9.15 Zero blocking property.
In Chapter 10, Exercises
10.4 Diagonal decoupling.
10.5 Model matching via state feedback.
Finally, we would like to thank our students for helping develop these solutions, especially
Xenofon Koutsoukos, currently of Vanderbilt University, who was instrumental in putting together
the original Solutions Manual of the Linear Systems book. Special thanks go to Eric Kuehner for
his help in typing this manuscript.
Panos J. Antsaklis
Anthony N. Michel
Department of Electrical Engineering
University of Notre Dame
Notre Dame, Indiana 46556, USA
Chapter 1
Exercise 1.1
(a) Choose q1 = x1, q2 = x2. We have p1 = M1 ẋ1, p2 = M2 ẋ2.
Kinetic energy:
    T(q, q̇) = (1/2)(M1 ẋ1² + M2 ẋ2²).
Potential energy:
    W(q) = (1/2)[K1 x1² + K(x1 − x2)² + K2 x2²].
The Hamiltonian:
    H(p, q) = (1/2)M1 ẋ1² + (1/2)M2 ẋ2² + (1/2)K1 x1² + (1/2)K(x1 − x2)² + (1/2)K2 x2².
Hamiltonian formulation (Eq. (1.9b)):
    q̇i = ∂H/∂pi ⇒ ẋi = ẋi, i = 1, 2 (always an identity)
    ṗi = −∂H/∂qi ⇒ M1 ẍ1 = −K1 x1 − K(x1 − x2),  M2 ẍ2 = −K(x2 − x1) − K2 x2.
The system of first-order differential equations: let y1 = x1, y2 = ẋ1, y3 = x2, y4 = ẋ2. Then
    [ẏ1; ẏ2; ẏ3; ẏ4] = [0 1 0 0; −(K+K1)/M1 0 K/M1 0; 0 0 0 1; K/M2 0 −(K+K2)/M2 0][y1; y2; y3; y4]
with y1(0) = x1(0), y2(0) = ẋ1(0), y3(0) = x2(0), and y4(0) = ẋ2(0) given.
Newton's laws:
    M1 ẍ1 = −K1 x1 − K(x1 − x2)
    M2 ẍ2 = K(x1 − x2) − K2 x2
(same equations!)
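A minimal simulation sketch of this two-mass, three-spring system in Python with NumPy/SciPy (as an alternative to the MATLAB packages the instructor's guide mentions). The parameter values and initial state below are illustrative assumptions, not values from the exercise.

    # Simulate ẏ = A y for the two-mass system of Exercise 1.1(a).
    import numpy as np
    from scipy.integrate import solve_ivp

    M1, M2 = 1.0, 2.0          # masses (assumed)
    K, K1, K2 = 1.0, 0.5, 0.5  # spring constants (assumed)

    # State y = (x1, x1', x2, x2'), as in the solution above.
    A = np.array([
        [0.0,          1.0, 0.0,          0.0],
        [-(K + K1)/M1, 0.0, K/M1,         0.0],
        [0.0,          0.0, 0.0,          1.0],
        [K/M2,         0.0, -(K + K2)/M2, 0.0],
    ])

    y0 = np.array([1.0, 0.0, -0.5, 0.0])  # assumed initial conditions
    sol = solve_ivp(lambda t, y: A @ y, (0.0, 20.0), y0)
    print(sol.y[:, -1])                    # state at t = 20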
(b) Newton's Second Law of Motion:
    m l θ̈ + m g sin θ = 0, i.e., θ̈ + (g/l) sin θ = 0.
With x1 = θ, x2 = θ̇:
    ẋ1 = x2
    ẋ2 = −(g/l) sin x1.
Kinetic energy: T(q, q̇) = (1/2) m (l θ̇)².
Potential energy: W(q) = ∫₀^θ m g l sin η dη = m g l (1 − cos θ).
Hamiltonian: H(p, q) = (1/(2m))(m l θ̇)² + m g l (1 − cos θ).
Let q = l θ and p = m l θ̇. Then
    ∂H/∂p = ∂H/∂(m l θ̇) = (1/(2m))(2 m l θ̇) = l θ̇ = q̇ ⇒ q̇ = ∂H/∂p,
    −∂H/∂q = −∂H/∂(lθ) = −(∂H/∂θ)(∂θ/∂(lθ)) = −(1/l) m g l sin θ = −m g sin θ = ṗ = m l θ̈
    ⇒ θ̈ = −(g/l) sin θ.
This checks with the equation obtained from Newton's Second Law of Motion.
(c) Tangential velocity of m1: l1 θ̇1.
Tangential velocity of m2: [l1² θ̇1² + l2² θ̇2² + 2 l1 l2 θ̇1 θ̇2 cos(θ1 − θ2)]^(1/2).
Kinetic energy:
    T = (1/2) m1 (l1 θ̇1)² + (1/2) m2 [l1² θ̇1² + l2² θ̇2² + 2 l1 l2 θ̇1 θ̇2 cos(θ1 − θ2)].
Potential energy:
    W = −m1 g l1 cos θ1 − m2 g (l1 cos θ1 + l2 cos θ2) + m1 g l1 + m2 g (l1 + l2).
Use the Lagrangian formulation. The Lagrangian is
    L = T − W = (1/2) m1 l1² θ̇1² + (1/2) m2 l1² θ̇1² + (1/2) m2 l2² θ̇2² + l1 l2 m2 θ̇1 θ̇2 cos(θ1 − θ2)
        + m1 g l1 cos θ1 + m2 g l1 cos θ1 + m2 g l2 cos θ2.
Lagrange's equations:
    d/dt (∂L/∂θ̇i) − ∂L/∂θi = 0, i = 1, 2.
We have:
    ∂L/∂θ̇1 = m1 l1² θ̇1 + m2 l1² θ̇1 + l1 l2 m2 θ̇2 cos(θ1 − θ2) = (m1 + m2) l1² θ̇1 + m2 l1 l2 θ̇2 cos(θ1 − θ2)
    ∂L/∂θ̇2 = m2 l2² θ̇2 + m2 l1 l2 θ̇1 cos(θ1 − θ2)
    ∂L/∂θ1 = −m1 g l1 sin θ1 − m2 g l1 sin θ1 − l1 l2 m2 θ̇1 θ̇2 sin(θ1 − θ2)
    ∂L/∂θ2 = −m2 g l2 sin θ2 + l1 l2 m2 θ̇1 θ̇2 sin(θ1 − θ2)
    d/dt(∂L/∂θ̇1) = (m1 + m2) l1² θ̈1 + m2 l1 l2 θ̈2 cos(θ1 − θ2) − m2 l1 l2 θ̇2 sin(θ1 − θ2)(θ̇1 − θ̇2)
                 = (m1 + m2) l1² θ̈1 + m2 l1 l2 θ̈2 cos(θ1 − θ2) − m2 l1 l2 θ̇1 θ̇2 sin(θ1 − θ2) + m2 l1 l2 θ̇2² sin(θ1 − θ2)
    d/dt(∂L/∂θ̇2) = m2 l2² θ̈2 + l1 l2 m2 θ̈1 cos(θ1 − θ2) − l1 l2 m2 θ̇1 sin(θ1 − θ2)(θ̇1 − θ̇2)
                 = m2 l2² θ̈2 + l1 l2 m2 θ̈1 cos(θ1 − θ2) − l1 l2 m2 θ̇1² sin(θ1 − θ2) + l1 l2 m2 θ̇1 θ̇2 sin(θ1 − θ2).
Lagrangian formulation:
    (m1 + m2) l1 θ̈1 + m2 l2 θ̈2 cos(θ1 − θ2) + m2 l2 θ̇2² sin(θ1 − θ2) + (m1 + m2) g sin θ1 = 0
    θ̈2 + l1 θ̈1 cos(θ1 − θ2) − l1 θ̇1² sin(θ1 − θ2) + g sin θ2 = 0.
Let x1 = θ1, x2 = θ̇1, x3 = θ2, x4 = θ̇2. The system of first-order differential equations is
    ẋ1 = x2
    ẋ2 = f(x) / [(m1 + m2) l1 − m2 l1 l2 cos²(x1 − x3)]
    ẋ3 = x4
    ẋ4 = g(x) / [m2 l1 l2 cos²(x1 − x3) − (m1 + m2) l1]
where
    f(x) = l2 m2 g sin x3 cos(x1 − x3) − l2 m2 l1 x2² sin(x1 − x3) cos(x1 − x3) − m2 l2 x4² sin(x1 − x3) − (m1 + m2) g sin x1
and
    g(x) = l1 f(x) cos(x1 − x3) − [(m1 + m2) l1 − m2 l1 l2 cos²(x1 − x3)][l1 x2² sin(x1 − x3) − g sin x3],
with x1(0) = θ1(0), x2(0) = θ̇1(0), x3(0) = θ2(0), and x4(0) = θ̇2(0) given.
This system of equations can also be obtained by using Newton's laws.
Exercise 1.2
(a) Choose q1 = y1, q2 = y2, qᵀ = (q1, q2).
    T(q, q̇) = (1/2) M1 (ẏ1)² + (1/2) M2 (ẏ2)²,
    W(q) = (1/2) K1 y1² + (1/2) K2 y2² + (1/2) K (y1 − y2)²,
    D(q̇) = (1/2) B1 (ẏ1)² + (1/2) B2 (ẏ2)² + (1/2) B (ẏ1 − ẏ2)²,
    F1(t) = f1(t), F2(t) = −f2(t).
The Lagrangian assumes the form
    L(q, q̇) = (1/2) M1 (ẏ1)² + (1/2) M2 (ẏ2)² − (1/2) K1 y1² − (1/2) K2 y2² − (1/2) K (y1 − y2)².
We now have
    ∂L/∂ẏ1 = M1 ẏ1,  d/dt(∂L/∂ẏ1) = M1 ÿ1,  ∂L/∂y1 = −K1 y1 − K(y1 − y2),  ∂D/∂ẏ1 = B1 ẏ1 + B(ẏ1 − ẏ2),
    ∂L/∂ẏ2 = M2 ẏ2,  d/dt(∂L/∂ẏ2) = M2 ÿ2,  ∂L/∂y2 = −K2 y2 − K(y2 − y1),  ∂D/∂ẏ2 = B2 ẏ2 + B(ẏ2 − ẏ1).
In view of Lagrange's equation we obtain
    M1 ÿ1 + (B + B1)ẏ1 + (K + K1)y1 − B ẏ2 − K y2 = f1(t)
    M2 ÿ2 + (B + B2)ẏ2 + (K + K2)y2 − B ẏ1 − K y1 = −f2(t).
These equations are clearly in agreement with equation (1.26) of Section 1.4. Letting x1 = y1, x2 = ẏ1, x3 = y2, x4 = ẏ2, the above equations can be expressed equivalently by the system of first-order differential equations given in (1.27).
(b) By inspection of Figure 1.14 in the textbook we have
    T = (1/2) L (q̇)² + (1/2) M (ẋ)²,
    W = (1/(2C))(q0 + q)² + (1/2) K (x1 + x)² = (1/(2A))(x0 − x)(q0 + q)² + (1/2) K (x1 + x)²,
    L = (1/2) L (q̇)² + (1/2) M (ẋ)² − (1/(2A))(x0 − x)(q0 + q)² − (1/2) K (x1 + x)²,
    D = (1/2) R (q̇)² + (1/2) B (ẋ)².
This is a two-degree-of-freedom system, where one of the degrees of freedom is the displacement x of the moving plate and the other is the current i = q̇. From Lagrange's equation we obtain
    M ẍ + B ẋ − (1/(2A))(q0 + q)² + K(x1 + x) = f(t),
    L q̈ + R q̇ + (1/A)(x0 − x)(q0 + q) = v0,
or
    M ẍ + B ẋ + K x − c1 q − c2 q² = F(t),
    L q̈ + R q̇ + [x0/(A)] q − c3 x − c4 x q = V,
where c1 = q0/(A), c2 = 1/(2A), c3 = q0/(A), c4 = 1/(A), F(t) = f(t) − K x1 + [1/(2A)]q0², and V = v0 − [x0/(A)]q0.
If we let y1 = x, y2 = ẋ, y3 = q, and y4 = q̇, we can represent the above equations equivalently by the system of equations
    [ẏ1; ẏ2; ẏ3; ẏ4] = [0 1 0 0; −K/M −B/M c1/M 0; 0 0 0 1; c3/L 0 −x0/(AL) −R/L][y1; y2; y3; y4]
                       + [0; (c2/M)y3²; 0; (c4/L)y1 y3] + [0; (1/M)F(t); 0; (1/L)V].
To complete the description of this initial-value problem, we need to specify the initial data x(0) = y1(0), ẋ(0) = y2(0), q(0) = y3(0), and q̇(0) = i(0) = y4(0).
The same equations can be obtained using Newton's Second Law of Motion and Kirchhoff's Voltage Law.
(c) Choose q1 = θ, q̇1 = θ̇, q̇2 = ia, q = (q1, q2)ᵀ. Then
    L(q, q̇) = (1/2) J (q̇1)² + (1/2) La (q̇2)² = (1/2) J θ̇² + (1/2) La ia²,
    ∂L/∂q̇1 = ∂L/∂θ̇ = J θ̇ = J q̇1,  d/dt(∂L/∂θ̇) = J θ̈ = J q̈1,  ∂L/∂q1 = 0,
    ∂L/∂q̇2 = ∂L/∂ia = La ia = La q̇2,  d/dt(∂L/∂ia) = La (dia/dt),  ∂L/∂q2 = 0,
    D(q̇1, q̇2) = (1/2) Ra (q̇2)² + (1/2) B (q̇1)² = (1/2) Ra ia² + (1/2) B θ̇²,
    ∂D/∂q̇1 = B q̇1 = B θ̇,  ∂D/∂q̇2 = Ra ia.
Lagrange's equations with dissipation,
    d/dt(∂L/∂q̇i) − ∂L/∂qi + ∂D/∂q̇i = Fi, i = 1, 2,
with F1 = f1(t) = T(t) = KT ia(t) and F2 = f2(t) = ea(t) − em(t) = ea(t) − K θ̇(t), yield
    J θ̈ + B θ̇ = T(t) = KT ia
    La (dia/dt) + Ra ia = ea − em = ea − K θ̇,
or
    θ̈ + (B/J)θ̇ = (KT/J) ia
    dia/dt + (Ra/La) ia + (K/La) θ̇ = (1/La) ea.
These equations clearly agree with equations (1.31) and (1.32) (see Example 1.3).
Letting x1 = θ, x2 = θ̇, and x3 = ia, we obtain from the above equations the system of first-order ordinary differential equations given by (1.33) (see Example 1.3).
Exercise 1.3.
(a) The initial-value problem
    ẋ = x, x(0) = 1, ẋ(0) = 2
has no solution.
(b) The initial-value problem
    ẋ = 3x^(2/3), x(0) = 0
has solutions φ(t, 0, 0) = 0, φ(t, 0, 0) = t³, etc. Hence, it has more than one solution.
(c) The initial-value problem
    ẋ = x², x(0) = 1
has the solution x = 1/(1 − t), which cannot be continued for all t ∈ R.
(d) The initial-value problem
    ẋ = 3t + 1, x(0) = 1
has a unique solution for all t ∈ R.
Exercises 1.4, 1.5, 1.6.
The solutions of the initial-value problems using Euler’s, Runge–Kutta, and predictor–corrector
methods are shown in Figures 1.1 and 1.2.
Figure 1.1: Solution of the initial-value problem ẏ = 3y, y(t0 ) = 0
Figure 1.2: Solution of the initial-value problem ÿ = t(ẏ)2 , y(t0 ) = 1, ẏ(t0 ) = 0
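For Exercises 1.4–1.6, a minimal Python/NumPy sketch of the forward-Euler and classical Runge–Kutta (RK4) steps follows, as an alternative to MATLAB. The test problem ẏ = 3y, y(0) = 1, and the step size are illustrative assumptions.

    # Forward Euler and classical RK4 for a scalar IVP y' = f(t, y) (sketch).
    import numpy as np

    def euler_step(f, t, y, h):
        # y_{k+1} = y_k + h f(t_k, y_k)
        return y + h * f(t, y)

    def rk4_step(f, t, y, h):
        # Classical fourth-order Runge-Kutta step.
        k1 = f(t, y)
        k2 = f(t + h/2, y + h/2 * k1)
        k3 = f(t + h/2, y + h/2 * k2)
        k4 = f(t + h, y + h * k3)
        return y + (h/6) * (k1 + 2*k2 + 2*k3 + k4)

    f = lambda t, y: 3.0 * y
    t, y_e, y_rk, h = 0.0, 1.0, 1.0, 0.01
    for _ in range(100):                  # integrate to t = 1
        y_e = euler_step(f, t, y_e, h)
        y_rk = rk4_step(f, t, y_rk, h)
        t += h
    print(y_e, y_rk, np.exp(3.0))         # compare with the exact solution e^{3t}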
Exercise 1.7.
    φ0(t) = x0
    φ1(t) = x0 + ∫_{t0}^{t} (a φ0(s) + s) ds = x0 + a x0 (t − t0) + (1/2)(t − t0)²
    φ2(t) = x0 + ∫_{t0}^{t} (a φ1(s) + s) ds = x0 + a x0 (t − t0) + (a² x0/2!)(t − t0)² + (a/(2·3))(t − t0)³ + (1/2)(t − t0)²
    ...
    φ_{m+1}(t) = x0 + ∫_{t0}^{t} (a φ_m(s) + s) ds
               = x0 + a x0 (t − t0) + (a² x0/2!)(t − t0)² + (a³ x0/3!)(t − t0)³ + ··· + (a^{m+1} x0/(m+1)!)(t − t0)^{m+1}
                 + (1/2)(t − t0)² + (a/3!)(t − t0)³ + ··· + (a^{m−1}/(m+1)!)(t − t0)^{m+1}.
By Theorem 1.15, {φ_m(t)} converges uniformly to the solution. From the expression above, we know that
    φ_m(t) → −(1/a²)(1 + a(t − t0)) + (x0 + 1/a²) e^{a(t − t0)}
is the solution to the given initial-value problem.
Exercise 1.8.
The phase portraits for the initial-value problem ẋ = Ax, x(0) = x0 are shown in Figure 1.3.
Figure 1.3: Phase portraits for the system ẋ = Ax, x(0) = x0
Exercise 1.9.
The phase portraits for the van der Pol equation are shown in Figure 1.4.
Figure 1.4: Phase portraits for the van der Pol equation
Exercise 1.10
Let f(t, x, u) = −k1 k2 √x + k2 u(t). Then
    ∆ẋ = [∂f(t, x, u)/∂x]|_{2√(x0(t)) = 2√k − k1 k2 t, u0 = 0} ∆x + [∂f(t, x, u)/∂u]|_{u0 = 0} ∆u
        = −[k1 k2/(2√x)]|_{2√(x0(t)) = 2√k − k1 k2 t} ∆x + k2 ∆u
        = −[k1 k2/(2√k − k1 k2 t)] ∆x + k2 ∆u.
To linearize the output equation, let g(x) = k1 √x. Then
    ∆y = [∂g(x)/∂x]|_{2√(x0(t)) = 2√k − k1 k2 t} ∆x = [k1/(2√k − k1 k2 t)] ∆x.
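The partial derivatives in Exercise 1.10 can also be obtained symbolically. Below is a small SymPy sketch (an assumption of tooling, not part of the original solution); evaluating the results along the nominal solution 2√(x0(t)) = 2√k − k1 k2 t reproduces the expressions above.

    # Symbolic linearization sketch for Exercise 1.10 using SymPy.
    import sympy as sp

    x, u, k1, k2 = sp.symbols('x u k1 k2', positive=True)
    f = -k1*k2*sp.sqrt(x) + k2*u      # state equation right-hand side
    g = k1*sp.sqrt(x)                 # output equation

    A = sp.diff(f, x)                 # coefficient of Delta-x
    B = sp.diff(f, u)                 # coefficient of Delta-u
    C = sp.diff(g, x)                 # coefficient of Delta-x in Delta-y
    print(A, B, C)                    # -k1*k2/(2*sqrt(x)), k2, k1/(2*sqrt(x))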
Exercise 1.11.
The phase portraits for the hard, linear, and soft spring models are shown in Figure 1.5.
Figure 1.5: Phase portraits for the hard, linear, and soft spring models (k = 10 and a = 0.5)
Exercise 1.12
(a)
    ẋ1 = x1² + x2² + x2 cos x1
    ẋ2 = (1 + x1)x1 + (1 + x2)x2 + x1 sin x2.
It is easily verified that xᵀ = (0, 0) is a solution of the above system of equations.
Let f1(x1, x2) = x1² + x2² + x2 cos x1 and f2(x1, x2) = (1 + x1)x1 + (1 + x2)x2 + x1 sin x2. Then
    [∆ẋ1; ∆ẋ2] = [∂f1/∂x1 ∂f1/∂x2; ∂f2/∂x1 ∂f2/∂x2]|_{x1 = x2 = 0} [∆x1; ∆x2]
                = [2x1 − x2 sin x1, 2x2 + cos x1; 1 + 2x1 + sin x2, 1 + 2x2 + x1 cos x2]|_{x1 = x2 = 0} [∆x1; ∆x2]
                = [0 1; 1 1][∆x1; ∆x2].
(b) Let x1 = x, x2 = ẋ. Then we have
    [ẋ1; ẋ2] = [x2; −3x2 − x2³ − (1 + x1 + x1²)u] = [f1(x, u); f2(x, u)].
We see that x = x1 = 0, ẋ = x2 = 0, and u = 0 is a solution of the above system. Hence, we linearize about (x1(t) = 0, x2(t) = 0, u(t) = 0):
    [∆ẋ1; ∆ẋ2] = [∂f1/∂x1 ∂f1/∂x2; ∂f2/∂x1 ∂f2/∂x2]|_{x1 = x2 = 0, u = 0} [∆x1; ∆x2] + [∂f1/∂u; ∂f2/∂u]|_{x1 = x2 = 0, u = 0} ∆u
                = [0, 1; −u − 2ux1, −3 − 3x2²]|_{x1 = x2 = 0, u = 0} [∆x1; ∆x2] + [0; −(1 + x1 + x1²)]|_{x1 = 0, u = 0} ∆u
                = [0 1; 0 −3][∆x1; ∆x2] + [0; −1]∆u.
(c) The Kirchhoff laws for the circuit are
    vi = iR + vR = i + vR
    i = iC + iR = C(dvR/dt) + 1.5vR³ = v̇R + 1.5vR³,
from which we obtain the following differential equation:
    v̇R + 1.5vR³ + vR = vi.
When vi = 14, we can find a steady-state solution for the circuit by setting v̇R = 0, and thus 1.5vR³ + vR = 14, from which we obtain vR = 2. Hence, we can linearize the differential equation of the system around the solution (vR = 2, vi = 14). Since v̇R = f(vR, vi) = −1.5vR³ − vR + vi, we have
    ∆v̇R = [∂f/∂vR]|_{vR = 2, vi = 14} ∆vR + [∂f/∂vi]|_{vR = 2, vi = 14} ∆vi
         = (−4.5vR² − 1)|_{vR = 2} ∆vR + (1)∆vi
         = −19∆vR + ∆vi.
Exercise 1.13.
Defining x1 = φ, x2 = φ̇, x3 = s, x4 = ṡ, we have the following equations:
    ẋ1 = x2
    ẋ2 = (g/L0) sin x1 − (1/(L0 M))(−F x4 + µ(t)) cos x1
    ẋ3 = x4
    ẋ4 = −(F/M)x4 + µ(t)/M.
We see that xi = 0, i = 1, ..., 4, µ(t) = 0 is a solution. Linearizing the system about this solution, we get
    [∆ẋ1; ∆ẋ2; ∆ẋ3; ∆ẋ4] = [0 1 0 0; g/L0 0 0 F/(L0 M); 0 0 0 1; 0 0 0 −F/M][∆x1; ∆x2; ∆x3; ∆x4] + [0; −1/(L0 M); 0; 1/M]∆µ.
Exercise 1.14.
The states for the simple pendulum are shown in Figures 1.6 and 1.7, where the dashed line corresponds to the linearized system and the solid line to the nonlinear one. When θ0 = π/18 and θ0 = π/12 (Figure 1.6), the states are similar for the nonlinear and the linearized model. The system was linearized about the solution x = [0, 0]ᵀ, which is close to the initial condition. When θ0 = π/6 and θ0 = π/3, the linearized system is not a good approximation of the nonlinear one. This can be observed from the evolution of the states in Figure 1.7.
Figure 1.6: (i) θ0 = π/18, (ii) θ0 = π/12
Figure 1.7: (i) θ0 = π/6, (ii) θ0 = π/3
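A sketch reproducing the comparison of Exercise 1.14 in Python/SciPy. Taking g/l = 1 is an assumption for illustration; the initial angles are those of the exercise.

    # Nonlinear pendulum vs. its linearization about x = [0, 0] (sketch).
    import numpy as np
    from scipy.integrate import solve_ivp

    gl = 1.0  # g/l (assumed)

    def nonlinear(t, x):
        return [x[1], -gl * np.sin(x[0])]

    def linearized(t, x):
        return [x[1], -gl * x[0]]   # sin(x1) ~ x1 near the origin

    t_eval = np.linspace(0.0, 10.0, 500)
    for theta0 in (np.pi/18, np.pi/12, np.pi/6, np.pi/3):
        xn = solve_ivp(nonlinear, (0, 10), [theta0, 0.0], t_eval=t_eval)
        xl = solve_ivp(linearized, (0, 10), [theta0, 0.0], t_eval=t_eval)
        err = np.max(np.abs(xn.y[0] - xl.y[0]))
        print(f"theta0 = {theta0:.3f}: max angle discrepancy = {err:.4f}")

The printed discrepancies grow with θ0, matching the observation that the linearization is a good approximation only near the equilibrium.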
Chapter 2
Exercise 2.1.
(a) In Problem 1.2(a), let x1 = y1, x2 = ẏ1, x3 = y2, and x4 = ẏ2. Also, let u1 = f1 and u2 = f2, and let (v1, v2) = vᵀ denote the output vector with v1 = y1 = x1 and v2 = y2 = x3. From Problem 1.2(a) we obtain
    [ẋ1; ẋ2; ẋ3; ẋ4] = [0 1 0 0; −(K+K1)/M1 −(B+B1)/M1 K/M1 B/M1; 0 0 0 1; K/M2 B/M2 −(K+K2)/M2 −(B+B2)/M2][x1; x2; x3; x4]
                       + [0 0; 1/M1 0; 0 0; 0 −1/M2][u1(t); u2(t)]
                     = A[x1; x2; x3; x4] + B[u1(t); u2(t)]
and
    [v1; v2] = [1 0 0 0; 0 0 1 0][x1; x2; x3; x4] = C[x1; x2; x3; x4].
(b) Same as in item (a), except consider uᵀ = (u1, u2) = (f1, 5f2)ᵀ as the system input and y = 8y1 + 10y2 as the (scalar-valued) system output. Then
    [ẋ1; ẋ2; ẋ3; ẋ4] = A[x1; x2; x3; x4] + [0 0; 1/M1 0; 0 0; 0 −1/M2][u1(t); u2(t)]
                     = A[x1; x2; x3; x4] + [0 0; 1/M1 0; 0 0; 0 −5/M2][f1(t); f2(t)]
                     = A[x1; x2; x3; x4] + B[f1(t); f2(t)]
and
    y = [8, 0, 10, 0][x1; x2; x3; x4] = C[x1; x2; x3; x4].
(c)
    y(t) = ∫₀ᵗ [CΦ(t, τ)B + Dδ(t − τ)]u(τ) dτ = ∫₀ᵗ CΦ(t, τ)Bu(τ) dτ   (D = 0)
         = ∫₀ᵗ [1 0 0 0; 0 0 1 0][φij(t − τ)][0 0; 1/M1 0; 0 0; 0 −1/M2][u1(τ); u2(τ)] dτ
         = ∫₀ᵗ [(1/M1)φ12(t − τ), −(1/M2)φ14(t − τ); (1/M1)φ32(t − τ), −(1/M2)φ34(t − τ)][f1(τ); f2(τ)] dτ,
where Φ(t) = e^{At} = [φij(t)]. Therefore,
    H(t, τ) = [(1/M1)φ12(t − τ), −(1/M2)φ14(t − τ); (1/M1)φ32(t − τ), −(1/M2)φ34(t − τ)].
(d)
    y(t) = ∫₀ᵗ [8, 0, 10, 0][φij(t − τ)][0 0; 1/M1 0; 0 0; 0 −5/M2][f1(τ); f2(τ)] dτ
         = ∫₀ᵗ [(1/M1){8φ12(t − τ) + 10φ32(t − τ)}, (−5/M2){8φ14(t − τ) + 10φ34(t − τ)}][f1(τ); f2(τ)] dτ,
so
    H(t, τ) = H(t − τ) = [(1/M1){8φ12(t − τ) + 10φ32(t − τ)}, (−5/M2){8φ14(t − τ) + 10φ34(t − τ)}].
Exercise 2.2
(a)
    [ẋ1; ẋ2; ẋ3] = [0 1 0; 0 −B/J KT/J; 0 −K/La −Ra/La][x1; x2; x3] + [0; 0; 1/La]ea.
A state representation of this system is
    ẋ = Ax + Bu, y = Cᵀx,
where
    A = [0 1 0; 0 −B/J KT/J; 0 −K/La −Ra/La],  B = [0; 0; 1/La],  C = [1; 0; 0],  u = ea.
(b) An input–output description of the system is given by the transfer function
    H(s) = Cᵀ(sI − A)⁻¹B = [KT/(J La)] / {s³ + (B/J + Ra/La)s² + [(B Ra + K KT)/(J La)]s}.
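A quick numerical check of H(s) = Cᵀ(sI − A)⁻¹B with scipy.signal; the motor parameter values below are assumptions for illustration only.

    # Transfer function from the state-space model of Exercise 2.2.
    import numpy as np
    from scipy import signal

    J, Bf, KT, K, Ra, La = 0.01, 0.1, 0.05, 0.05, 1.0, 0.5   # assumed values

    A = np.array([[0.0, 1.0,    0.0],
                  [0.0, -Bf/J,  KT/J],
                  [0.0, -K/La, -Ra/La]])
    B = np.array([[0.0], [0.0], [1.0/La]])
    C = np.array([[1.0, 0.0, 0.0]])
    D = np.array([[0.0]])

    num, den = signal.ss2tf(A, B, C, D)
    # den should match s^3 + (B/J + Ra/La)s^2 + (B*Ra + K*KT)/(J*La)*s
    print(num, den)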
Exercise 2.3
By (2.46a)–(2.46b), we can represent Figure 2.1 in the textbook as
    x1(k + 1) = x2(k)
    x2(k + 1) = a x1(k) + b x2(k) + u(k), k = 0, 1, 2, ...
    y(k) = x1(k).
Clearly, for the above time-invariant system
    A = [0 1; a b],  B = [0; 1],  C = [1, 0],  D = 0, and k0 = 0.
Therefore,
    Φ(k, 0) = Aᵏ for k = 0, 1, 2, ....
Now, by (2.42)–(2.43), we have
    y(k) = [1, 0]Aᵏ[x1(0); x2(0)] + Σ_{j=0}^{k−1} [1, 0]A^{k−j−1}[0; 1]u(j), k > 0,
    y(0) = x1(0),
i.e.,
    x1(k) = [1, 0]Aᵏ[x1(0); x2(0)] + Σ_{j=0}^{k−1} [1, 0]A^{k−j−1}[0; 1]u(j), k > 0.
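A direct simulation of this discrete-time system, checked against the variation-of-constants sum above; a, b, the input, the initial state, and the horizon are assumptions for illustration.

    # Discrete-time simulation sketch for Exercise 2.3.
    import numpy as np

    a, b = 0.5, -0.25                 # assumed coefficients
    A = np.array([[0.0, 1.0], [a, b]])
    Bv = np.array([0.0, 1.0])
    x = np.array([1.0, 0.0])          # assumed x(0)
    u = lambda k: 1.0                 # assumed unit-step input

    N = 10
    for k in range(N):
        x = A @ x + Bv * u(k)         # x(k+1) = A x(k) + B u(k)

    # Closed form: x(N) = A^N x(0) + sum_j A^{N-j-1} B u(j)
    xs = np.linalg.matrix_power(A, N) @ np.array([1.0, 0.0])
    for j in range(N):
        xs += np.linalg.matrix_power(A, N - j - 1) @ (Bv * u(j))
    print(x, xs)                      # the two results agree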
Exercise 2.4.
(a) An input–output description of the system is given by
    T(vi(t)) = v0(t) = { [R2/(R1+R2)]vi(t), vi(t) > 0;  vi(t), vi(t) ≤ 0 }.
Then
    T(−vi(t)) = { [R2/(R1+R2)](−vi(t)), −vi(t) > 0;  −vi(t), −vi(t) ≤ 0 }
              = { −[R2/(R1+R2)]vi(t), vi(t) < 0;  −vi(t), vi(t) ≥ 0 }
    ⇒ −1 · T(vi(t)) ≠ T(−vi(t)) ⇒ not linear.
(b)
    T[vi(t − t0)] = { [R2/(R1+R2)]vi(t − t0), vi(t − t0) > 0;  vi(t − t0), vi(t − t0) ≤ 0 }
    ⇒ an input vi(t − t0) results in an output vo(t − t0) ⇒ time-invariant.
(c) vo(t) does not depend on future values of vi(t) ⇒ causal.
Exercise 2.5.
It is causal since y(t) does not depend on future values of u(t). It is linear since
    Tτ[a1 u1(t) + a2 u2(t)] = { a1 u1(t) + a2 u2(t), t ≤ τ;  0, t > τ } = a1 Tτ[u1(t)] + a2 Tτ[u2(t)].
It is not time-invariant since
    Tτ[u(t − t0)] = { u(t − t0), t ≤ τ;  0, t > τ } ≠ y(t − t0) = { u(t − t0), t − t0 ≤ τ;  0, t − t0 > τ }.
Exercise 2.6.
Noncausal, linear, time-invariant.
Exercise 2.7.
Not linear.
Exercise 2.8.
Noncausal, time-invariant.
Exercise 2.9.
Noncausal, non-linear (affine).
Exercise 2.10.
Using the expression (2.54) for the input u(n) of the system, we have
    u(n) = Σ_{k=−∞}^{∞} u(k)δ(n − k) = Σ_{k=−∞}^{∞} u(k)[p(n − k) − p(n − k − 1)].
Let h(n, k) be the unit pulse response of the system. Then
    y(n) = Σ_{k=−∞}^{∞} h(n, k)u(k)
         = Σ_{k=−∞}^{∞} h(n, k)[Σ_{l=−∞}^{∞} u(l)(p(k − l) − p(k − l − 1))]
         = Σ_{l=−∞}^{∞} u(l) Σ_{k=−∞}^{∞} h(n, k)[p(k − l) − p(k − l − 1)]
         = Σ_{l=−∞}^{∞} u(l)[s(n, l) − s(n, l + 1)],
where
    s(n, l) = Σ_{k=−∞}^{∞} h(n, k)p(k − l)
is the unit step response of the system.
Chapter 3
Exercise 3.1.
(a) In view of
    v = [1; 4; 0] = α1[1; −1; 0] + α2[1; 0; −1] + α3[0; 1; 0],
(α1, α2, α3) = (1, 0, 5) is the unique representation of v with respect to the given basis.
(b) Similarly,
    v̄ = [s + 2; 1/s; −2] = ᾱ1[1; −1; 0] + ᾱ2[1; 0; −1] + ᾱ3[0; 1; 0]
implies that (ᾱ1, ᾱ2, ᾱ3) = (s, 2, s + 1/s) is the unique representation of v̄ with respect to the given basis. Note that ᾱ1 = s/1, ᾱ2 = 2/1, and ᾱ3 = (1 + s²)/s are scalars from the field of rational functions.
Exercise 3.2.
Let B = [v¹, v², v³], B̄ = [v̄¹, v̄², v̄³]. From
    B = [2 1 1; 1 0 0; 0 −1 0] = B̄P = [1 0 0; 0 1 1; 0 −1 1]P  ⇒  P = [2 1 1; 1/2 1/2 0; 1/2 −1/2 0].
Also,
    e2 = [0; 1; 0] = Ba = B̄ā  ⇒  a = [1; 0; −2],  ā = [0; 1/2; 1/2]  (ā = Pa).
Exercise 3.3.
It can be easily verified that the set of all vectors (x, αx)ᵀ, α, x ∈ R with α fixed, together with the usual vector addition and multiplication by a scalar, satisfies all axioms and therefore is a vector space. The vector (1, α)ᵀ is a basis of this vector space, since it is a linearly independent set and spans the whole space; the latter can be seen from the fact that any vector (x, αx)ᵀ = x(1, α)ᵀ. The dimension of this vector space is clearly one.
Note that (x, αx)ᵀ represents points in R² on a straight line passing through the origin. If the straight line does not pass through the origin, then its points do not constitute a vector space since they do not contain the zero vector.
Exercise 3.4.
The set of all real n×n matrices with the usual definitions for matrix addition and multiplication
by a scalar satisfies all axioms and it is a vector space. It is of dimension n2 , since a choice for a
basis consists of n2 n × n matrices each with one in a different entry and zero everywhere else (they
are n2 linearly independent vectors of the vector space). Same results hold for the m × n matrices.
Nonsingular matrices do not constitute a vector space because closure of matrix addition is violated. The sum of two nonsingular matrices is not necessarily nonsingular. Take for example
    [1 0; 0 1] + [0 1; 1 0] = [1 1; 1 1];
or A + (−A) = 0, where A is nonsingular.
Note that the set of all nonsingular matrices does not constitute a field either, since in general
matrices do not commute (AB 6= BA).
Exercise 3.5.
Since
    1 · [s²; s] + (−s²) · [1; 1/s] = [0; 0],
these vectors are dependent over the field of rational functions; here α1 = 1 and α2 = −s²/1, which are rational functions. But they are independent over the field of reals, since α1[s²; s] + α2[1; 1/s] = [0; 0] implies α1 = α2 = 0 when α1, α2 ∈ R.
Exercise 3.6.
(a) Rank is one over the field of complex numbers.
(b) Rank is two over the field of real numbers.
(c) Rank is two over the field of rational functions.
(d) Rank is one over the field of rational functions.
Exercise 3.7.
(a) R(A1) = span{1} = R, N(A1) = span{[0; 1; 0], [1; 0; −1]}.
Note that dim N(A1) = 3 − rank A1 = 3 − 1 = 2, and two linearly independent vectors (a basis) in N(A1) are as above, since
    [1, 0, 1][0 1; 1 0; 0 −1] = [0, 0].
(b) dim N(A2) = 2 − rank A2 = 0; that is, N(A2) contains only the zero vector.
    R(A2) = span{[1; 0; 0], [0; 0; 1]}.
Exercise 3.8.
Direct proof:
    e^{A1 t} e^{A2 t} = (I + Σ_{k=1}^{∞} (tᵏ/k!)A1ᵏ)(I + Σ_{k=1}^{∞} (tᵏ/k!)A2ᵏ) = I + Σ_{k=1}^{∞} tᵏ Σ_{j=0}^{k} A1ʲA2^{k−j}/[j!(k − j)!].
It can be shown by induction that A1 A2 = A2 A1 implies A1ᵏA2ˡ = A2ˡA1ᵏ for k, l nonnegative integers. Then (A1 + A2)ᵏ = Σ_{j=0}^{k} (k choose j)A1ʲA2^{k−j}, and e^{(A1+A2)t} = I + Σ_{k=1}^{∞} (tᵏ/k!)(A1 + A2)ᵏ equals the expression for e^{A1 t}e^{A2 t} given above.
Proof based on system concepts: e^{(A1+A2)t} is the state transition matrix of ẋ = (A1 + A2)x and is the unique solution of Ẋ = (A1 + A2)X, X(0) = I. Consider e^{A1 t}e^{A2 t} and note that
    d/dt (e^{A1 t}e^{A2 t}) = A1 e^{A1 t}e^{A2 t} + e^{A1 t}A2 e^{A2 t}.
Now A1 A2 = A2 A1 implies e^{A1 t}A2 = A2 e^{A1 t}, since A1ᵏA2 = A2 A1ᵏ and e^{A1 t} = I + Σ_{k=1}^{∞} (tᵏ/k!)A1ᵏ. Therefore, d/dt(e^{A1 t}e^{A2 t}) = (A1 + A2)e^{A1 t}e^{A2 t}; that is, e^{A1 t}e^{A2 t} is a solution of Ẋ = (A1 + A2)X; furthermore, e^{A1 t}e^{A2 t}|_{t=0} = I. Therefore e^{A1 t}e^{A2 t} = e^{(A1+A2)t}. Finally, note that here e^{(A1+A2)t} = e^{A1 t}e^{A2 t} = e^{A2 t}e^{A1 t}.
Exercise 3.9.
Assume there exists a similarity transformation matrix P such that PAP⁻¹ = Ac. We shall show that a b exists. Take b ≜ P⁻¹(0, ..., 0, 1)ᵀ = P⁻¹bc. Then
    rank[b, Ab, ..., A^{n−1}b] = rank(P[b, Ab, ..., A^{n−1}b]) = rank[Pb, PAP⁻¹Pb, ..., PA^{n−1}P⁻¹Pb] = rank[bc, Ac bc, ..., Ac^{n−1}bc]
    = rank[0 0 ··· 1; 0 0 ··· −α_{n−1}; ...; 0 1 ··· −α1; 1 −α_{n−1} ··· −α0] = n.
To show that the converse is true, assume that there exists an n × 1 vector b such that rank[b, Ab, ..., A^{n−1}b] = n. Let q be the nth row of [b, Ab, ..., A^{n−1}b]⁻¹ and let
    P = [q; qA; ...; qA^{n−1}].
Now PA = Ac P, because PA = [qA; qA²; ...; qAⁿ], and the first n − 1 rows of Ac P are clearly equal to the first n − 1 rows of PA. The last row of Ac P is (−α0, ..., −α_{n−1})P = q(−α0 I − α1 A − ··· − α_{n−1}A^{n−1}) = qAⁿ, in view of the Cayley–Hamilton theorem.
Exercise 3.10.
Ac v = (λ, λ2 , . . . , λn−1 , −α0 − α1 λ − · · · − αn−1 λn−1 )T . The coefficients αj are of course the
coefficients of the characteristic polynomial of Ac and therefore Ac v = (λ, λ2 , . . . , λn−1 , λn )T = λv.
This implies that v = (1, λ, . . . , λn−1 )T is a corresponding eigenvector of the eigenvalue λ.
Exercise 3.11.
First note that (λiᵏ, vi) is an (eigenvalue, eigenvector) pair of Aᵏ. To see this, write Aᵏvi = A^{k−1}Avi = A^{k−1}λi vi = λi A^{k−2}Avi = λi²A^{k−2}vi = ··· = λiᵏvi. Now
    f(A)vi = Σ_{k=0}^{l} αk Aᵏvi = (Σ_{k=0}^{l} αk λiᵏ)vi = f(λi)vi,
which shows that (f(λi), vi) is an (eigenvalue, eigenvector) pair of f(A).
Exercise 3.12.
In view of Section A.5.2, and in particular (A.42) and (A.43), let f(A1) = A1¹⁰⁰ and g(A1) = α0 I + α1 A1 + α2 A1² (f(λ) = λ¹⁰⁰ and g(λ) = α0 + α1 λ + α2 λ²). The eigenvalues of A1 are λ1 = 0 and λ2 = 1 repeated twice; that is, m1 = 1, m2 = 2, p = 2. Then
    f(λ1) = g(λ1) implies 0¹⁰⁰ = α0
    f(λ2) = g(λ2) implies 1¹⁰⁰ = α0 + α1 + α2
    f⁽¹⁾(λ2) = g⁽¹⁾(λ2) implies 100(1)⁹⁹ = α1 + 2α2.
Therefore, α0 = 0, α1 = −98, α2 = 99, and
    A1¹⁰⁰ = −98A1 + 99A1² = [1 2 396; 0 0 2; 0 0 1].
Now let f(A1) = e^{A1 t} and g(A1) = α0 I + α1 A1 + α2 A1² (f(λ) = e^{λt}, g(λ) = α0 + α1 λ + α2 λ²). Then
    f(λ1) = g(λ1) implies e⁰ = 1 = α0
    f(λ2) = g(λ2) implies eᵗ = α0 + α1 + α2
    f⁽¹⁾(λ2) = g⁽¹⁾(λ2) implies teᵗ = α1 + 2α2,
from which
    α0 = 1, α1 = 2eᵗ − teᵗ − 2, α2 = teᵗ − eᵗ + 1.
Therefore,
    e^{A1 t} = α0 I + α1 A1 + α2 A1² = [eᵗ, 2(eᵗ − 1), 4(teᵗ − eᵗ + 1); 0, 1, 2(eᵗ − 1); 0, 0, eᵗ].
The eigenvalues of A2 are λ1 = 0 repeated four times; that is, m1 = 4, p = 1. In a manner analogous to the above it can be shown that
    A2¹⁰⁰ = 0 and e^{A2 t} = [1 t t²/2 t³/6; 0 1 t t²/2; 0 0 1 t; 0 0 0 1].
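A numerical check of these Cayley–Hamilton computations with NumPy/SciPy, assuming A1 = [1 2 0; 0 0 2; 0 0 1] as reconstructed above.

    # Verify A1^100 = -98 A1 + 99 A1^2 and the e^{A1 t} interpolation.
    import numpy as np
    from scipy.linalg import expm

    A1 = np.array([[1.0, 2.0, 0.0],
                   [0.0, 0.0, 2.0],
                   [0.0, 0.0, 1.0]])

    lhs = np.linalg.matrix_power(A1, 100)
    rhs = -98*A1 + 99*(A1 @ A1)
    print(np.allclose(lhs, rhs))                       # True

    t = 0.7                                            # arbitrary test time
    a1 = 2*np.exp(t) - t*np.exp(t) - 2                 # alpha1(t)
    a2 = t*np.exp(t) - np.exp(t) + 1                   # alpha2(t)
    print(np.allclose(expm(A1*t),
                      np.eye(3) + a1*A1 + a2*(A1 @ A1)))  # True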
Exercise 3.13.
Let z(t) = Φ(t0, t)x(t) ⇒ x(t) = Φ(t, t0)z(t). Then
    ẋ = A(t)x + B(t)u
    ⇒ Φ̇(t, t0)z(t) + Φ(t, t0)ż(t) = A(t)Φ(t, t0)z(t) + B(t)u(t)
    ⇒ ż(t) = Φ(t0, t)B(t)u(t)
    ⇒ z(t) = z(t0) + ∫_{t0}^{t} Φ(t0, τ)B(τ)u(τ) dτ.
Since z(t) = Φ(t0, t)x(t), we finally have
    x(t) = Φ(t, t0)x(t0) + ∫_{t0}^{t} Φ(t, τ)B(τ)u(τ) dτ.
Note that this method is called the "Lagrange method of variation of constants," which explains the name of the above "variation of constants" formula.
Alternatively, one could use x(t) = x_h(t) + x_p(t), where x_h(t) = Φ(t, t0)α, α ∈ Rⁿ, are solutions of the homogeneous equation ẋ = A(t)x, and x_p(t) = Φ(t, t0)β(t) is a particular solution of ẋ = A(t)x + B(t)u. Substituting, one obtains β̇(t) = Φ(t0, t)B(t)u(t) and β(t) = β(t0) + ∫_{t0}^{t} Φ(t0, τ)B(τ)u(τ) dτ. Note that α + β(t0) = x(t0). Therefore, x(t) = x_h(t) + x_p(t) = Φ(t, t0)[x(t0) + ∫_{t0}^{t} Φ(t0, τ)B(τ)u(τ) dτ], from which the result is immediately determined.
Exercise 3.14.
Consider the identity Φ(t, τ)Φ(τ, t) (= Φ(t, t)) = I and take derivatives of both sides with respect to τ:
    [∂Φ(t, τ)/∂τ]Φ(τ, t) + Φ(t, τ)[∂Φ(τ, t)/∂τ] = 0.
Note that ∂Φ(τ, t)/∂τ = A(τ)Φ(τ, t). Substituting, we obtain
    ∂Φ(t, τ)/∂τ = −Φ(t, τ)A(τ)Φ(τ, t)[Φ(τ, t)]⁻¹ = −Φ(t, τ)A(τ).
Exercise 3.15.
The solution uses the results of Exercise 3.14. The state transition matrix Φa(t, t0) of the adjoint equation is the unique solution of the matrix equation Ẋa = −A(t)ᵀXa, Xa(t0) = I. Indeed,
    Φ̇a(t, t0) = (∂/∂t)(Φ(t0, t)ᵀ) = (−Φ(t0, t)A(t))ᵀ = −A(t)ᵀΦ(t0, t)ᵀ = −A(t)ᵀΦa(t, t0),
where the equation in Exercise 3.14 was used. Furthermore, Φa(t0, t0) = [Φ(t0, t0)]ᵀ = I.
It is of interest to note the following property: if x(t) and z(t) denote the solutions of ẋ = A(t)x, x(t0) = xa, and of its adjoint equation ż = −A(t)ᵀz, z(t0) = za, then
    <x(t), z(t)> = xᵀ(t)z(t) (= zᵀ(t)x(t)) = (Φ(t, t0)x(t0))ᵀ(Φ(t0, t)ᵀz(t0)) = x(t0)ᵀΦ(t, t0)ᵀΦ(t0, t)ᵀz(t0) = x(t0)ᵀz(t0),
since Φ(t, t0)ᵀΦ(t0, t)ᵀ = (Φ(t0, t)Φ(t, t0))ᵀ = Φ(t0, t0)ᵀ = I (= Φ(t, t0)ᵀΦa(t, t0)). That is, <x(t), z(t)> is independent of time. Also note that
    Φa(t, t0)Φᵀ(t, t0) = Φᵀ(t, t0)Φa(t, t0) = Φaᵀ(t, t0)Φ(t, t0) = I.
Exercise 3.16.
(a) It has been shown that
H(t, τ ) = C(t)Φ(t, τ )B(τ ), t ≥ τ,
H(t, τ ) = 0, t < τ.
Similarly,
Ha (t, τ ) = B(t)T Φa (t, τ )C(τ )T = B(t)T Φ(τ, t)T C(τ )T = [C(τ )Φ(τ, t)B(t)]T , t ≥ τ,
Ha (t, τ ) = 0, t < τ.
Then,
Ha (τ, t) = [C(t)Φ(t, τ )B(τ )]T = H(t, τ )T
(b) Ha (s) = B T (sI + AT )−1 C T from which HaT (s) = C(sI + A)−1 B and HaT (−s) =
−C(sI − A)−1 B = −H(s).
Exercise 3.17.
    e^{At} = L⁻¹[(sI − A)⁻¹] = L⁻¹[1/(s−1), 4/(s−2) − 4/(s−1), 10/(s−2) − 10/(s−1); 0, 1/(s−2), 0; 0, 0, 1/(s−2)]
           = [eᵗ, 4e²ᵗ − 4eᵗ, 10e²ᵗ − 10eᵗ; 0, e²ᵗ, 0; 0, 0, e²ᵗ].
Alternatively, using the Cayley–Hamilton method, e^{At} = α0 I + α1 A + α2 A² with α0 = 2te²ᵗ + 4eᵗ − 3e²ᵗ, α1 = 4e²ᵗ − 3te²ᵗ − 4eᵗ, α2 = te²ᵗ − e²ᵗ + eᵗ.
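A quick numerical check of this matrix exponential with scipy.linalg.expm; the A used below is the one implied by the resolvent above (upper triangular with eigenvalues 1, 2, 2).

    # Check e^{At} for Exercise 3.17 against the closed form.
    import numpy as np
    from scipy.linalg import expm

    A = np.array([[1.0, 4.0, 10.0],
                  [0.0, 2.0,  0.0],
                  [0.0, 0.0,  2.0]])
    t = 0.5
    et, e2t = np.exp(t), np.exp(2*t)
    closed_form = np.array([[et, 4*e2t - 4*et, 10*e2t - 10*et],
                            [0.0, e2t, 0.0],
                            [0.0, 0.0, e2t]])
    print(np.allclose(expm(A*t), closed_form))   # True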
Exercise 3.18.
(a) It is most convenient here to use the Laplace transform:
    e^{At} = L⁻¹{[s − 1/2, 1, 0; 0, s + 1, 0; 0, 0, s + 2]⁻¹}
           = L⁻¹[1/(s − 1/2), −1/((s − 1/2)(s + 1)), 0; 0, 1/(s + 1), 0; 0, 0, 1/(s + 2)]
           = [e^{t/2}, (2/3)e^{−t} − (2/3)e^{t/2}, 0; 0, e^{−t}, 0; 0, 0, e^{−2t}].
(b) Easy to do with appropriate software. For both cases lim_{t→∞} x2(t) = lim_{t→∞} x3(t) = 0. In the first case lim_{t→∞} x1(t) = +∞, whereas in the second case lim_{t→∞} x1(t) = 0. Differences in x1(t) are due to initial conditions.
Exercise 3.19.
It is easily shown using e^{At} = L⁻¹((sI − A)⁻¹). Alternatively, one can verify that Φ(t, 0) = e^{At} is the solution of (∂/∂t)Φ(t, 0) = AΦ(t, 0), Φ(0, 0) = I.
Exercise 3.20.
Here e^{At} = [e^{−t}, 0; 0, eᵗ], and by (3.62) the solution is given by
    φ(t) = e^{At}x0 + ∫₀ᵗ e^{A(t−s)}Bu(s) ds = [a e^{−t}; b eᵗ] + ∫₀ᵗ [e^{−(t−s)}; −e^{t−s}] ds = [1 + e^{−t}(a − 1); 1 + eᵗ(b − 1)],
where x0 = [a, b]ᵀ. When x0 = [1, 0]ᵀ, φ(t) = [1, 1 − eᵗ]ᵀ and φ(t) → (1, −∞) as t → ∞. When x0 = [0, 1]ᵀ, φ(t) = [1 − e^{−t}, 1]ᵀ and φ(t) → (1, 1) as t → ∞.
Exercise 3.21.
In view of Exercise 3.19, we have
    e^{At} = [cos t, sin t; −sin t, cos t]  ⇒  φ(t) = e^{At}x(0) = [cos t + sin t; cos t − sin t].
Since x1² + x2² = 2 = (√2)², the trajectory in the x1–x2 (phase) plane is a circle with radius √2.
Exercise 3.22.
Here e^{At} = L⁻¹((sI − A)⁻¹) = (1/2)[eᵗ + e^{−t}, eᵗ − e^{−t}; eᵗ − e^{−t}, eᵗ + e^{−t}]. The solution is φ(t) = e^{At}x(0). For x(0) = [1, 1]ᵀ, φ(t) = [eᵗ, eᵗ]ᵀ. Note that x(0) is collinear with the eigenvector v1 corresponding to the eigenvalue λ1 = 1, and therefore e^{λ1 t} = eᵗ is the only mode that appears in the solution [see (3.58)]. Similarly, for x(0) = α[1, −1]ᵀ, φ(t) = α[e^{−t}, −e^{−t}]ᵀ. Here x(0) is collinear with the eigenvector v2 corresponding to λ2 = −1. In the first case φ(t) → ∞ as t → ∞, while in the second case φ(t) → 0 as t → ∞.
Exercise 3.23.
(a) e^{At} = Σ_{i=1}^{n} Ai e^{λi t}; setting t = 0 gives I = Σ_{i=1}^{n} Ai.
(b) A Ai = A(vi ṽi) = (A vi)ṽi = λi vi ṽi = λi Ai.
(c) Ai A = (vi ṽi)A = vi(ṽi A) = vi λi ṽi = λi Ai.
(d) In view of ṽi vj = δij (= 1 if i = j, 0 if i ≠ j),
    Ai Aj = vi ṽi vj ṽj = vi δij ṽj = δij Ai.
Exercise 3.24.
(a) The eigenvalues are λ1 = 0 (repeated twice, n1 = 2), λ2 = i, λ3 = −i. For λ1 we have
    (λ1 I − A)v1¹ = 0 ⇒ v1¹ = [0, 0, 1, 0]ᵀ,
    (λ1 I − A)v1² = −v1¹ ⇒ v1² = [−1/3, 0, 0, 1]ᵀ.
For λ2 = i and λ3 = −i we have the following eigenvectors, respectively:
    λ2 → v2 = [1, i, 2i, −2]ᵀ,  λ3 → v3 = [1, −i, −2i, −2]ᵀ (= v2*).
(i) To determine the Jordan form, take
    P⁻¹ = Q = [v1¹, v1², v2, v3] = [0 −1/3 1 1; 0 0 i −i; 1 0 2i −2i; 0 1 −2 −2].
Then
    Â = PAP⁻¹ = [0 1 0 0; 0 0 0 0; 0 0 i 0; 0 0 0 −i],  B̂ = PB,  Ĉ = CP⁻¹ = [1, −2/3, 1 + 2i, 1 − 2i].
(ii) If b = [1, 0, 0, 0]ᵀ, then since rank(C) = rank[b, Ab, A²b, A³b] = 4, we compute C⁻¹ to find its last row q = [0, 0, −1/6, 0] and set
    P = [q; qA; qA²; qA³].
Then
    Ac = PAP⁻¹ = [0 1 0 0; 0 0 1 0; 0 0 0 1; 0 0 −1 0],  Bc = PB = [0 0; 0 −1/6; 1/3 0; 0 2/3],  Cc = CP⁻¹ = [−6, 4, 0, 1].
(b) H(s) = C(sI − A)⁻¹B = [(s − 2)/(s(s² + 1)), (s² + 2s − 3)/(s²(s² + 1))].
Exercise 3.25.
(a)
    e^{At} = [e^{−t}, te^{−t}, 0; 0, e^{−t}, 0; 0, 0, e^{2t}]  ⇒  y(t) = Ce^{At}x(0) = [e^{−t}, e^{−t}(t + 1), e^{2t}]x(0).
For x(0) = [α, β, γ]ᵀ ⇒ y(t) = (α + β)e^{−t} + βte^{−t} + γe^{2t} = te^{−t} ⇒ α = −β, β = 1, γ = 0 ⇒ x(0) = [−1, 1, 0]ᵀ.
(b)
    ŷ(s) = C(sI − A)⁻¹x(0) = [1/(s+1), (s+2)/(s+1)², 1/(s−2)][α; β; γ]
         = [(s+1)(s−2)α + (s+2)(s−2)β + (s+1)²γ] / [(s+1)²(s−2)]
         = [s²(α + β + γ) + s(−α + 2γ) + (−2α − 4β + γ)] / [(s+1)²(s−2)],
which characterizes all ŷ(s) = L(y(t)) achievable via x(0) = [α, β, γ]ᵀ.
Exercise 3.26.
(a) For û(s) = 1/(s + 4) and x(0) = [x1, x2]ᵀ, we have
    ŷ(s) = C(sI − A)⁻¹Bû(s) + C(sI − A)⁻¹x(0)
         = k·[1/(s + 4)] − [(s − 3)/25]·[1/(s² + s + 1/2)] + (x1 s + x2)/(s² + s + 1/2),
so ŷ(s) = k·[1/(s + 4)] requires
    x(0) = [1/25, −3/25]ᵀ, k = 1/25.
(b) For û(s) = 1/(s − α) and x(0) = [x1, x2]ᵀ, we have
    ŷ(s) = k·[1/(s − α)] − [(s + α + 1)/(2α² + 2α + 1)]·[1/(s² + s + 1/2)] + (x1 s + x2)/(s² + s + 1/2),
so ŷ(s) = k·[1/(s − α)] requires
    x(0) = [1/(2α² + 2α + 1), (α + 1)/(2α² + 2α + 1)]ᵀ, k = 1/(2α² + 2α + 1).
We need to have α² + α + 1/2 ≠ 0.
Exercise 3.27.
(a) We first calculate e^{At} using any of the methods described in Section 3.3.2. (Note that the eigenvalues of A are λ1 = 0.5907 + 0.7069j, λ2 = 0.5907 − 0.7069j, λ3 = 3.1883, λ4 = −0.3696.) For x(0) = [1, 1, 1, 1]ᵀ and u(t) = [1, 1]ᵀ, t ≥ 0, we have
    φ(t, 0, x(0)) = e^{At}x0 + ∫₀ᵗ e^{A(t−τ)}Bu(τ) dτ
    = [(0.7214 cos(0.7069t) − 1.8684 sin(0.7069t))e^{0.5907t} + 0.5615e^{3.1883t} + 0.2829e^{−0.3696t};
       (3.9686 cos(0.7069t) + 5.4342 sin(0.7069t))e^{0.5907t} − 1.2771e^{3.1883t} + 0.3085e^{−0.3696t} − 2;
       (−0.8948 cos(0.7069t) − 1.6136 sin(0.7069t))e^{0.5907t} + 1.7903e^{3.1883t} + 0.1045e^{−0.3696t};
       (1.3372 cos(0.7069t) + 1.1688 sin(0.7069t))e^{0.5907t} − 0.3854e^{3.1883t} + 1.0482e^{−0.3696t} − 1]
and
    y1(t) = φ1(t, 0, x(0)) = (0.7214 cos(0.7069t) − 1.8684 sin(0.7069t))e^{0.5907t} + 0.5615e^{3.1883t} + 0.2829e^{−0.3696t},
    y2(t) = φ4(t, 0, x(0)) = (1.3372 cos(0.7069t) + 1.1688 sin(0.7069t))e^{0.5907t} − 0.3854e^{3.1883t} + 1.0482e^{−0.3696t} − 1.
(b) The transfer function matrix is
    H(s) = C(sI − A)⁻¹B = [1/(s⁴ − 4s³ + 3s² − s − 1)]·[−s, 1; s, s − s²].
Exercise 3.28.
(a) (i) We can easily show that
    Aᵏ = [1, 2k; 0, 1] ⇒ CAᵏB = 6k + 5.
Then
    y(k) = Σ_{i=0}^{k−1} CA^{k−(i+1)}Bu(i) = Σ_{i=0}^{k−1} CAⁱB = Σ_{i=0}^{k−1} (6i + 5) = 6·k(k − 1)/2 + 5k = 3k² − 3k + 5k = 3k² + 2k, k ≥ 1.
Since y(0) = Cx(0) = 0, we easily see that y(k) = 3k² + 2k, k ≥ 0.
(ii)
    ŷ(z) = C(zI − A)⁻¹Bû(z) = [5/(z − 1) + 6/(z − 1)²]·[z/(z − 1)] = 5z/(z − 1)² + 6z/(z − 1)³
    ⇒ y(k) = 5k p(k) + 6·[k(k − 1)/2]p(k) = (5k + 3k² − 3k)p(k) = (3k² + 2k)p(k),
as before. Easy to plot.
(b)
    y(0) = Cx(0) = C[x1, x2]ᵀ = x1 + x2 = 1
    y(1) = Cx(1) = CA[x1, x2]ᵀ = x1 + 3x2 = 1
    ⇒ x(0) = [1, 0]ᵀ.
Note that the rank of the observability matrix O = [C; CA] is two, and so the system is observable; see Chapter 5.
Exercise 3.29.
(a) The response to a unit pulse input û(z) = 1 is
    y(k) = Z⁻¹[H(z)û(z)] = Z⁻¹[1/(z + 0.5)] = Z⁻¹[z⁻¹/(1 + 0.5z⁻¹)] = (−0.5)^{k−1}p(k − 1)
         = { (−1/2)^{k−1}, k = 1, 2, 3, ...;  0, k = 0 }.
Easy to plot.
(b) The response to a unit step input û(z) = z/(z − 1) is
    y(k) = Z⁻¹[(1/(z + 0.5))·(z/(z − 1))] = (1/3)Z⁻¹[1/(z + 0.5)] + (2/3)Z⁻¹[1/(z − 1)]
         = (1/3)(−1/2)^{k−1}p(k − 1) + (2/3)p(k − 1)
         = { 0, k = 0;  (1/3)(−1/2)^{k−1} + 2/3, k = 1, 2, 3, ... }.
Easy to plot.
(c) (i) Using convolution, we have y(k) = Z⁻¹(H(z)) ∗ Z⁻¹(û(z)). Since
    Z⁻¹(H(z)) = { 0, k = 0;  (−1/2)^{k−1}, k = 1, 2, 3, ... },
    Z⁻¹(û(z)) = { 1, k = 1, 2;  0, elsewhere }
(note: f(k) ∗ g(k) ≜ Σ_{l=0}^{∞} f(l)g(k − l)), we obtain for k = 0, 1, 2, 3, 4: y(k) = {0, 0, 1, 1/2, −1/4}.
(ii) Using the z-transform, we have û(z) = z⁻¹ + z⁻² = (z + 1)/z² and
    y(k) = Z⁻¹(H(z)û(z)) = Z⁻¹[(1 + z)/(z²(z + 0.5))] = Z⁻¹[(−2z + 2)/z²] + 2Z⁻¹[1/(z + 0.5)]
         = −2δ(k − 1) + 2δ(k − 2) + 2(−0.5)^{k−1}p(k − 1),
which again gives y(k) = {0, 0, 1, 1/2, −1/4} for k = 0, 1, 2, 3, 4.
(d) In view of (c)(ii), lim_{k→∞} y(k) = 0. Alternatively, from the final-value theorem,
    lim_{k→∞} y(k) = lim_{z→1} (1 − z⁻¹)ŷ(z) = lim_{z→1} [(1 − z⁻¹)(z⁻¹ + z⁻²)z⁻¹ / (1 + 0.5z⁻¹)] = 0.
Exercise 3.30.
Since x(k) = x(0), we have x(0) = Ax(0) + Bu(k), k ≥ 0 ⇒ (I − A)x(0) = Bu(k), k ≥ 0. In order to have a solution, we need rank(B) = rank[B, (I − A)x(0)]. Then u(k) = u(0) is determined from the relation above.
For our example, rank[1/2; 1/2] = rank[1/2 1; 1/2 1] = 1, and (I − A)x(0) = Bu(0) ⇒ [1; 1] = [1/2; 1/2]u(0) ⇒ u(k) = u(0) = 2, k ≥ 0.
Exercise 3.31.
(a) Necessity: Since x(k) = Aᵏx(0), in order to have x(k) = 0 for k ≥ n, we need Aᵏ = 0, k ≥ n. We know that Avi = λi vi ⇒ Aᵏvi = λiᵏvi; Aᵏ = 0 and vi ≠ 0 then imply λiᵏvi = 0 ⇒ λiᵏ = 0 ⇒ λi = 0. Therefore, the eigenvalues have to be at the origin.
Sufficiency: A = P⁻¹JP, where J is the Jordan canonical form. Let J = diag(Ji) and Ji = diag(Jij). For a (t × t) block
    Jijᵏ = [λiᵏ, kλi^{k−1}, ···, (k choose t−1)λi^{k−(t−1)}; 0, λiᵏ, ···; ...; 0, 0, ···, λiᵏ],
we see that if λi = 0, then Jijᵏ = 0 for k ≥ t. Hence, if t_max is the maximum dimension of all the Jij blocks, then Jijᵏ = 0, k ≥ t_max ⇒ Jᵏ = 0, k ≥ t_max ⇒ Aᵏ = 0, k ≥ t_max.
(b) A1 = [0 1 0; 0 0 1; 0 0 0] is already in Jordan canonical form with λi = 0, i = 1, 2, 3, and t_max = 3. Therefore, we should have A1³ = 0. This can be verified by direct calculation.
A2 = [0 1 0; 0 0 0; 0 0 0] is already in Jordan canonical form, with t_max = 2. Therefore, we should have A2² = 0. This can be verified by direct calculation.
A3 = [0 0 0; 0 0 1; 0 0 0] is already in Jordan canonical form, with t_max = 2. Therefore, A3² = 0, as can easily be verified.
Exercise 3.32.
(a)
    ŷ(s) = H(s)û(s) = 4/[s(s² + 2s + 2)] = 2/s − 2(s + 2)/(s² + 2s + 2)
    ⇒ y(t) = 2p(t) − 2e^{−t}(cos t + sin t)p(t) = 2 − 2e^{−t}(cos t + sin t), t ≥ 0.
Easy to plot.
(b) (i) A = [0 1; −2 −2], B = [0; 1], C = [4, 0].
(ii)
    e^{At} = L⁻¹{(sI − A)⁻¹} = [e^{−t}(cos t + sin t), e^{−t}sin t; −2e^{−t}sin t, e^{−t}(cos t − sin t)]
    ⇒ Ā = e^{AT} = [e^{−T}(cos T + sin T), e^{−T}sin T; −2e^{−T}sin T, e^{−T}(cos T − sin T)],
    B̄ = ∫₀ᵀ e^{Aτ}B dτ = [1/2 − (1/2)e^{−T}(cos T + sin T); e^{−T}sin T],
    C̄ = C = [4, 0].
(c) y(k) = Σ_{i=0}^{k−1} C̄Ā^{k−(i+1)}B̄u(i). Doing so for this specific example (u(k) = 1, k ≥ 0), we get y(k) = y(t)|_{t=kT} for any value of T. In general, the smaller the T, the better the approximation. (See also the solution and plots of Exercise 3.36.)
(d)
    H̄(z) = C̄(zI − Ā)⁻¹B̄ = {[2 − 2e^{−T}(sin T + cos T)]z + 2e^{−2T} − 2e^{−T}(cos T − sin T)} / {z² − 2ze^{−T}cos T + e^{−2T}}.
We have
    M(k) = L⁻¹[H(s)/s]|_{t=kT} = 2 − 2e^{−kT}(cos kT + sin kT)
    Z[M(k)] = 2z/(z − 1) − 2z(z − e^{−T}cos T + e^{−T}sin T)/(z² − 2ze^{−T}cos T + e^{−2T})
    ⇒ [(z − 1)/z]Z[M(k)] = 2 − 2(z − 1)(z − e^{−T}cos T + e^{−T}sin T)/(z² − 2ze^{−T}cos T + e^{−2T}) = H̄(z).
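A sketch of this discretization in Python/SciPy, using the standard block-matrix trick for Ā = e^{AT} and B̄ = (∫₀ᵀ e^{Aτ} dτ)B; the sampling period T = 0.1 is an assumption for illustration.

    # Discretize the system of Exercise 3.32 and check the closed forms above.
    import numpy as np
    from scipy.linalg import expm

    A = np.array([[0.0, 1.0], [-2.0, -2.0]])
    B = np.array([[0.0], [1.0]])
    T = 0.1                                  # assumed sampling period

    # expm of [[A, B], [0, 0]] gives [[Abar, Bbar], [0, I]].
    n, m = A.shape[0], B.shape[1]
    M = np.zeros((n + m, n + m))
    M[:n, :n], M[:n, n:] = A, B
    Md = expm(M * T)
    Abar, Bbar = Md[:n, :n], Md[:n, n:]

    eT = np.exp(-T)
    Abar_cf = eT * np.array([[np.cos(T) + np.sin(T), np.sin(T)],
                             [-2*np.sin(T), np.cos(T) - np.sin(T)]])
    Bbar_cf = np.array([[0.5 - 0.5*eT*(np.cos(T) + np.sin(T))],
                        [eT*np.sin(T)]])
    print(np.allclose(Abar, Abar_cf), np.allclose(Bbar, Bbar_cf))  # True True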
Exercise 3.33.
(a) With (A, B, C, D) as given, we have
    ẋ1 = x2 ⇒ sx̂1 = x̂2
    ...
    ẋ_{n−1} = xn ⇒ x̂n = sx̂_{n−1} = s^{n−1}x̂1
    ẋn = −a0x1 − ··· − a_{n−1}xn + u ⇒ sx̂n = sⁿx̂1 = −a0x̂1 − sa1x̂1 − ··· − s^{n−1}a_{n−1}x̂1 + û
    ⇒ x̂1(sⁿ + a_{n−1}s^{n−1} + ··· + a0) = û.
Also,
    ŷ = b0x̂1 + b1x̂2 + ··· + b_{n−1}x̂n = x̂1(b0 + b1s + ··· + b_{n−1}s^{n−1})
    ⇒ ŷ/û = (b_{n−1}s^{n−1} + ··· + b1s + b0)/(sⁿ + ··· + a1s + a0).
See also Lemma 8.19 in Chapter 8. Since H(s) is scalar, H(s) = Hᵀ(s) and
    C̃(sI − Ã)⁻¹B̃ + D̃ = Bᵀ(sI − Aᵀ)⁻¹Cᵀ + D = [C(sI − A)⁻¹B + D]ᵀ = Hᵀ(s) = H(s).
Therefore, (Ã, B̃, C̃, D̃) is another realization of H(s).
(b) (i) A = [0 1; 0 0], B = [0; 1], C = [1, 0], D = 0.
(ii) A = [0 1; −wn² −2ζwn], B = [0; 1], C = [wn², 0], D = 0.
(iii) H(s) = 1 + 4s/(s² − 2s + 1) ⇒ A = [0 1; −1 2], B = [0; 1], C = [0, 4], D = 1.
Exercise 3.34.
(a) H(t, 0) = L⁻¹(H(s)).
(b) We know that for t ≥ 0,
    H(t, 0) = Dδ(t) + Ce^{At}B = Dδ(t) + C(Σ_{j=0}^{∞} (tʲ/j!)Aʲ)B.
Then H(s) = L(H(t, 0)) = D + CB(1/s) + CAB(1/s²) + ··· = D + Σ_{k=1}^{∞}[CA^{k−1}B]s^{−k}.
(c) Follow the hint.
Exercise 3.35.
(a) The proof of this result can be found in any undergraduate textbook on signals and systems. Here is an outline of the proof:
Let the impulse response be h(t) = L⁻¹(H(s)) and let ũ(t) = ke^{j(w0 t + φ)}. Assuming zero initial conditions (their effect on the output will die out at steady state since the system is BIBO stable), we obtain
    ỹ(t) = ∫₀ᵗ h(τ)ke^{j(w0(t−τ)+φ)} dτ = [∫₀ᵗ h(τ)e^{−jw0 τ} dτ][ke^{j(w0 t + φ)}] = H(w0)ũ(t).
In view of ũ(t) = k cos(w0 t + φ) + jk sin(w0 t + φ), the steady-state response to u(t) = k sin(w0 t + φ) = Im(ũ(t)) is y(t) = Im(ỹ(t)). Write H(w0) = |H(w0)|e^{jθ}, from which
    y_ss(t) = k|H(w0)| sin(w0 t + φ + θ).
(b) Completely analogous to the scalar case.
Exercise 3.36
(a) The response of the double integrator, using the different state-space representations, is shown in Figure 3.1.
Figure 3.1: Unit step response of the double integrator. (a) Continuous-time representation; (b) discrete-time representation, Ts = 0.5 s; (c) discrete-time representation, Ts = 1 s; (d) discrete-time representation, Ts = 5 s
(b) The discrete-time representation of the double integrator is
    Ā = e^{AT} = [1, T; 0, 1],  B̄ = (∫₀ᵀ e^{Aτ} dτ)B = [T²/2; T],  C̄ = C = [1, 0].
(c) As the sampling period increases, the discretization effects become obvious in the unit step response.
Exercise 3.37
(a) The eigenvalues of A are λ1,2 = −0.0286 ± i0.5303, λ3,4 = −0.0250 ± i0.3152. The right eigenvectors of A are the columns of
    Q = [v1, v2, v3, v4]
      = [−0.6242 − i0.0189, −0.6242 + i0.0189, 0.6724 − i0.0495, 0.6724 + i0.0495;
         0.0279 − i0.3305, 0.0279 + i0.3305, −0.0012 + i0.2132, −0.0012 − i0.2132;
         0.6242 + i0.0189, 0.6242 − i0.0189, 0.6724 − i0.0495, 0.6724 + i0.0495;
         −0.0279 + i0.3305, −0.0279 − i0.3305, −0.0012 + i0.2132, −0.0012 − i0.2132],
and the left eigenvectors are the rows of
    Q⁻¹ = [ṽ1; ṽ2; ṽ3; ṽ4]
        = [−0.3995 + i0.0337, 0.0228 + i0.7546, 0.3995 − i0.0337, −0.0228 − i0.7546;
           −0.3995 − i0.0337, 0.0228 − i0.7546, 0.3995 + i0.0337, −0.0228 + i0.7546;
           0.3720 − i0.0021, 0.0864 − i1.1731, 0.3720 − i0.0021, 0.0864 − i1.1731;
           0.3720 + i0.0021, 0.0864 + i1.1731, 0.3720 + i0.0021, 0.0864 + i1.1731].
Assuming that f1 = f2 = 0, we have
    x(t) = e^{At}x0 = (Σ_{i=1}^{4} Ai e^{λi t})x0,
where Ai = vi ṽi, i = 1, 2, 3, 4.
(b) The states for x(0) = [1, 0, −0.5, 0]ᵀ and f1 = f2 = 0 are plotted in Figure 3.2.
(c) The transfer function of the system is
    H(s) = [1/D(s)]·[s² + 0.0536s + 0.1910, −0.0036s − 0.0910; s³ + 0.0536s² + 0.1910s, −0.0036s² − 0.0910s],
    D(s) = s⁴ + 0.1072s³ + 0.3849s² + 0.0198s + 0.0282.
Figure 3.2: x(0) = [1, 0, −0.5, 0]ᵀ and f1 = f2 = 0
Figure 3.3: x(0) = [0, 0, 0, 0]ᵀ and f1(t) = δ(t), f2 = 0
(d) For zero initial conditions, f1(t) the unit impulse, and f2(t) = 0, the states are plotted in Figure 3.3. The system is asymptotically stable and will reach the equilibrium.
(e) Using the same approach, we can obtain the corresponding results for the different values of α. The plots of the states for the cases α = 0.1 and α = 5 follow in Figures 3.4 and 3.5, respectively. In general, for large α the system becomes more oscillatory, and the effect of the unit impulse lasts longer.
Exercise 3.38
(a) (i) For c = 0 the eigenvalues of A are λ1,2 = ±i1.9847, λ3,4 = ±16.3318, (ii) for c = 375
(N sec/m), λ1,2 = −0.4112 ± i1.9831, λ3,4 = −6.3388 ± i14.6955, (iii) for c = 750 (N sec/m),
λ1,2 = −12.5775 ± i7.8279, λ3,4 = −0.9225 ± i1.9840, (iv) for c = 1125 (N sec/m), λ1 = −32.9903,
λ2 = −3.6004, λ3,4 = −1.9546 ± i2.2417.
(b) The plots of the states when the input u(t) = (1/6) sin(2πvt/20) and x(0) = [0, 0, 0, 0]ᵀ are shown in Figures 3.6, 3.7, 3.8, and 3.9 for the cases c = 0, v = 9, 36 (m/sec) and c = 750 (N sec/m), v = 9, 36 (m/sec). The plots are similar for the remaining cases. From these plots, we can see that as the damping constant of the dashpot increases, the amplitude of the oscillations decreases. An increase in the velocity of the car causes an increase in the frequency of the oscillations and a decrease of the amplitudes.
Figure 3.4: α = 0.1, (i) x(0) = [1, 0, −0.5, 0]T and f1 = f2 = 0, (ii) x(0) = [0, 0, 0, 0]T and
f1 (t) = δ(t), f2 = 0
Figure 3.5: α = 5, (i) x(0) = [1, 0, −0.5, 0]T and f1 = f2 = 0, (ii) x(0) = [0, 0, 0, 0]T and f1 (t) =
δ(t), f2 = 0
Figure 3.6: (i) c = 0 (N sec/m) and v = 9 (m/sec)
Figure 3.7: (i) c = 0 (N sec/m), and v = 36 (m/sec)
Figure 3.8: (i) c = 750 (N sec/m) and v = 9 (m/sec)
Figure 3.9: (i) c = 750 (N sec/m) and v = 36 (m/sec)
Chapter 4
Exercise 4.1
Solving
    x1 − x2 + x3 = 0
    2x1 + 3x2 + x3 = 0
    3x1 + 2x2 + 2x3 = 0,
we find that the set of equilibria is {(−4v, v, 5v)ᵀ : v ∈ R}.
Exercise 4.2
Clearly, (0, 0)ᵀ is an equilibrium. For x1 ≠ 0, solve
    x2 = 0,  x1 sin(1/x1) = 0.
We get x1 = 1/(kπ), x2 = 0, k ∈ N\{0}. Hence, the set of equilibria is
    {(1/(kπ), 0)ᵀ : k ∈ N\{0}} ∪ {(0, 0)ᵀ}.
Exercise 4.3
Clearly, x = 0 and x = 1 are two equilibria. Solving (4.90) directly with x(t0) = x0, we obtain
    x(t) = x0 / [x0 − (x0 − 1)e^{t−t0}].
For the equilibrium x = 0, for every 0 < ε < 1/2, let δ(ε) = ε/2. Then if |x0| < δ(ε) = ε/2 < 1/4, we have, for t ≥ t0,
    |x(t)| = |x0| / |x0 + (1 − x0)e^{t−t0}| ≤ |x0| / (|(1 − x0)e^{t−t0}| − |x0|) ≤ |x0| / (|1 − x0| − |x0|) ≤ |x0| / (1 − 2|x0|) ≤ 2|x0| < ε.
Hence, x = 0 is stable by Definition 4.6.
Further, for η(t0) = ε/2 (ε < 1/2), let T(ε) = ln 2. Then for |x0| < η(t0), t > t0 + T(ε), we have
    |x(t)| ≤ |x0| / (|1 − x0|e^{t−t0} − |x0|) ≤ (ε/2) / ((1 − ε/2)e^{t−t0} − 1/2) < ε / (e^{ln 2} − 1) = ε.
Therefore, by Definition 4.7, x = 0 is asymptotically stable.
For the equilibrium x = 1, let x̃0 = x0 − 1. We obtain
    x̃(t) = x̃0 e^{t−t0} / (1 + x̃0 − x̃0 e^{t−t0}), t ≥ t0.
Clearly, Definition 4.6 is not satisfied. No matter how small δ = δ(ε, t0) > 0 is, there exists x̃0 = δ/2 such that for t = t0 + ln[(1 + δ)/δ] we always have
    x̃0 e^{t−t0} / (1 + x̃0 − x̃0 e^{t−t0}) = 1 + δ.
Hence, x = 1 is not stable, i.e., it is unstable.
Exercise 4.4
We have x = Py, where P is nonsingular. From this we have y = P⁻¹x, ‖x‖ ≤ ‖P‖ ‖y‖, and ‖y‖ ≤ ‖P⁻¹‖ ‖x‖. It now follows from the definitions of stability (resp., asymptotic stability and instability) that the equilibrium x = 0 of (4.16) is stable (resp., asymptotically stable, unstable) if and only if y = 0 of (4.19) is stable (resp., asymptotically stable, unstable).
Exercise 4.5
Since
    1 > 0, 5 > 0, 10 > 0,
    det[1 2; 2 5] = 1 > 0,  det[5 −1; −1 10] = 49 > 0,
    det[1 2 1; 2 5 −1; 1 −1 10] = 0,  det[1 1; 1 10] = 9 > 0,
by Proposition 4.20, A is positive semidefinite.
Exercise 4.6
For
    P = (1/180)[−34 43; 43 −16],
we have AᵀP + PA = I. Hence, by Theorem 4.26, the trivial solution of the given system is unstable.
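Lyapunov equations of this form can be solved numerically with SciPy. In the sketch below, the matrix A is an assumed stand-in (the exercise's A is not restated in this manual); an indefinite P solving AᵀP + PA = I > 0 indicates instability of the origin.

    # Solve A^T P + P A = I with SciPy (sketch).
    import numpy as np
    from scipy.linalg import solve_continuous_lyapunov

    A = np.array([[0.0, 1.0], [2.0, -1.0]])   # assumed example (eigenvalues 1, -2)
    # solve_continuous_lyapunov(a, q) solves a X + X a^H = q; take a = A^T.
    P = solve_continuous_lyapunov(A.T, np.eye(2))
    print(P)
    print(np.linalg.eigvalsh(P))   # mixed signs: P indefinite, unstable origin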
Exercise 4.7
Clearly, x = 0 and x = 1 are the only equilibria of ẋ = −x + x². Since
    (∂f/∂x)|_{x=0} = −1,  (∂f/∂x)|_{x=1} = 1,
by Theorem 4.32 and Theorem 4.33, respectively, we know that x = 0 is exponentially stable and x = 1 is unstable.
Exercise 4.8
The linearization of (4.91) is given by
    ẋ1 = x2,  ẋ2 = −x1.
Let v(x1, x2) = x1² + x2². Along the solutions of the linearization we have v̇(x1, x2) = 0. Hence, by Theorem 4.22, the trivial solution of the linearization is stable.
Now, along the solutions of (4.91), we have
    v̇(x1, x2) = 2(x1² + x2²)² = 2v².
Hence
    v(t) = x1²(t) + x2²(t) = (x1²(0) + x2²(0)) / [1 − 2(x1²(0) + x2²(0))(t − t0)], t0 = 0,
which implies that (4.91) is unstable.
Exercise 4.9
Consider
    ẇ = [−1 0; 1 −1]w.
The eigenvalues of this system are λ1 = λ2 = −1, so the equilibrium w = 0 is exponentially stable. Therefore, by Corollary 4.38, the system
    ẋ = Ax + Bu, y = Cx
is BIBO stable.
Exercise 4.10
(a) Solving
    x2 + |x1| = 0,  −x1 + |x2| = 0,
we find that the set {(α, −α)ᵀ ∈ R² : α ≥ 0} consists of the equilibrium points of system (i).
(b) Solving
    x1 x2 − 1 = 0,  2x1 x2 + 1 = 0,
we see that there exists no equilibrium for this system.
Exercise 4.11
The proof of Theorem 4.43 is outlined in the paragraph preceding that theorem. Please fill in
the details of the proof.
Exercise 4.12
Choose v(x1, x2) = x1² + x2² = ‖x‖2². Then, with x(k) ≜ (x1(k), x2(k))ᵀ,
    Dv(x1, x2) = [(cos φ)x1 + (sin φ)x2]² + [(−sin φ)x1 + (cos φ)x2]² − (x1² + x2²) = 0
    ⇒ Dv(x(k)) = v(x(k + 1)) − v(x(k)) = 0
    ⇒ v(x(k + 1)) = v(x(k)) for all k ≥ k0 ≥ 0
    ⇒ ‖x(k + 1)‖2² = ‖x(k)‖2² for all k ≥ k0 ≥ 0
    ⇒ ‖x(k + 1)‖2 = ‖x(k)‖2 = ‖x(k0)‖2.
For every ε > 0, if 0 < δ < ε, then ‖x(k0)‖2 < δ implies that ‖x(k)‖2 < ε for all k ≥ k0. It follows from Definition 4.6 that the equilibrium x = 0 is stable.
Exercise 4.13
Choose v(x) = x². Then
    Dv(x) = (sin x)² − x² ≤ 0
    ⇒ Dv(x(k)) = v(x(k + 1)) − v(x(k)) ≤ 0
    ⇒ v(x(k + 1)) ≤ v(x(k))
    ⇒ x(k + 1)² ≤ x(k)²
    ⇒ |x(k + 1)| ≤ |x(k)| for all k ≥ k0.
For every ε > 0, if |x(k0)| ≤ δ < ε, then |x(k)| < ε for all k ≥ k0. It follows from Definition 4.6 that the equilibrium x = 0 is stable.
Exercise 4.14
Choose v(x1, x2) = v(x) = x1² + x2² = ‖x‖2². Then
    Dv(x1, x2) = (x1 + x2(x1² + x2²))² + (x2 − x1(x1² + x2²))² − x1² − x2²
               = (x1² + x2²)(x1² + x2²)² = (x1² + x2²)³ > 0 for all x ≠ 0
    ⇒ Dv(x(k)) = v(x(k + 1)) − v(x(k)) > 0 for all k ≥ k0, x ≠ 0
    ⇒ ‖x(k + 1)‖2² > ‖x(k)‖2² for all k ≥ k0
    ⇒ ‖x(k + 1)‖2 > ‖x(k)‖2 for all k ≥ k0.
Therefore, for every x(k0), ‖x(k, k0)‖2 will increase for all k ≥ k0. It follows from Definitions 4.6 and 4.9 that the equilibrium x = 0 is unstable.
Exercise 4.15
The eigenvalues of A = [0 1; −6 5] are λ1 = 2, λ2 = 3. By calculation we obtain
    [1 1; 2 3]⁻¹[0 1; −6 5][1 1; 2 3] = [2 0; 0 3].
Hence,
    Φ(k) = Aᵏ = [1 1; 2 3][2ᵏ 0; 0 3ᵏ][1 1; 2 3]⁻¹ = [3·2ᵏ − 2·3ᵏ, −2ᵏ + 3ᵏ; 3·2^{k+1} − 2·3^{k+1}, −2^{k+1} + 3^{k+1}].
The two columns of Φ(k) constitute a basis of the solution space. Clearly, since 2 > 1 and 3 > 1, the trivial solution of this system is not stable.
Exercise 4.16
(Sufficiency) If all the eigenvalues of A satisfy |λ| < 1, then, since A is similar to its Jordan form, we know by direct computation that lim_{k→∞} ‖Aᵏ‖ = 0.
(Necessity) For any eigenvalue λ of A with eigenvector p, we have Ap = λp. Hence, successively, we obtain Aᵏp = λᵏp. Now, since
    0 ≤ ‖Aᵏp‖ ≤ ‖Aᵏ‖·|p|,
we must have 0 = lim_{k→∞} |λᵏp| = lim_{k→∞} |λ|ᵏ·|p|. Hence lim_{k→∞} |λ|ᵏ = 0, which implies that |λ| < 1.
Exercise 4.17
Consider the solution with the initial value x(0) = α(1, ..., 1)ᵀ (α ≠ 0). We know from the original system that
    [x_{n−1}(k + 1); x_n(k + 1)] = [1 1; 0 1][x_{n−1}(k); x_n(k)].
Hence,
    [x_{n−1}(k); x_n(k)] = [1 1; 0 1]ᵏ[x_{n−1}(0); x_n(0)] = [1 k; 0 1][α; α] = [(k + 1)α; α],
which implies that lim_{k→∞} |x(k)| = ∞. Therefore, the equilibrium x = 0 is unstable.
Exercise 4.18
(a) A has eigenvalues λ1 = 1 with multiplicity n1 = 2 and λ2 = −1 with multiplicity n2 = 1. Since
    (zI − A)⁻¹ = [1/((z − 1)(z² − 1))]·[z² − 1, z − 17, −2z + 5; 0, z² − 1, 3z − 3; 0, 9z − 9, (z − 1)²],
we know that
    lim_{z→1}[(z − 1)²(zI − A)⁻¹] = 0.
Hence, by Theorem 4.47, x = 0 is stable.
(b) This system has the same eigenvalues as the system in (a), and
    (zI − A)⁻¹ = [1/((z − 1)(z² − 1))]·[z² − 1, −18, 2(z − 1); 0, z² − 1, 3(z − 1); 0, 9(z − 1), (z − 1)²].
Similarly, as in (a), it follows that x = 0 is stable.
Exercise 4.19
If x = 0 of x(k + 1) = eA x(k) is asymptotically stable, then all the eigenvalues of eA lie inside
the unit circle, which implies that all the eigenvalues of A have negative real parts, which is also
equivalent to the asymptotic stability of the equilibrium x = 0 of ẋ = Ax.
Exercise 4.20
Choose B = −I and evaluate (4.75). It follows from Theorem 4.49 that the trivial solution is
unstable.
Exercise 4.21
The linearization of the system
    x(k + 1) = (1/2)x(k) + (2/3)sin x(k)
is
    x(k + 1) = (7/6)x(k).
Hence, by Theorem 4.54, the trivial solution is unstable.
Exercise 4.22
Either by Theorem 4.56 or by Theorem 4.57, we know that this system is not BIBO stable.
Exercise 4.23
The proof of Theorem 4.47 is outlined in the paragraph preceding that theorem. Please fill in
the details of the proof.
Exercise 4.24
To prove Theorem 4.49, please fill in the details given in the hint in the paragraph following that
theorem.
Exercise 4.25
The proof of Theorem 4.54 is given in the book Linear Systems by P.J. Antsaklis and A.N. Michel,
Birkhäuser, 2006. Alternatively, fill in the details given in the hint in the paragraph following that
theorem in the textbook.
Chapter 5
Exercise 5.1.
(a) In view of the Cayley–Hamilton theorem the columns of Ak B for k ≥ n are linearly dependent
on the columns of Cn = [B, AB, . . . , An−1 B] and so range(Ck ) = range(Cn ) for k ≥ n, since range(Ck )
is spanned by the columns of Ck . That range(Ck ) ⊂ range(Cn ) for k < n is obvious.
(b) For k ≥ n, null(Ok) = null(On) in view of the Cayley–Hamilton theorem. When k < n, if x ∈ null(On), then x ∈ null(Ok), since x satisfies CAⁱx = 0 for i = 0, 1, ..., n − 1. Therefore, null(Ok) ⊃ null(On), k < n.
Exercise 5.2.
(a) Since rank[B, AB, A2 B, A3 B] = 4, the system is controllable from u. Note also that (A, B)
is in controller form. Since rank[C T , (CA)T , (CA2 )T (CA3 )T ] = 4, the system is observable from
y.
(b) If the radial thruster fails, that is u1 = 0, consider the controllability matrix
C2 = [b2 , Ab2 , A2 b2 , A3 b2 ],
where b2 = [0, 0, 0, 1]T . Since rank(C2 ) = 4, the system is controllable from u2 .
Similarly, if the tangential thruster fails, consider
C1 = [b1 , Ab1 , A2 b1 , A3 b1 ],
where b1 = [0, 1, 0, 0]T . Since rank(C1 ) < 4, the system is not controllable from u1 .
(c) For C1 = (1 0 0 0), we have rank[C1T , (C1 A)T , (C1 A2 )T , (C1 A3 )T ]T < 4. Therefore,
the system is not observable from y1 .
For C2 = [0 0 1 0], we have rank[C2T , (C2 A)T , (C2 A2 )T , (C2 A3 )T ]T = 4. Therefore, the
system is observable from y2 .
Exercise 5.3.
(a) In view of Corollary 5.15,
    u(t) = Bᵀe^{Aᵀ(T−t)}W_r⁻¹(0, T)(x1 − e^{AT}x0).
For x0 = [a, b]ᵀ, x1 = [0, 0]ᵀ, we obtain
    u(t) = (1/∆){ e^{t/2}[(b/2)e^{−3T/2}(1 − e^{−3T/2}) − (a/3)e^{−T}(1 − e^{−2T})]
                 + eᵗ[(a/3)e^{−3T/2}(1 − e^{−3T/2}) − (b/4)e^{−2T}(1 − e^{−T})] },
where
    ∆ = 1/72 − (1/8)e^{−T} + (2/9)e^{−3T/2} − (1/8)e^{−2T} + (1/72)e^{−3T}.
(b) Easy to obtain plots. Note that the magnitude of u(t) gets larger as T decreases.
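A minimal-energy-control sketch of this computation in Python/SciPy, via the reachability Gramian W_r(0, T) = ∫₀ᵀ e^{As}BBᵀe^{Aᵀs} ds. The A and B below are assumptions chosen only to be consistent with the e^{t/2} and eᵗ modes appearing above; the exercise's exact system is not restated in this manual.

    # Steer x0 to x1 over [0, T] with the Gramian-based open-loop input.
    import numpy as np
    from scipy.linalg import expm
    from scipy.integrate import quad_vec

    A = np.array([[0.5, 0.0], [0.0, 1.0]])   # assumed (modes 1/2 and 1)
    B = np.array([[1.0], [1.0]])             # assumed input matrix
    T = 2.0
    x0, x1 = np.array([1.0, -1.0]), np.zeros(2)

    Wr, _ = quad_vec(lambda s: expm(A*s) @ B @ B.T @ expm(A.T*s), 0.0, T)
    eta = np.linalg.solve(Wr, x1 - expm(A*T) @ x0)
    u = lambda t: B.T @ expm(A.T*(T - t)) @ eta   # minimum-energy input

    # Verify: x(T) = e^{AT} x0 + int_0^T e^{A(T-s)} B u(s) ds ~ x1.
    xT = expm(A*T) @ x0 + quad_vec(
        lambda s: (expm(A*(T - s)) @ B @ u(s)).ravel(), 0.0, T)[0]
    print(xT)   # approximately [0, 0]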
Exercise 5.4.
(a) Since x1 ∈ range[B, AB], it can be reached in two steps, with u(0) = 3, u(1) = −1.
(b) A basis for the reachability subspace range[B, AB, A²B] is {[0; 1; 1], [1; 0; 0]}. Any
    x = [0 1; 1 0; 1 0][a; b] = [b; a; a]
will be reachable.
(c) A basis for the unobservable subspace is [0; 0; 1], and x = [0; 0; a] is unobservable.
(d) Since x1 ∈ range(C), x1 is reachable. The time required to transfer the state from [0, 0]ᵀ to x1 is T, where T is any positive real. The input can be computed (Theorem 5.12) by
    u(t) = Bᵀe^{Aᵀ(T−t)}η1,
where η1 is such that W_r(0, T)η1 = x1. Here
    W_r(0, T) = [e^{2T}(T²/2 − T/2 + 1/4) − 1/4, e^{2T}(T/2 − 1/4) + 1/4; e^{2T}(T/2 − 1/4) + 1/4, (1/2)(e^{2T} − 1)],
where rank W_r(0, T) = 2 for any T > 0. Since range W_r(0, T) = range(C) (Lemma 5.10), a solution η1 exists.
(a) =⇒ If rank[D CB · · · CAn−1 B] = p, full rank, then we can always find u(0), · · · , u(n),
so that


!
u(n)
n−1
X


CAn−(i+1) Bu(i) + Du(n)
y1 − CAn x(0) = [D CB · · · CAn−1 B]  ...  =
u(0)
i=0
that is, any y1 can be reached at the nth step (y(n) = y1 ). Therefore, the system is output reachable.
Note that y0 = y(0) = Cx(0) + Du(0). So the condition is satisfied.
⇐= If the system is output reachable, then it is easy to show that each y1 can be reached in
at most n steps. Therefore, from the equation above, we conclude that the condition
rank[D CB · · · CAn−1 B] = p
is also necessary.
(b) If (A, B) is reachable, then rank(C) = n, where C is the controllability matrix. Since D = 0
and [CB · · · CAn−1 B] = C[B · · · An−1 B] = CC, we have (from Sylvester’s inequality) that
n + rank(C) − n ≤ rank(CC) ≤ min{rank(C), rank(C)}
=⇒
rank(C) ≤ rank(CC) ≤ rank(C)
=⇒
rank(CC) = rank(C)
where we have made the assumption that p ≤ n. From the above equation, we see that the
system is output reachable (⇔ rank(CC) = p), if and only if rank(C) = p.
49
(c) (i) Since rank[CB CAB CA2 B] = 1, the system is output reachable. Since
rank[B AB A2 B] = 2, the system is not state reachable.
(ii)
When x(0) = [0, 0, 0]T , then one step is enough. For u(0) = 3, we have x(1) =
T
[3, 0, 3] =⇒ y(1) = 3.
When x(0) = [1, −1, 2]T , one step is enough again, as expected. For u(0) = 0, we have x(1) =
[1, 2, −2]T =⇒ y(1) = 3.
Exercise 5.6.
(a) =⇒
If the rows of H(s) are linearly independent, we will show that the system is output
reachable. Suppose it is not output reachable. Then
rank[D CB · · · CAn−1 B] < p =⇒
∃ x 6= 0 with xT ∈ Rp so that
x[D CB · · · CAn−1 B] = 0
=⇒ xD = xCB = · · · = xCAn−1 B = 0
From thePCayley–Hamilton theorem, the above also implies that xCAλ B = 0 ∀λ ≥ n. Since
∞
H(s) = D+ k=1 CAk−1 Bs−k , we obviously have from the above that ∃ x 6= 0 such that xH(s) = 0.
This implies that the rows of H(s) are not linearly independent, which is a contradiction. Therefore,
the initial assumption is false. Hence, the system is output reachable.
⇐=
If the system is output reachable, then any vector yp ∈ Rp can be reached in finite time.
Hence we can find an appropriate input u(s) so that yp = y(tp ) = L−1 {H(s)u(s)} for some t = tp .
Suppose that the rows of H(s) are linearly dependent, that is ∃ x 6= 0 so that xH(s) = 0.
Consider now
xyp
= xL−1 {H(s)u(s)}|t=tp
=⇒ xyp
= L−1 {xH(s)u(s)}|t=tp
= 0
Since this can be shown for every yp , we realize that x has to be the zero vector, which implies
that the initial assumption x 6= 0 is not true. Therefore, we conclude that the rows of H(s) are
linearly independent.
Since the rows are linearly independent over the field of complex numbers
H(s) =
1
s+2
s
s+1
is output reachable.
(b) The system of Exercise 5.5, H(z) =
1
z−1
is output reachable.
Exercise 5.7.
One could write Kirchhoff’s laws to derive an {A, B, C, D} representation and then show the
results. Alternatively, consider as state variable x the voltage across the capacitor. If x(t0 ) = 0,
then x(t) = 0 for all t ≥ t0 , no matter what input is applied. This is due to the symmetry of the
network and the fact that the input has no effect on the voltage across the capacitor. Therefore, the
system is not state reachable.
If the input is zero, then in view of the symmetry of the network, the output is identically zero,
no matter what the initial voltage across the capacitor is. We know the input and the output (both
zero), but we can not determine the initial condition of the capacitor. Therefore, the system is not
observable.
Since y = u, the system is obviously output reachable.
50
CHAPTER 5.
Exercise 5.8.
(a) =⇒
If rank H(s) = p, then rank[H(s)H T (s)] = p. If HR (s) = H T (HH T )−1 then
H(s)HR (s) = Ip . Therefore, a right inverse HR (s) exists.
⇐=
Suppose ∃ HR (s) so that H(s)HR (s) = Ip . Suppose rank H(s) < p and let a nonzero
vector x ∈ Rp so that xT H(s) = 0. Then xT H(s)HR (s) = xT Ip = 0. This is true only for x = 0,
which implies that rank H(s) = p.
(b) =⇒
Assume that the system has a right inverse HR (s). Then, if we want the output to
follow a particular trajectory for t ≥ 0 described by, say ỹ(s), we can always choose u(s) = HR (s)ỹ(s).
Since we can always find such an input, the system is output function controllable.
⇐=
Assume that the system is output function controllable; this implies that for any rational
vector y(s), ∃ u(s) such that y(s) = H(s)u(s), that is y(s) ∈ range H(s). If now rank H(s) were less
than p then there would be a y(s) not in the range H(s). Therefore, rank H(s) has to be equal to p,
since y(s) is arbitrary. In view of (a) this is true if and only if H(s) has a right inverse.
(c) Dual to (a); completely analogous proof.
(d) All inputs u(s) = [u1 (s) u2 (s)]T so that
1
1
s+1
u1 (s) + u2 (s) =
s
s
s
=⇒ (s + 1) u1 (s) + u2 (s) = 1
u1 (s)
0
1
=⇒
=
+
t(s)
u2 (s)
1
−(s + 1)
where t(s) is any rational function so that u1 (s), u2 (s) are proper rational. Then u(t) = L−1 [u(s)].
Exercise 5.9.
In view of the Cayley–Hamilton theorem, we can write
H(s) = D +
∞
X
CAi Bs−(i+1) = [D CB CAB · · · CAn−1 B] r(s)
i=0
where r(s) is some rational vector (see also Exercise 3.34). Now, output function controllability
⇐⇒ rank H(s) = p. If the system were not output controllable, we would have
rank[D CB · · · CAn−1 B] < p and rank H(s) < p.
Therefore output function controllability implies output controllability.
Exercise 5.10.
(a) Since [CB
CAB] =
1
2
2
3
is of full rank, such an input exists. In view of (5.83) select
u(0) = −1, u(1) = 2.
(b) y(0) = [0, 0]T implies x(0) = [0, 0]T as well. Therefore
1
y(1) = Cx(1) = CBu(0) =
u(0).
2
Exercise 5.11.

 

y(0)

 y(1)  = 


y(2)


1
0
1
0
1
0


 




C


 =  CA  x0 = 




CA2


1
0
1
0
1
0
1
1
2
1
3
1
0
0
0
0
0
0




 x0 =⇒ x0 = [1 0 α]T , a ∈ R.



51
Exercise 5.12.
(a)


y(t)
 y (1) (t) 




..


.
(n−1)
y
(t)

=




C
CA
..
.
CAn−1






 x(t) + 


D
CB
..
.
0
D
···
···
CAn−2 B
CAn−3 B
···
 
u(t)
0
 u(1) (t)
0 
 
 
..
 
.
D
u(n−1) (t)





=⇒ Yn−1 (t) = Ox(t) + Mn Un−1 (t)
Since (A, C) is observable, rank O = n and a left inverse OL = (OT O)−1 OT . Hence
x(t) = OL [Yn−1 (t) − Mn Un−1 (t)].
(b) Let t = 0 in the equation above.
(c)



y(k)



..

 = Ox(k) + Mn 
.
y(k + n − 1)
=⇒ Yen−1 (k)
u(k)
..
.



u(k + n − 1)
en−1 (k)
= Ox(k) + Mn U
Since (A, C) is observable, the solution is as in (a) above.
52
CHAPTER 5.
Chapter 6
Exercise 6.1.
Software program.
Exercise 6.2.
(a) The first pair (A, B) is already in the standard
 form for uncontrollable systems. Therefore,
0 0 0
we have one uncontrollable eigenvalue, λ3 = 2, with  0 0 0  e2t the corresponding mode. This
0 0 1
was determined using Section 3.3.3 (relation (3.57)). Note that a continuous-time system is assumed.
Similar results are true (only 2k instead of e2t ) for the discrete-time case.
Concerning the second pair (A, B) let

0
 0
Q=
 1
0
1
0
0
0
1
1
0
0


0

0 
b=
 =⇒ A

0 
1
0
0
1
0
0
0
0
0


1
0 0
 0
0 0 
b=
, B
 0
0 0 
0
0 −1

0
1 

0 
0
in standard
form. Therefore,
we have one uncontrollable eigenvalue, λ4 = −1,


0 0 0 0
 0 0 0 0  −t

with 
 0 0 0 0  e the corresponding mode.
0 0 0 1
(b) We can easily see that in the first case
rank(λi I − A, B) = 3
rank(λ3 I − A, B) = 2
for λ1 = 1, λ2 = −1
for λ3 = 2
Therefore, λ3 = 2 is the only uncontrollable eigenvalue.
Similarly for the second pair
rank(λi I − A, B) = 4
rank(λ4 I − A, B) = 3
for λi = 0, i = 1, 2, 3
for λ4 = −1
Therefore, λ4 = −1 is the only uncontrollable eigenvalue.
53
54
CHAPTER 6.
Exercise 6.3.


0 0 1
4
 1 0 −3 −10 
 = [b1 b2 Ab2 A2 b2 ]
= 
 0 1 4
13 
0 0 −1 −3



1 1 0
1 1 0 −2
 1 0 0
 −1 0 1 3 


= 
 −3 0 0 −4  =⇒ P =  1 0 0
0 0 1
1 0 0 1


..
0
.
1
0
0


 ··· ··· ··· ··· ··· 




..
−1

.
0
1
0 
= P AP =  0



..

 0
.
0
0
1


..
1
.
1 −3 4


1
0
 ··· ··· 


0 
= PB = 

 0
 0
0 
0
1
C̄
C̄ −1
Ac
Bc

−2
1 

0 
0
The controllability indices are µ1 = 1, µ2 = 3.
Exercise 6.4.
It is not difficult to verify that Bc , Ac Bc , etc. have the given form (induction could be used for a
more formal proof). Also note that CC −1 = I where C and C −1 are in terms of αi ( and cj ) as given.
Exercise 6.5.
(i) Completely analogous to the development in (6.36)–(6.40).
(ii)
AQ2
=
=
[AB, A2 B, . . . , An B]



[B, AB, . . . , An−1 B] 

0 ···
1 ···
.. . .
.
.
0 ···
in view of the Cayley–Hamilton theorem. Also Q2 Bc2 = B.
(iii) Completely analogous to (ii).
0
0
..
.
−α0
−α1
..
.



 = Q2 Ac2

1 −αn−1
Exercise 6.6.
Following the Hint: Assume (A, b) is controllable, then v̂1 [λI−A, b] = [0, α1 ], v̂2 [λI−A, b] = [0, α2 ]
with α1 , α2 6= 0, which implies (α1−1 v̂1 − α2−1 v̂2 )b = 0. Now if v , α1−1 v̂1 − α2−1 v2 is nonzero then
v[λI − A, b] = 0 which implies the (A, b) is not controllable, contrary to the assumption. Therefore,
α1−1 v̂1 − α2−1 v̂2 = 0, which however again leads to a contradiction, since it implies either that v̂1 , v̂2
are linearly independent or α1 = α2 = 0. So (A, b) cannot be controllable.
Alternatively, assume that (A, b) is controllable and reduce it to controller form (Ac , bc ). Now
it is known (it can be easily shown) that [1, λi , . . . , λn−1
]T i = 1, . . . , n are eigenvectors of Ac
i
55
corresponding to the eigenvalues λi i = 1, . . . , n. This shows that if λ1 = λ2 = λ their corresponding
eigenvectors will be necessarily linearly dependent. So (A, b) cannot be controllable.
Exercise 6.7.
Since (A, B) is controllable with rank B = m, it can be reduced to a controller form (Ac , Bc )
where (see (6.55) of Chapter 6)
Ac = Āc + B̄c Am ,
Bc = B̄c Bm .
The n − m rows of Āc are linearly independent and the remaining m rows of Ac , given by Am , may
increase the number of linearly independent rows. So rank A ≥ n − m.
Alternately, rank C = n = rank[C, An B] = rank[B, AC] from which rank AC ≥ n − m. From
Sylvester’s Rank Inequality, rank AC = rank A and so rank A ≥ n − m.
An additional way to prove this result is as follows. (A, B) controllable implies rank[λi I −A, B] =
n for any λi . That is rank[λi I − A] ≥ n − m for any λi . Take λi = 0. Then rank A ≥ n − m.
Exercise 6.8.
b A
bB,
b ...,A
bn−1 B,
b A
bn B] implies P B = B;
b P AB = A
bB
b = AP
b B
P [B, AB, . . . , An−1 B, An B] = [B,
k
2
2
b=A
bAP
b B = AP
b AB or (P A − AP
b )AB = 0; similarly, P A B =
b )B = 0; P A B = A
b B
or (P A − AP
kb
k−1
b
b
b =0
A B implies (P A− AP )A
B = 0 k = 1, . . . , n. Therefore (P A−AP )C = 0 from wich P A− AP
−1
b
b
since C has full row rank n. So B = P B and A = P AP .
Exercise 6.9.
First we show that (Āc , B̄c ) is controllable. Let B̄c = [b̄1 · · · b̄m ]. Define ei = [0 · · · 1 0 · · · 0]T ,
where the i-th element is the only nonzero element. It is easy to see that
= eµ1 ,
Āc b1 = eµ1 −1 , . . . ,
Āµc 1 −1 b̄1 = e1
= eµ1 +µ2 , Āc b2 = eµ1 +µ2 −1 , . . . , Ācµ2 −1 b̄2 = eµ1 +1
..
.
= en ,
...,
...,
Āµc m −1 b̄m = en−µm +1
b̄1
b̄2
b̄m
Hence, rank[B̄c Āc B̄c · · · Ān−1
B̄c ] = n, since we have the n linearly independent columns
c
shown above. Therefore, the system ẋc = Āc xc + B̄c u is reachable. It can also be easily verified,
using the controllability matrix, that the controllability indices of (Ā, B̄) are µi .
Apply now a state feedback u = Am x + Bm r to obtain the pair (A, B). Since (A, B) and (Ā, B̄)
have identical controllability indices (see Exercise 6.11), the indices of (A, B) are also µi .
Exercise 6.10.
Since

C¯k
=

[BG · · · Ak−1 BG] = [B · · · Ak−1 B] 

G
..


.
G
= Ck [block diag G]
and |G| 6= 0, we have that rank(C¯k ) = rank(Ck ), k ≥ 1. This implies that the number of linearly
56
CHAPTER 6.
independent columns is the same in both Ck and C¯k , k ≥ 1. Hence, let
rank(C1 ) = rank(C¯1 ) = rank(B) = σ1
rank(C2 ) = rank(C¯2 ) = rank[B AB] = σ1 + σ2
..
.
rank(Cν ) = rank(C¯ν ) = rank[B · · · Aν−1 B] = σ1 + · · · + σν = n, ν ≤ n.
It can easily be shown that σ1 ≥ σ2 ≥ · · · ≥ σν . From the above, it is obvious that there are
σν columns in Aν−1 B that belong to the set of linearly independent columns. Let i1 , . . . , iσν be
their indices. This implies that µi1 = · · · = µiσν = ν. Similarly, there are σν−1 columns in Aν−2 B
that belong to the set of linearly independent columns. The indices of σν of them are the ones
found in the previous step. Let j1 , · · · j(σν−1 −σν ) be the indices of the rest of these columns. Then
µj1 = · · · = µj(σν−1 −σν ) = ν − 1. We continue this process until we find the σ1 columns of B that
belong to the set of independent columns. As before, there are σ1 − σ2 of these columns with indices
k1 , · · · k(σ1 −σ2 ) so that µk1 = · · · = µk(σ1 −σ2 ) = 1.
In view of the above rank equalities and the described procedure, we see that the controllability
indices of (A, BG) are exactly the same as those of (A, B), within possible reordering.
Exercise 6.11.
(a) Write
[λi I − A − BF, BG] = [λi I − A, B]
I
−F
0
G
and note that rank[λi I − A − BF, BG] = rank[λi I − A, B] for any complex number λi . This
implies that (A + BF, BG) is reachable if and only if (A, B) is reachable in view of the eigenvalue
controllability test.
Alternatively, this result can be derived by expressing the controllability matrix of (A + BF, BG)
in terms of the controllability matrix of (A, B) (see (9.8) in Chapter 9) as follows:
[B, (A + BF )B, . . . , (A + BF )n−1 B] [block diag G]

I F B F (A + BF )B
 0
I
FB


n−1
I
= [B, AB, . . . , A
B] 

..

.
·
·
·




 [block diag G].


I
This shows that the ranks of the open- and closed-loop controllability matrices are exactly the same.
(b), (c) These results are derived by considering the above relation that involves the controllability
matrices. By considering the first n linearly independent columns from left to right, it is not difficult
to see that only similarly numbered columns are involved in both contrallability matrices. This
implies that the controllability indices of (A + BF, B) and (A, B) are identical. When G(|G| =
6 0) is
also involved, the controllability indices are the same within reordering.
Exercise 6.12.
(a) See Hint.
(b) In view of Example 3.33 of Chapter 3, a discrete-time model of the double integrator is
2
1 T
b = T /2 , C̄ = [1 0]
Ā =
,B
0 1
T
57
It is controllable and observable for any sampling period T 6= 0 in view of (a) since λ1 = λ2 = 1. To
verify this, note that
2
T /2 3T 2 /2
det[B̄, ĀB̄] = det
= −T 2 6= 0
T
T
C̄
1 0
and det
= det
= T 6= 0.
C̄ Ā
1 T
When A =
0
−1
1
0
,B =
RT
3.19) B̄ = ( 0 eAτ dτ )B =
0
cos T
sin T
, C = [1, 0] then Ā = eAT =
(see also Exercise
1
− sin T cos T
sin T
cos T − 1
0
cos T − 1
=
, and C̄ = C = [1 0].
1 − cos T
sinT
1
sin T
Here
det[B̄, ĀB̄]
cos T − 1
sin T
= det
=
1 − cos T
sin T
2 sin T (cos T − 1)
which is 0 for T = kπ, k = 0, ±1, . . . . Also,
C̄
1
= det
det
C̄ Ā
cos T
0
sin T
= sin T
which is 0 for T = kπ, k = 0, ±1, . . . . That is, for T = kπ, k = ±1, ±2, . . . , the system is
uncontrollable and unobservable. Same results can be derived directly using (a).
Exercise 6.13.
(a) The controllability matrix of the system
C = B, AB, A2 B, A3 B

0 0
1
0
 1 0 −0.0536 −0.0036
= 
 0 0
0
−1
0 −1 0.0036
0.0536
is

−0.0536 −0.0036 −0.1881 −0.0906
−0.1881 −0.0906 0.0210
0.0111 

0.0036
0.0536
0.0906
0.1881 
0.0906
0.1881 −0.0111 −0.0210
and rank(C) = 4. Thus the system is controllable.
Observe that b42 = −1 6= 1. The controller form of the pair (A, B) is

..
.
0
0
..
−0.1910 −0.0536 . −0.0910 −0.0036
..
···
···
.
···
···
..
0
0
.
0
1
..
−0.0910 −0.0036 . −0.1910 −0.0536
0





Ac = 





1



Bc = 


0
1
···
0
0
0
0
···
0
1



.







,




58
CHAPTER 6.
(b) Let B = [b1 , b2 ]. Then
C
b1 , Ab1 , A2 b1 , A3 b1


0
1
−0.0536 −0.1881
 1 −0.0536 −0.1881 0.0210 
,
= 
 0
0
0.0036
0.0906 
0 0.0036
0.0906 −0.0111
=
and rank(C) = 4. Thus the system is controllable from input f1 only. Also
C
b2 , Ab2 , A2 b2 , A3 b2


0
0
−0.0036 −0.0906
 0 −0.0036 −0.0906 0.0111 

= 
 0
−1
0.0536
0.1881 
−1 0.0536
0.1881 −0.0210
=
and rank(C) = 4. Thus the system is controllable from input f2 only.
(c) The observability matrix is


C
 CA 

O = 
 CA2 
CA3

1
0
0
0

0
1
0
0


0
1
0
0

 −0.1910 −0.0536 0.0910
0.0036
= 
 −0.1910 −0.0536 0.0910
0.0036

 0.0106 −0.1881 −0.0056 0.0906

 0.0106 −0.1881 −0.0056 0.0906
0.0442
0.0210 −0.0344 −0.0111












and rank(O) = 4. Thus the system is observable.
The observer form of the pair (A, C) is
..
. 0


..

···
. ···


Ao =  −0.0282 ... 0


..

0
. 1

..
0
. 0

0
.
1 ..
Co = 
.
0 ..

0
···
0
0
1

1



···


−0.0198 


−0.3849 

−0.1072

0
0
0
0
0 
1
Chapter 7
Exercise 7.1.
b=
(a) Let A
A1
0
A12
A2
b = B1 the standard form for uncontrollable systems. Then
,B
0
eAt B
b
(QeAt Q−1 )(QB)
At
e 1
b b
= QeAt B
=Q
0
At
1
e B1
= Q
.
0
=
b
X
eA2 t
B1
0
Hence, only the controllable
of A in A1 appear in eAt B and therefore in the zero-state
R t modes
A(t−τ )
response of the system x(t) = 0 e
Bu(τ )dτ .
A1
0
b=
b = [C1 0] the standard form for unobservable systems.
(b) Similarly, let A
, C
A21 A2
Then
CeAt
b −1 )(QeAt Q−1 )
(CQ
b
b At
= Ce
Q−1
At
e 1
0
= [C1 0]
Q−1
X
eA2 t
=
=
Hence, only the observable modes of
response of the system y(t) = CeAt x(0).

A11
0
A13
0
 A21 A22 A23 A24
b=
(c) Let A
 0
0
A33
0
0
0
A43 A44
b
[C1 eA1 t 0] Q−1
A in A1 appear in CeAt and therefore in the zero input



B1



b =  B2  , C
b = [C1 0 C3 0] the Kalman decom, B

 0 
0
59
60
CHAPTER 7.
position form. Then
CeAt B
b
b −1 )(QeAt
b
(CQ
Q−1 )(QB)
b b
b At
= Ce
B
 2
A11
0
 4 A21 A22
 e
= [C1 0 C3 0] 


0
0
 A t

e 11 B1


X

= [C1 0 C3 0] 


0
0
=

3
5t B1
B2





= C1 eA11 t B1
Hence, only the controllable and observable eigenvalues of A in A11 appear in CeAt B and therefore
in the impulse response or the transfer function of the system
h(t) = CeAt B + Dδ(t) ⇐⇒ H(s) = C1 (sI − A11 )−1 B1 + D
(d) Straightforward; similar to (a), (b), (c).
(e) We can easily verify that
λ1 = 1 is both controllable and observable
λ2 = −2 is uncontrollable but observable
λ3 = −1 is controllable but unobservable.

1k
 =⇒ only controllable modes in Ak B
0
Ak B = 
k
(−1)

CAk = [1k (−2)k 0] =⇒ only observable modes in CAk
CAk B = 1k =⇒ only controllable and unobservable modes in CAk B.
Exercise 7.2.
Consider the state and output equations in Example 7.2 for R1 = R2 = R and R1 R2 C = L from
which for x(0) = [a, b]T we obtain
#
R
1 + (a − 1)e− L t
u(t)
=
1
1
−R
Lt
R + (b − R )e
R
a
1
and i(t) =
(b − )e− L t +
u(t)
R
R
x1 (t)
x2 (t)
"
1, t ≥ 0
. If a = 1, x1 (t) = 1 for t ≥ 0; if b = R1 , x2 (t) = R1 for t ≥ 0. If a = b = 0
0, else
then i(t) = R1 u(t), that is the circuit behaves as a constant resistance network; it exhibits similar
a
behavior when b = R
.
where u(t) =
61
Exercise 7.3.
(a) In Example 6.2 of Chapter 6, the pair (A, B) was reduced to the standard form for uncontrollable systems, namely,




0 1
1
1 0
b =  0 −1 0  , B
b= 0 1 
A
0 0 −2
0 0


1 0 0
b = Q−1 AQ, B
b = Q−1 B, x̂ = Q−1 x with Q =  1 1 0 ; −2 is the uncontrollable
where A
1 2 1
eigenvalue. Now
Z t
b
b
b
x(t) = Qx̂(t) = Q[eAt x̂0 +
eA(t−τ ) Bu(τ
)dτ ]
0
−t

1
2 (1
−2t

1 1−e
−e )
b
b
)b
. Clearly, eA(t−τ
e−t
0
B does not contain the uncontrollable
where eAt =  0
−2t
0
0
e
mode e−2t ; that is, the uncontrollable mode does not appear in the zero-state response. In general,
b
all modes appear in the zero-input response eAt x̂0 , although for specific directions of x̂0 certain
modes can be eliminated (see Subsection 3.3.3 in Chapter 3).
(b) The results are completely analogous to (a). Here


0 −(−1)k − 12 (−2)k
bk =  0 (−1)k

0
A
k
0
0
(−2)
Easy to plot using appropriate software.
Exercise 7.4.
(a) In Example 6.5 of Chapter 6, the pair (A, C) was reduced to the standard form for unobservable systems, namely
−2 0
b
b = [1, 0]
A=
, C
1 −1
0 1
−1
−1
b
b
where A = Q AQ, C = CQ, x̂ = Q x with Q =
; −1 is the unobservable eigenvalue.
1 −1
Now
Z t
b
b
)b
b At
b A(t−τ
y(t) = Ce
x̂0 +
Ce
Bu(τ )dτ
0
−2t
e
0
b
b
b At
where eAt =
. Clearly, Ce
= [e−2t , 0] does not contain the unobservable mode
e−t − e−2t e−t
e−t , which does not appear in the output.
It is worth observing that the transfer function is
1
s+3 1
0
−1
H(s) = C(sI − A) B = [1, 1]
−2 s
1
(s + 1)(s + 2)
s+1
1
=
.
=
(s + 1)(s + 2)
s+2
Note that s + 1, that contains the unobservable eigenvalue (output decoupling zero) was cancelled
out.
62
CHAPTER 7.
bk =
(b) The results are completely analogous to (a). Here A
(−2)k
(−1)k − (−2)k
0
(−1)k
to plot using appropriate software or by hand.
. Easy
Exercise 7.5.
Using the standard form or the eigenvalue/eigenvector tests, it can be shown that λ1 = 1 is both
controllable and observable, λ2 = − 21 is uncontrollable but observable, and λ3 = − 12 is controllable
but unobservable.
Now
x(k)
k
= A x(0) +
k−1
X
Ak−(i+1) Bu(i),
k>0
c=0

1
=  0
0



k−1
1
0
X
 x(0) +

 u(i),
0
0
1 k
1 k−(i+1)
i=0
(− 2 )
(− 2 )
0
(− 12 )k
0
k>0
k−1
y(k)
=
X
1
[1 (− )k 0] x(0) +
u(i),
2
i=0
k > 0.


1
 only the controllable eigenvalues λ1 , λ3 appear. In
0
It is now clear that in Ak B = 
(− 21 )k
CAk = [1, (− 12 )k , 0], it is clear that only the observable eigenvalues λ1 , λ2 appear. Finally in
CAk B = 1, only the controllable and observable eigenvalue λ1 appears.
Exercise 7.6.
(a) It is already in standard form for uncontrollable systems; λ3 = 2 is the uncontrollable
eigenvalue. It is also in the standard form for unobservable systems; again λ3 = 2 is the unobservable
eigenvalue.
(b) The impulse response is
−t
−t
1 1
e
0
1 0
e
e−t
=
H(t, 0) = CeAt B =
1 0
0 e−t
0 1
e−t
0
for t ≥ 0; H(t, 0) = 0 for t < 0. The transfer function matrix is
1
1 1
H(s) =
.
s+1 1 0
(c) Since λ3 = 2 is in the right half s-plane, the system is not asymptotically stable. It is however,
BIBO stable (see Chapter 4).
Exercise 7.7.
(s − 1)(s + 2)
0
s(s − 2)
1
N (s). To determine
(a) Write H(s) =
= d(s)
0
(s + 1)(s + 2)
0
the Smith form of N (s) (see (7.11) and (7.12) in Section 7.4) note that the determinental divisors
of N (s) are:
D0 = 1, D1= 1, D2 = s + 2 from which n1 = D1 /D0 = 1, n2 = D2 /D1 = s + 2 and
1
0
0
SN (s) =
. Dividing by d(s) we obtain the Smith–McMillan form of H(s)
0 s+2 0
1
s(s+2)
SMH (s) =
1 /ψ1
0
0
2 /ψ2
0
0
=
1/[s(s + 2)] 0 0
0
1/s 0
63
The minimal polynomial of H(s) is mH (s) = d(s) = s(s + 2). The characteristic polynomial of H(s)
is pH (s) = ψ1 ψ2 = s2 (s + 2). Note that the least common denominator of all first-order minors (the
nonzero entries of H(s)) and all second-order minors of H(s) (namely (s − 1)(s + 1)/s2 , (s + 1)(s −
2)s(s + 2)) is s2 (s + 2), the characteristic polynomial of H(s). The poles of H(s) are 0, 0, −2.
(b) The zero polynomial of H(s) is zH (s) = 1 2 = 1 and therefore H(s) does not have any finite
zeros.
Exercise 7.8.
1
s3
s(s2 + 1)
s+1
1
d(s) N (s).
The determinantal divisors of N (s) are D0 =
1
1, D1 = 1 from which n1 = D1 /D0 = 1 and the Smith form of N (s) is SN (s) =
. Dividing by
0
d(s) we obtain the Smith–McMillan form of H(s)
(a) Write H(s) =
=
1 /ψ1
0
SMH (s) =
=
1/s3
0
.
The minimal polynomial mH (s) = pH (s) the characteristic polynomial of H(s) and it is equal to
d(s) = s3 (the least common denominator of all minors of H(s)). The poles of H(s) are at 0, 0, 0.
(b) The zero polynominal of H(s) is zH (s) = 1 = 1 and there are no finite zeros.
Exercise 7.9.
b1 (w) =
(a) Let s = 1/w. Then R
1
(1/w)+1
w
w+1
=
has a zero at 0, that is R1 (s) =
1
s+1
has a zero at
b2 (w) = 1/w
infinity. Note that R1 (s) has no finite zeros and it has one finite pole at −1. Similarly, R
has a pole at 0, that is R2 (s) = s has
a
pole
at
infinity.
Note
that
R(s)
has
a
finite
zero
at 0 and
1
0
1/w
0
b
no finite poles. R3 (w) = 1+w
which has a Smith–McMillan form SMHb (w) =
,
0
w
1
w
1
0
that is, it has a pole at 0 and a zero at 0. Therefore, R3 (s) =
has a pole at infinity
s+1 1
and a zero at infinity. Note that R3 (s) has no finite poles or zeros.
b
(b) lims→∞ R(s) = limw→0 R(w) which is not finite when R(w)
has a pole at the origin or
equivalently when R(s) has a pole at infinity.
Exercise 7.10.
(a) They are rc since

rank
R(s)
P (s)

0
1 
 = 2 for s = 0.
0 
0
0
 −1
= rank 
 0
−1
Note that s = 0 is the only zero of det R(s).
(b) They are not lc since
rank
R(s),
P (s)
= rank
Applying Lemma 7.20, a glcd is G?L (s) =
0
−1
s 0
0 1
0 0 0
1 −1 0
.
= 1 when s = 0.
64
CHAPTER 7.
Exercise 7.11.
(a) Suppose P1 and P2 were not, say, rc then if G?R is a gcrd, det G?R would be some polynomial
which must be a factor of both det P1 and det P2 which is a contradiction.
(b) The opposite is not true. See, for example, P and R in Exercise 7.10 which are rc, but
det P = s(s2 + 1) and det R = s which have a common factor s.
Exercise 7.12.
If GL is a gcld of the columns of P , G−1
L y must by a polynomial vector for any y. Therefore
GL must be unimodular; that is it is necessary for the columns of P to be lc.To show
sufficiency,
I
assume that the columns of P are lc and reduce P to its Smith form. Then U1
U2 x = y where
0
I
U1 , U2 are appropriate unimodular matrices and
U2 x = U1−1 y which shows that if a solution
0
exists it will be a polynomial solution.
Exercise 7.13.
(a) The zeros
of |P (q)|
are {−2, 1, −1}.
P (λ)
P (λ)
Since rank
= 2 for λ = 1, −1 and rank
= 1 for λ = −2, the system is
R(λ)
R(λ)
P (λ), Q(λ) = 2 for λ = −2, −1
unobservable;
−2
is
the
unobservable
eigenvalue.
Since
rank
but rank P (λ), Q(λ) = 1 for λ = 1 the system is not controllable; the uncontrollable eigenvalue
is 1.
(b) H(s) = R(s)P −1 (s)Q(s) + W (s). Note that
2
1
1
0
−s2 + 1
2s −s2 + 1
2s + s + 2 2s
−1
= 2
R(s)P (s) =
0
s2 − 1
−s − 2
0
(s + 2)(s2 − 1) s + 2 s(s2 − 1)
s −1
(the factor s + 2 was cancelled);
1
(s − 1)2
R(s)P −1 (s)Q(s) = 2
s2 − 1
s −1
s(s − 1)(−3s − 7)
3s(s2 − 1)
=
1
s+1
−1
0
2
0
s − 1 s(−3s − 7)
s + 1 3s(s + 1)
(the factor s − 1 was cancelled). Then
1
H(s) =
s+1
−2
0
4
0
2
=
s+1
.
(c) An equivalent representation {A, B, C, D} may be determined by column and row operations
on the extended Rosenbrock matrix


I
0
0
 0 P
Q 
0 −R W
See references [18] and [19] of Chapter 10 for detailed descriptions of the algorithm.
Exercise 7.14.
See the solution of Exercise 8.1.
Exercise 7.15.
D(0)
N (0)
(a) The system is controllable but not observable since rank
< 2. A gcrd of (D, N )
2
−1 s + 1
−s s + 1
−1
2
is GR =
and DR = DGR
=
, NR = N G−1
R = [−(s − 1), s + 1]
0
s2
0
s
((DR , NR ) is rc). Since det GR = −s2 there are two unobservable eigenvalues at 0.
65
(b) There is one invariant zero at −1 (from N (s)), which is also a transmission zero (from H(s)
2
s+1
3
or NR (s)). Note that H(s) = [ s s−1
2 , s3 ] with minimal and pole polynomials mH (s) = pH (s) = s .
66
CHAPTER 7.
Chapter 8
Exercise 8.1.


0 0 ··· 1
 .. ..
.. 

. 
(a) The controllability matrix C = [Bc , Ac Bc , . . . , Acn−1 Bc ] =  . .
 (see Exercise
 0 1 ··· × 
1 × ··· ×
6.4) which always has full rank n. So (Ac , Bc ) in controller form is always controllable. Note that
n , degree(d(s)) and according to the algorithm it is the order of the realization in controller form.
(b) If n(s), d(s) are coprime polynomials then the pole polynomial of H(s) is pH (s) = d(s) and
the McMillan degree of H(s) is n = degree(d(s)). Therefore any realization of order n is minimal,
that is (Ac , Cc ) is observable.
Let (Ac , Cc ) be observable, that is let {Ac , Bc , Cc , Dc } be a minimal realization. Assume that
n(s), d(s) are not coprime. Then the McMillan degree of H(s) (after all the cancellations have taken
place) will be less than n = degree(d(s)) and therefore a minimal realization of order less than n
will exist which is impossible since {Ac , Bc , Cc , Dc } is controllable and observable and therefore a
minimal realization of order n. That is, n(s), d(s) must be coprime polynomials.
(c) Let {Ao , Bo , Co , Do } be a realization in observer form. Then it is always observable. It
is controllable if and only if n(s), d(s) are coprime polynomials. The proofs of these results are
completely analogous (dual) to the proofs of (a) and (b) above.
Exercise 8.2.
A controllable realization in controller form is



A=


0 1
0 0
0 0
0 0
1 −1
0 0
1 0
0 1
0 0
1 −1
0
0
0
1
1






,B = 




0
0
0
0
1



 , C = [1, −1, 1, 0, 0]


which however is not observable (rank O = 3). Note that n(s) and d(s) are not coprime polynomials.
In fact, d(s) = (s3 − 1)n(s), that is, H(s) = s31−1 . This means that the McMillan degree of H(s)


 
0 1 0
0
is 3 and a controllable and observable realization, minimal, is A =  0 0 1  , B =  0  , C =
1 0 0
1
[1, 0, 0].
67
68
CHAPTER 8.
Exercise 8.3.
(a), (b) Let H(s) =
2
s −1
= (s(s+1)(s−1)
2 +2)(s−1) = s3 −s2 +2s−2 . Then


 
0 1 0
0
A =  0 0 1  , B =  0  , C = [−1, 0, 1]
2 −2 1
1
s+1
s2 +2
is a controllable but unobservable realization of H(s).



0 0 2
A =  1 0 −2  , B = 
0 1 1
Taking the dual,

−1
0  , C = [0, 0, 1]
1
is an observable but uncontrollable realization of H(s).
0 1
0
(c), (d) ẋ = Ax + Bu, y = Cx with A =
,B =
, C = [1, 1] is a minimal
−2 0
1
b and the resulting realization
realization in controller form. Add to this any state equation ż = Az
A 0
ẋ
x
B
x
=
+
u, y = [C, 0]
b
ż
z
0
z
0 A
is neither controllable nor observable.
Exercise 8.4.
1
s2 −1
(s − 1)2
s2 − 1
1
0
Then
1
is the Smith form of N (s)
0
s2 −1
and therefore the Smith–McMillan form of H(s) is SMH =
with pole polynomial
0
1
H 1 H2
pH = s2 − 1 and McMillan degree equal to 2. The Hankel matrix is MH (2, 2) =
where
H 2 H3
1 0
−2 0
H0 = lims→∞ H(s) =
, H1 = lims→∞ s(H(s) − H0 ) =
.
1 0
0 0
2 1
H2 = lims→∞ s2 (H(s) − H0 − H1 s−1 ) =
, H3 = lims→∞ s3 (H(s) − H0 − H1 s−1 −
0 0
−2 0
H2 s−2 ) =
.
0 0
Therefore rank MH (2, 2) = 2 =
McMillan
degree.
−2
1
e 1 (s)
1 0
H
s+1
s2 −1
e
=
(b),(c) D = lims→∞ H(s) =
. Let H(s)
= H(s) − D =
1 0
0
0
0
h
iT
−2(s
−
1)
e T (s) = −2 , 21
and consider H
=
(s2 − 1)−1 = N (s)D−1 (s). Now D(s) =
1
s+1 s −1
1
1
2 −2
1
2
s − Am
from which Am = [1, 0]. Also N (s) =
= CS(s). Therefore a
s
1 0
s
eT
realization
of H1(s) is 0 1
0
2 −2
e 1 (s) is
A=
,B =
,C =
and a realization of H
1 0 1
1 0
0 1
2 1
,B =
, C = [0, 1]. It is controllable and observable and therefore
A =
1 0
−2 0
minimal.
A minimal realization of H(s) is
0 1
2 1
0 1
1 0
A=
,B =
,C =
,D =
.
1 0
−2 0
0 0
1 0
(a) H(s) =
=
1
d(s) N (s).
s2 − 1
1
69
Note that we could have also used the algorithm in Section 8.4.3 to obtain a minimal realization
with A diagonal.
Exercise 8.5.
0
1
−(s + 1)(s − 5)
s2 − 9 [(s − 1)(s2 − 9)]−1 = N (s)D−1 (s).
5 4 −1
2 T
From N (s) = CS(s) + DD(s) (S(s) = (1, s, s ) ), C =
. From D(s) = Λ(s) −
−9 0 1
Am S(s), Am = (−9, 9, 1) and therefore, a controllable realization is


 
0 1 0
0
5 4 −1
0
A =  0 0 1 ,B =  0 ,C =
,D =
.
−9 0 1
1
−9 9 1
1
D = lims→∞ H(s) =
, H(s) =
It is also observable, and so it is minimal. Note that the McMillan degree of H(s) is 3.
Exercise 8.6.
1
1
− s+1
= 0. So the pole polynomial pH (s) is the least common
(a) The second-order minor is s+1
denominator of all first-order entries, that is, pH (s) = s(s + 1)(s + 3)(= mH (s)) and the McMillan
degree is 3.
1
2
0 1
s
s+1
b
(b) D = lims→∞ H(s) =
and H(s) = H(s) − D =
. In view of (8.67)
1
−1
0 1
s+3
s+1
write
1 1 0
1
1
0 2
0 0
b
H(s) =
+
+
s 0 0
s + 1 0 −1
s+3 1 0
1
1
1
=
R1 +
R2 +
R3 .
s
s+1
s+3
1
2
0
R1 =
[1, 0] = C1 B1 , R2 =
[0, 1] = C2 B2 , R3 =
[1, 0] = C3 B3 . Then a minimal
0
−1
1
realization is




1 0
0 0
0
1 2 0
0 1
,D =
.
A =  0 −1 0  , B =  0 1  , C =
0 −1 1
0 1
1 0
0 0 −3
Exercise 8.7.
(a) H(s) =
where
s2 +1
s3 +6s2 +11s+6 .
A minimal realization in controller form is ẋ = Ax + Bu, y = Cx


 
0
1
0
0
0
1  , B =  0  , C = [1, 0, 1]
A= 0
−6 −11 −6
1
Here u = r − y = r − Cx and so ẋ = Ax + B(r − Cx) = (A − BC)x + Br.
 That is a state-space

0
1
0
0
1  and
description for the closed-loop system is {A − BC, B, C} where A − BC =  0
−7 −11 −7
B, C are as above. It can be verified that it is a minimal realization.
2
H
+1
(b) y = Hu = H(r − y) or y = 1+H
r = s3 +7ss2 +11s+7
r. A minimal realization in controller form is
exactly the same as in (a) above.
70
CHAPTER 8.
The approach in (b) typically is quicker, but it ignores cancelled modes in the composite system;
note that all modes appear in the approach taken in (a). See also Exercise 8.8 and the discussion
on system interconnections in Section 10.2 of Chapter 10.
Exercise 8.8.
(a) (i) A minimal realization of H(s) is ẋ1 = A1 x1 + B1 u, y = C1 x1 where
0 1
0
A1 =
, B1 =
, C1 = [1, 1]
0 −3
1
A minimal realization of G(s) is ẋ2 = A2 x2 + B2 u, y = C2 x2 where A2 = −a, B2 = 1, C2 = k. In
view of u = r − b = r − C2 x2 , ẋ1 = A1 x1 + B1 (r − C2 x2 ) and ẋ2 = A2 x2 + B2 (C1 x1 ) from which
ẋ1
A1
−B1 C2
x1
B1
=
+
r
ẋ2
B2 C1
A2
x2
0
y = [C1 , 0]
x1
x2
That is, a state-space representation of the closed-loop



0 1
0
A =  0 −3 −k  , B = 
1 1 −a
system is ẋ = Ax + Bu, y = Cx where

0
1  , C = [1, 1, 0].
0
(a)(ii) y = Hu = H(r − Gy) from which
y=
(s + 1)(s + a)
s2 + (1 + a)s + a
H
r=
r= 3
r.
1 + HG
s(s + 3)(s + a) + k(s + 1)
s + (3 + a)s2 + (3a + k)s + k
Assuming no cancellations can take place, a minimal realization is

 

0
0
1
0
 , B =  0  , C = [a, 1 + a, 1].
0
1
A= 0
1
−k −(3a + k) −(3 + a)

0 1
−3
(b) In (a)(i) note that the controllability matrix is C =  1 −3 9 − k  and det C = 0 when
0 1 −2

 −a
1
1
0
 and det O = 0 when a = 1
−2
−k
a = 1. Also the observability matrix is O =  0
−k 6 − k k(2 + a)
or k = 0. Note also that in (a)(ii) if a = 1 or k = 0 the factor s + a cancels, which implies that
for these values the controllable realization is not observable. Note that k = 0 implies an open-loop
system. Notice that when a = 1, s + 1 cancels in the product HG (loop gain).

Exercise 8.9.
b b
(a) See the summary of the Grammians in Section 5.5 of Chapter 5. First note that eAt B
=
b
At −1
At
At
−1
At −1
At −1
b
(P e P )(P B) = P e B and Ce = (CP )(P e P ) = Ce P . It is now straightforward
to show the result.
(b)
−1 ∗
∗
∗
cr = P Wr P ∗ = VH (Σ1/2
W
Vr Ur Σr Vr∗ Vr (Σ1/2
r )
r )VH = VH VH = I
71
Observe that because the reachability Grammian is symmetric Vr∗ Ur = I, Vr , VH are unimodular,
and because of the assumption that the system is controllable and observable and thus irreducible
Σr is a nonsingular, diagonal matrix with Σ∗r = Σr .
Similarly,
co
W
=
−1
∗
∗ −1 1/2 −1
(P −1 )∗ Wo P −1 = (VH∗ )−1 Σ1/2
Σr V H
r Vr Uo Σo Vo (Vr )
−1
1/2
∗
∗
= VH Σ1/2
UH UH
(Σo )1/2 Vo∗ Vr Σ1/2
r Vr Uo (Σo )
r VH
∗
= VH H ∗ UH UH
HVH∗
= Σ∗H ΣH = Σ2H
(c), (d) can be proven similarly.
Exercise 8.10.
1
s+1
1
(a) The second-order minor is (s+1)
2 ·
s = s(s+1) and the least common denominator of all
first- and second-order minors is the pole polynomial pH (s) = s2 (s + 1)2 . So the McMillan degree
and the order of any minimal (controllable and
of H(s) is 4.
observable
2) realization
ŷ1
s2
(b) If we consider only the input u2 and
û2 we see that the McMillan degree
= s+1
ŷ2
s
is 2 and so in a 4th order realization {A, B, C, D} the system will not be controllable from u2 (in B
we will take the second column only). Note that it will be observablesinceA and C will not change.
û1
1
2
Similarly, if we consider only the output y1 and ŷ1 = [ (s+1)
, the McMillan degree is
2 , s2 ]
û2
4 and therefore in the minimal realization {A, B, C, D} the system will be observable from y1 (in C
we will take the first row only).
Exercise 8.11.
(a) A minimal state-space realization for H(s) is
AH = 1 + , BH = 1, CH = 1, DH = 0
and for C(s) =
−3
s+2
+ 1, a minimal realization is
AC = −2, BC = 1, CC = −3, DC = 1
A realization of the overall system is
0
1
0
A=
, B=
, C = [−1, 1], D = 0
2 + 2 − 1
1
(note that the overall transfer function is T (s) = H(s)C(s) =
1
s−(1+)
·
s−1
s+2
=
s−1
s2 +(1−)s−2(1+) ).
1
1
· s−1 = s+2
. The realization in (a) is always controllable. Its
s−1 s+2
−1
1
observability matrix is O =
with det O = −3 and O is singular when = 0.
2 + 2 − 2
That is when = 0, (A, C) is not observable. Using the eigenvalue rank test it can be seen that 1
is the unobservable eigenvalue.
Since the eigenvalues of A are {1 + , −2} the system is not asympotically stable. It is BIBO
stable since the pole of T (s) is at −2. Easy to plot.
(c) When 6= 0 the overall system is described by {A, B, C, D} in (a). The system in not
asymptotically stable and so a precompensator with a zero at 1 [as C(s)] cannot be used to stabilize
the system via open-loop because of uncertainties.
(b) When = 0, T (s) =
72
CHAPTER 8.
It is perhaps of interest to study the behavior of the signals. The output of the controller is given
by
û = CC (s − AC )−1 xC (0) + C(s)r̂ =
−3
s−1
xC (0) +
r̂
s+2
s+2
The output of the plant is given by
ŷ = CH (s − AH )−1 xH (0) + H(s)û =
1
1
xH (0) +
û
s − (1 + )
s − (1 + )
Combining, we have
ŷ
=
=
1
−3
s−1
xH (0) +
xC (0) +
r̂
s − (1 + )
(s − (1 + ))(s + 2)
(s − (1 + ))(s + 2)
s−1
(s + 2)xH (0) − 3xC (0)
+
r̂
(s − (1 + ))(s + 2)
(s − (1 + ))(s + 2)
1
H (0)−3xC (0)
Note that even when = 0, ŷ = (s+2)x
+ s+2
r̂, which clearly shows that nonzero initial
(s−(1+))(s+2)
conditions will generate undesirable effects in the output.
Exercise 8.12.
The pole polynomial of H(s) (monic least common denominator of all nonzero minors of H(s))
is pH (s) = s3 and therefore the McMillan degree is 3, equal to the order of any minimal realization
of H(s).
1
1 0 1
0
− 1s
s
b
First note that D = lim H(s) =
and H(s) = H(s) − D =
.
0 0 0
0 s+1
0
s→∞
s2
1
1
−s
s
e
Consider H(s)
, s+1
.
0
s2
2
−1
s
−1
s 0
e
e (s)D
e −1 (s). The pair (N
e , D)
e is rc and there(a) Write H(s) =
=N
s+1 0
0 s
e I, N
e } of H(s)
e
b
fore the controllable realization {D,
is also observable. A minimal realization of H(s)
is given by


1 0 0
s
−1
b (s) = 0
b
N
, D(s)
=  0 s2 0  ;
0 s+1 0
0 0 s
and a minimal realization of H(s) is given by
1
s
b
b
N (s) = N (s) + DD(s) =
0 s+1
s−1
0
b
, D(s) = D(s).
e (s), D(s)
e
(b) From N
in (a), using the Structure Theorem (see Chapter 6) a minimal realization
e
of H(s)
is




0 1 0
0 0
e =  0 0 0 , B
e =  1 0 , C
e = 0 1 −1
A
1 1 0
0 0 0
0 1
b
A minimal realization of H(s)
is


0 0 0
b = A,
e B
b =  0 1 0 , C
b=C
e
A
0 0 1
and a minimal realization of H(s) is
b B = B,
b C = C,
b D=
A = A,
1
0
0
0
1
0
.
Chapter 9
Exercise 9.1.
(a) The plots in Figure 9.1 clearly show the differences in responses when u = F x with F = F1 ,
F = F2 or F = F3 . (Please note the differrent scaling in the three plots).
(b) In view of (9.26) and (9.29) we have
Fi [M1 ai1 , M2 ai2 ] = [D1 ai1 , D2 ai2 ] i = 1, 2, 3
where Mj , Dj are determined from (9.26)
[sj I − A, B]
Mj
−Dj
=0
j = 1, 2
where s1,2 = −0.1025 ± j0.04944. Here
−0.9966
0
0.0687 − i0.0367 0.0846 − i0.0438
M1 =
, D1 =
,
−0.0049 − i0.0002 0.9889 + i0.0619
0.0235 − i0.0125 −0.0846 + i0.0438
and
M2 =
−0.9966
−0.0049 + i0.0002
0
0.9889 − i0.0619
, D2 =
0.0687 + i0.0367 0.0846 + i0.0438
0.0235 + i0.0125 −0.0846 − i0.0438
.
Note that since s1 and s2 are complex conjugate so must be ai1 and ai2 , i = 1, 2.
Given F1 , to determine appropriate ai1 , ai2 recall that the closed-loop eigenvectors (eigenvectors
of A + BF ) are given by
v1 = M1 a11 , v2 = M2 a12
Here v1 = [−0.0473 − i0.9636, i0.2630]T , v2 = [−0.0473 + i0.9636, −i0.2630]T from which
0.0474 + i0.9669
0.0474 − i0.9669
a11 =
, a12 =
.
0.0169 + i0.2697
0.0169 − i0.2697
To verify substitute a11 , a12 in
F [M1 a11 , M2 a12 ] = [D1 a11 , D2 a12 ]
−0.0380 − i0.7758
−0.0380 + i0.7758
Similarly for F2 (here a21 =
, a22 =
) and F3 (here
0.0397 − i0.6338
0.0397 + i0.6338 0.8873 + i0.1346
0.8873 − i0.1346
a31 =
, a32 =
).
0.0326 + i0.4511
0.0326 − i0.4511
73
74
CHAPTER 9.
Figure 9.1: x(t) = [x1 (t), x2 (t)]T for (i) F1 (ii) F2 , and (iii) F3
Exercise 9.2.
(a) It is easy to verify that (A, B) is controllable. Let g = [g1 , g2 ]T and note that for g1 = 0, g2 = 1
(A, Bg) is controllable. Other choices for g are of course possible; they can be determined from the
controllability matrix Cg (so it has full rank) or using eigenvalue/eigenvector tests for controllability.
Now the desired closed-loop characteristic polynomial is
αd (s) = (s2 + 2s + 2)(s2 + 4s + 5) = s4 + 6s3 + 15s2 + 18s + 10
Note that (A, Bg) is in controller form. In view of (9.20)
f
= [−10, −18, −15, −6] − [1, 1, −3, 4]
= [−11, −19, −12, −10]
0
0
0
0
assigns the eigenvalues at −1 ± j and −2 ± j.
Then F = gf =
−11 −19 −12 −10
0 1
1 0
g1
g1 f1
1 + g1 f2
(b) Let A+Bgf =
+
[f1 , f2 ] =
. Then det(sI −
1 1
0 1
g2
1 + g2 f1 1 + g2 f2
A − Bgf ) = (s − g1 f1 )(s − 1 − g2 f2 ) − (1 + g2 f1 )(1 + g1 f2 ) = s2 + 2s + 1. In addition g must be so
that
g1
g2
rank Cg = rank[Bg, ABg] = rank
= 2 or that g12 + g1 g2 + g22 6= 0
g2 g1 + g2
Equating coefficients of equal powers of s we obtain
g1 f1 + g2 f2 + 1 = −2 and g1 f1 (1 + g2 f2 ) − (1 + g2 f1 )(1 + g1 f2 ) = 1
or that g1 f1 +g2 f2 = −3 and g1 f1 −g1 f2 −g2 f1 = 2 which can be solved with respect to g = [g1 , g2 ]T .
75
Exercise 9.3.
Reducing the system to a single-input controllable system is straightforward (take for example
g = [1, 0]T and then use Ackermann’s formula). Solving the nonlinear equations is rather tedious.
Using controller forms,


 1

0 0
4
0 0
4
P =  14 1 0  , P −1 =  −1 1 0 
1
−1 −1 1
1 1
2
reduce A, B to controller form:

Ac = P AP −1
 0


= 9
 ···

1
0
···
8
0
..
.
..
.


0 
0

 1

0  , Bc = P B = 
 ···

···

0
..
. 1

0
0 
.
··· 
1


7
1 0
−4
1
1
0 1  , F1 = Fc1 P =
. See (9.24) in the textbook.
− 25 −1 −1
0 0


9
0 1 0
−4
0
0


0 0 0 , F2 = Fc2 P =
If Ad =
. For arbitrary x(0), it takes as many
− 25 −1 −1
0 0 0
steps to go to the zero state as the dimension of the largest block in the Jordan form of A + BF .
Using F1 , it takes in general 3 steps to drive arbitrary x(0) to the zero state. Using F2 , it takes 2
steps.
0
If Ad =  0
0
Exercise 9.4.
(i) Reducing the system to single-input controllable. See solution of Exercise 9.2(a). For g =
[0, 1]T
0
0
0
0
F = gf =
−11 −19 −12 −10
assigns the eigenvalues at −1 ± j and −2 ± j.
(ii) Using controller forms. (A, B) is already in controller form. The desired closed-loop characteristic polynomial is
αd (s) = (s2 + 2s + 2)(s2 + 4s + 5) = s4 + 6s3 + 15s2 + 18s + 10
In view of (9.24), a choice for F is given by
F
0
1
0
0
=
− Am ] =
−10 −18 −15 −6
0
0
0
0
=
−11 −19 −12 −10
−1
Bm
[Adm
−
0
1
1 0
1 −3
0
4
which is the same as in (i) above.
(iii) Using eigenvectors. Apply (9.29)
(iv) Using nonlinear equations. It is rather tedious.
(v) Using the algorithms for pole assignment of your software package. Check the kind of algorithms used by your software.
76
CHAPTER 9.
Exercise 9.5.
In view of the controllable version of the Structure Theorem (Theorem 6.19 and (6.62) of Chapter
6) we have (sI − Ac )S(s) = Bc d(s) where S(s) = (1, s, . . . , sn−1 )T and d(s) = sn + αn−1 sn−1 + · · · +
α1 s + α0 . Then H(s) = Cc (sI − Ac)−1 Bc + Dc = Cc S(s)d−1 (s) + Dc from which the result follows
directly. Note that H(s) = n(s)/d(s) where d(s) is as above and n(s) = Cc S(s) + Dc d(s). Now, for
the closed-loop transfer function, observe that
(sI − Ac − Bc Fc )S(s) = −Bc Fc S(s) + Bc d(s)
= Bc [d(s) − Fc S(s)] = Bc dF (s)
where dF (s) = d(s) − Fc S(s) = sn + (αn−1 − fn−1 )sn−1 + · · · + (d0 − f0 ). Then
HF (s)
= (Cc + Dc Fc )[sI − (Ac + Bc Fc )]−1 Bc + Dc
= (Cc + Dc Fc )S(s)d−1
F (s) + Dc
from which the result follows directly. Note that H(s) = nF (s)/dF (s) where dF (s) is as above and
nF (s) = [Cc +Dc Fc ]S(s)+Dc dF (s) = [Cc +Dc Fc ]S(s)+Dc (d(s)−Fc S(s)) = Cc S(s)+Dc d(s) = n(s).
Exercise 9.6.
(a) Here H(s) = s32s+1
+s2 −1 . In view of Exercise 9.5, linear state feedback can only lead to the
closed-loop transfer function
HF,G (s) =
s3
+ (1 − f2
)s2
2s + 1
G
+ (−f1 )s + (−1 − f0 )
1
where F = [f0 , f1 , f2 ]. To match the second-order Hm (s) = (s+1)(s+2)
is to take G = 12 and assign
1
1
via F the eigenvalues at −1, −2 and at − 2 so that the zero at − 2 cancels. That is, αd (s) =
(s + 1)(s + 2)(s + 21 ) = s3 + 72 s2 + 72 s + 1 and therefore F = [−2, − 72 , − 25 ].
(b) The closed-loop representation (A + BF, BG, C) is controllable (in controller form) but unobservable (− 12 is the unobservable eigenvalue).
(c) Any full state (full order) observer with, say, all 3 eigenvalues to the left of −10 will suffice.
It is interesting to simulate the system and see the effect of the initial conditions (see Section 9.4).
If error feedback u = C(r − y) is used then y = HC(I + HC)−1 r = Hm r (see Section 9.4 and
s3 +s2 −1
is proper. Note that if C were
Chapter 7) from which C = [(I − Hm )H]−1 Hm = (s2 +3s+1)(2s+1)
not proper (or the resulting system were not internally stable) a more general controller should have
been used.
Exercise 9.7.
Let x1 = x, x2 = v. Let y = v = x2 ,
ẋ1
0
=
ẋ2
−w02
1
0
x1
x2
, y = [0, 1]
x1
x2
.
˙
A
full state full order asymptotic estimator is given by x̂ = (A − KC)x̂ + Ky where A − KC =
0
1 − k1
. Then |sI − (A − KC)| = s(s + k2 ) + w02 (1 − k1 ) = s2 + k2 s + w02 (1 − k1 ) =
−k2
−w02
(s + w0 )2 = s2 + 2w0 s + w02 . Therefore, K = [k1 , k2 ]T = [0, 2w0 ]T .
Exercise 9.8.
ẋ1
0
1
x1
0
x1
=
+
u, y = [0, 1]
. The observer x̂˙ = (A−KC)x̂+Ky +
ẋ2
−w02 0
x2
1
x2
Bu with both eigenvalues at −w0 was designed in Exercise 9.7. It was found that K = [−1, 2w0 ]T .
77
Now to determine F in A + BF : F = Adm − Am = [−2w02 , −2w0 ] − [−w02 , 0] = [−w02 , −2w0 ] since
the desired characteristic polynomial is αd (s) = s2 + 2w0 s + 2w02 and (A, B) is in controller form.
Easy to plot. You may also use (9.108) to see the effect of the initial conditions.
It is worth determining the two input controller matrices of Figure 9.6 of the textbook, using
(9.114) and (9.115). In particular,
−1 s
−1
−1
2
Gy = [−w0 , −2w0 ]
w02 s + 2w0
2w0
and
−1
(1 − Gu )
=
[−w02 , −2w0 ]
s
2w02
−1
s + 4w0
−1 0
1
+
1
0
0
1
.
Also,
−1
(1 − Gu )
Gy
=
[−w02 , −2w0 ]
=
−w02 (3s + 14)
.
(s + w0 )2
s
2w02
−1
s + 4w0
−1 −1
2w0
Exercise 9.9.
(a) Let x1 = θ, x2 = θ̇. Then
ẋ1
0 1
x1
0
=
+
u
ẋ2
0 −1
x2
1
x1
y = [1, 0]
x2
(b) (A, B) is in controller form. F = Adm − Am = [−1,R−2] − [0, −1] = [−1, −1], since αd (s) =
∞
(s + 1)2 = s2 + 2s + 1. To show that this F minimizes J = 0 (x21 + x22 + u2 )dt, consider (9.33) and
2 1
and F ∗ = −R−1 B T Pc∗ = [−1, −1].
note that M = I, Q = I, R = 1. Solving (9.33), Pc∗ =
1 1
(c) (A, C) is observable. From |sI −(A−KC)| = (s+3)2 , K = [5, 4]T . The state-space description
of the overall system is given by (9.97) or (9.99). The system is observable, uncontrollable ({−3, −3}
are the uncontrollable eigenvalues) and asymptotically stable. The transfer function is given by
1
. Note that H(s) = s21+s .
(9.107), namely HF (s) = C[sI − (A + BF )]−1 B = s2 +2s+1
(d) The plots are straightforward. Note the effect of the observer until steady state.
(e) Consider (9.67) where A11 = A21 = 0, A12 = 1, A22 = −1, B1 = 0, B + 2 = 1 to obtain
ẇ = (−1 − K)w + [(−1 − K)K]y + u the reduced order observer; [y, w + Ky]T is the estimate for
the state. Let K = 2. Then ẇ = −3w − 6y + u = −3w − 6x1 + u. The state feedback law is now
u = F [x1 , w + 2x1 ]T + r = −3x1 − w + r = −3y − w + r. The overall system has eigenvalues at
{−1, −1, −3} and it is asymptotically stable. It is observable but uncontrollable (the eigenvalue at
1
−3 is uncontrollable). HF (s) is again s2 +2s+1
. The plots are straightforward.
Exercise 9.10.
(a) Let x̃ = eαt x and ũ = eαt u. Then
x̃˙ = αeαt x + eαt ẋ
= αx̃ + Aeαt x + Beαt u
= (A + αI)x̃ + B ũ
e +B
e ũ
= Ax̃
78
CHAPTER 9.
e +B
e ũ where A
e = (A + αI), B
e = B and
Consider now the system x̃˙ = Ax̃
Z ∞
˜
J(u)
=
e2αt [xT (t)Qx(t) + uT (t)Ru(t)]dt
0
Z ∞
=
[x̃T (t)Qx̃(t) + ũT (t)Rũ(t)]dt
0
e B)
e is controllable (⇔ (A, B) controllable), the optimal feedback control law is
Assuming that (A,
∗
e T Pc x̃(t) or u∗ (t) = −R−1 B
e T Pc x(t) where Pc is the symmetric positivegiven by ũ (t) = −R−1 B
definite solution of the algebraic Ricatti equation
eT Pc + Pc A
e − Pc BR
e −1 B
e T Pc + Q = 0
A
or
AT Pc + Pc A + 2αPc − Pc BR−1 B T Pc + Q = 0
e + BF
e ∗ = (A + BF ∗ ) + αI are stable if and only if the eigenvalues of
(b) The eigenvalues of A
A + BF ∗ are left of −α.
Exercise 9.11.
(i) The solution is derived by solving the Riccati equation (9.33) where M = I, Q = I, R = 1.
Solving, the unique, positive definite solution is
3.7321 6.4641
Pc∗ =
6.4641 13.9282
and Fc∗ = −R−1 B T Pc∗ = [−3.7321, −6, 4641] minimizes J1 .
(ii) Similarly, here M = I, Q = 900I, R = 1. The solution to the Riccati is
32.4305
75.8687
∗
Pc =
75.8687 2, 352.1605
and Fc∗ = [−32.4305, −75.8687] minimizes J2 . In case (ii) the state goes to zero faster but the input
amplitude is larger. Easy to plot.
Exercise 9.12.
(a) Let F = [f1 , f2 ]. Then A+BF =
s2 + s + 21 from which F = 32 , 1 .
0
1 − f1
1
−f2
and |sI −(A+BF )| = s2 +sf2 −(1−f1 ) =
−k1
1
and |sI −(A−KC)| = s2 +sk1 −(1−k2 ) =
1 − k2 0
s2 + 2αs + α2 + 1 from which K = [2α, α2 + 2]T .
(c) The closed-loop description is given by (9.97) or (9.99). The closed-loop transfer function is
(b) Let K = [k1 , k2 ]T . Then A−KC =
HF = C[sI − (A + BF )]−1
B=
−1
.
s2 + s + 1/2
(d) As α increases, the estimate x̂(t) approaches x(t) faster.
Exercise 9.13.
(a) In view of (3.99) of Chapter 3, the effect of q on the output is given by
yq (k) =
k−1
X
j=0
CAk−(j+1) Eq(j) k > 1
79
That is, yq (1) = CEq(0), yq (2) = CAEq(0)
+CEq(1), . . . , yq (k) = CAk−1 Eq(0) + · · · + CEq(k − 1).

C
 CA 


This implies that if E is such that 
 E = OE = 0 then q(k) will not have any effect on the
..


.
n−1
CA
output. Note that because of the Cayley–Hamilton theorem we do not need to go beyond CAn−1 .
That is, the columns of E must be in the null space of O [which is the unobservable subspace of
(A, C)]. This implies for example that if rank E = r, r ≤ n − rank O which is a restriction on the
dimension r of vector q. It is not difficult to see that if q(k) = q a constant vector, OE = 0 is also
necessary and
so it is the
necessary
and sufficient condition.
C
1 1
1
(b) O =
=
from which the column of E must be of the form α
.
CA
2 2
−1
z
z
= n(z)
(c) ŷq (z) = C(zI − A)−1 E z−1
d(z) · z−1 since q(k) is a step. To asymptotically reduce the
effect of q on the output, n(z) must have a zero at +1; also, all eigenvalues of A should lie strictly
inside the unit circle (system internally stable).
Exercise 9.14.
s+1
0
s+1
0
1 0
1 0
s
H(s) = C(sI − A)−1 B + D = 1s
=
×
+
=
1
s+2
1
s+2
1 2
0 1
s
s
−1
s 0
= N (s)D−1 (s). Note that the zeros of the system are at {−1, −2}. Let F = −D−1 C =
0 s
1 0
s+1
0
−
. Then sI − (A + BF ) =
with eigenvalues at {−1, −2}. If v1 =
1 2
1
s+2
1
0
, v2 =
are the corresponding eigenvectors, it is easy to verify that [C+DF ]vi = 0vi = 0,
−1
1
i = 1, 2, that is both eigenvalues are unobservable. Clearly HF = I.
Similar results can be derived selecting the eigenvalue and eigenvectors and using (9.29). In this
case F = W V −1 where (C + DF )V = (C + DW V −1 )V = CV + DW = 0 or W V −1 = −D−1 C = F .
Exercise 9.15.
(a) The response of the system to u(t) = eλt , t ≥ 0 is given by
y(t)
At
Z
t
eA(t−τ ) Bu(τ )dτ
Z t
At
At
= Ce x0 + Ce
e(λI−A)τ Bdτ
= Ce x0 + C
0
0
= CeAt x0 + CeAt (λI − A)−1 B + C(λI − A)−1 Beλt
Let x0 = −(λI − A)−1 B. Then y(t) = H(λ)eλt , t ≥ 0.
If now λ is a zero, one cannot use the above analysis. The input in the frequency domain is
1
. If λ is a zero of H(s), a zero-pole cancellation will occur in ŷ(s) = H(s)û(s) and eλt
û(s) = s−λ
will not appear in the output. This demonstrates the physical meaning of a P
zero (λ-blocking zero).
n
λi t
At
At
, Ai = vi ṽi
(b) The zero-input response of the system is y(t)
=
Ce
x
.
Write
e
=
0
i=1 Ai e
Pn
λi t
[see (3.56), (3.57) of the textbook]. Then y(t) = i=1 Ai e x0 . Let λ be the ith eigenvalue of A
and choose x0 = αvi . Then y(t) = keλt , t ≥ 0.
80
CHAPTER 9.
Chapter 10
Exercise 10.1.
e1 = 1 and D1 = D
e 1 = s2 . These are
(a) Using polynomial matrix descriptions. Here N1 = N
doubly coprime factirizations and we have
#
"
X1
Y1
D1 −Ye1
U U−1 =
e1 D
e1
e1
−N
N1 X
2
0
1
s −1
1 0
=
=
.
−1 s2
1
0
0 1
In view of (10.50) of the textbook, all stabilizing controllers H2 are then given by
H2 =
1 + s2 K
K
where K = nk /dk is any stable rational function.
Using matrix fractional descriptions. In view of Exercise 10.2 we have
2
e10 = s + 8s + 24 , Y10 = Ye10 = 32s + 16 ,
X10 = X
(s + 2)2
(s + 2)2
e10 = N10 =
N
1
s2
e 10 = D10 =
,
D
,
(s + 2)2
(s + 2)2
In view of (10.58) of the textbook, all stabilizing controllers H2 are then given by
H2 =
32s + 16 − s2 K 0
s2 + 8s + 24 − K 0
where K 0 ∈ RH∞ .
(b) In general it is not straightforward to select the parameters in (a) so that the order is desired.
1 s+b0
An alternative way in this simple case would be as follows. Let H2 = bs+a
and establish conditions
0
on the parameters so that the closed-loop system is stable.
Exercise 10.2.
(a) A minimal realization is
0
A=
0
1
0
, B=
0
1
, C=
1, 0
, D = 0.
4
Select F and K so that |sI −(A+BF )| = (s+2)2 = |sI −(A−KC)|. Then F = [−4, −4], K =
4
0
1
−4 1
and A + BF =
, A − KC =
. Relations (10.62) and (10.63) of Lemma 10.14
−4 −4
−4 0
81
82
CHAPTER 10.
now become

 −4

 −4

0 s 
U =  ···


 4

1
0
···
4
−1
0





0 s 
b
U =




0
1
−4
−4
···
···
4
−4
−1
0
..
. 0
..
. 1
..
. ···
..
. 1
..
. 0

4 

4 


··· 


0 

..
. 0
..
. 1
..
. ···
..
. 1
..
. 0

1
4 

4 


··· 


0 

1
Then from (10.64) and (10.65)
0
U =
X0
e0
−N
Y0
e0
D
=
4
−1
1
=
(s + 2)2
"
U
0−1
=
D0
N0
−Ye 0
e0
X
#
=
4
0
s + 4 −1
4
s
s2 + 8s + 24
−1
−4 −4
1
0
1
=
(s + 2)2
s2
1
−1 0
1
4
4
32s + 16
s2
−1 0
1
s −1
4 s+4
−(32s + 16)
s2 + 8s + 24
+
4
4
1
0
+
0
1
1
0
0
1
(b) See Linear Systems by P.J. Antsaklis and A.N. Michel, Birkhaüser 2006, p. 608–610. In
particular, let K, H, Q satisfy KD + HN = QF where Q−1 , Q−1 [K, H] are proper and stable and
F corresponds to some stabilizing state feedback gain matrix in polynomial form. Let DF = D − F .
Then the above equation becomes
[Q−1 (Q − K)][DDF−1 ] + [−Q−1 H][N DF−1 ] = X 0 D0 + Y 0 N 0 = I
where all (0 ) matrices are proper and stable. Here:
KD + HN = [−(4s + 20)]s2 + [−(32s + 16)][1] = (s2 + 4s + 4)(−4s − 4) = QF
and
[Q−1 (Q − K)][DDF−1 ] + [−Q−1 H][N DF−1 ] =
32s + 16
1
s2 + 8s + 24 s2
+
= 1.
2
2
2
(s + 2)
(s + 2)
(s + 2) (s + 2)2
Note that DF = D − F = s2 − (−4s − 4). Here K, H, Q were selected so that Q−1 (Q − K) = X 0 ,
−Q−1 H = Y 0 of (a) above.
83
Exercise 10.3.
(a) A minimal state-space realization (in observer form) is ẋ = Ax + Bu, y = Cx + Du where




0 0 0
0 1
A =  1 0 0  , B =  1 1  , C = 0, 0, 1 , D = [1, 0].
0 1 0
0 0
To parameterize all stabilizing controllers apply Lemma 10.14 and Theorem 10.11.
(b) In this case, the order of the minimal realization is 3. In view of (10.58) of the textbook,
e 0 which is of order 3 in view of
a stabilizing controller H2 for K 0 = 0 is H2 = −X 0 1 −1 Y10 = −Ye10 X
1
(10.64) of the textbook. To determine the eigenvalues of the closed-loop system see Lemma 4.15,
p. 616 in P.J. Antsaklis and A.N. Michel, Linear Systems, Birkhaüser 2006.
Exercise 10.4.
(a) Straightforward. Use Lemma 10.14.
(b) In view of Theorem 10.18, all realizable proper and stable with internal stability, are given
by T = N 0 X 0 where X 0 ∈ RH∞ . When T is diagonal, ni and di must be such that
n1
0
0−1
d1
= X0
N T =
0 nd22
is proper and stable. This implies conditions on ni , di .
(c) In this case (unity feedback case), X 0 from (b) must also satisfy (I + X 0 N 0 )D0−1 ∈ RH∞ for
internal stability, in view of Theorem 10.16 (d). The controller is then given by (10.98)
Gf f = [(I + X 0 N 0 )D0−1 ]−1 X 0
. Since
(I + X 0 N 0 )D0−1 = N 0−1 (I + T )H ∈ RH∞
additional conditions on ni , di are imposed.
Exercise 10.5.
(a) In view of the Hint
N −1 T =
1
1
= −1
(k = 1)
(s + 2)2
G DF
from which
G(s2 + 4s + 4) = DF = D − F S = (2s2 − 3s + 2) − [f1 , f2 ]
1
s
= 2s2 + (−3 − f2 )s + (2 − f1 ).
Then G = 2 and −3 − f2 = 8, 2 − f1 = 8 from which F = [−6, −11] and F (s) = F S(s) = −11s − 6.
(b)
−1
s+1
0
s 0
H=
= N D−1
1
s+2
0 s
s+1
0
s − f11
−f12
= G−1 DF = G−1 [D − F S] = G−1
.
T −1 N =
1
s+2
−f21
s − f22
−1 0
Then G = I, F =
= F (s).
−1 −2
(c)
−1
s+2 s+3
s+1
0
H=
= N D−1
1
0
0
s+2
84
CHAPTER 10.
Here
M =H
−1
1
T =
s+3
and
D
−1
M (= N
−1
T) =
from which
(D − F )
or
s+1
0
0
s+2
(note that S(s) = I) or
s + 1 − f11
−f21
or
−
f11
f21
−2
(s+2)(s+4)
1
s+4
−2
(s+2)(s+4)
1
s+4
f12
f22
−f12
s + 2 − f22
s+1
s+4
−2
(s+2)(s+4)
0
(s + 1)(s + 3)
s+2
−(s + 2)2
−(2 + f12 )s − 2(1 − f11 + f12 )
s2 + (4 − f22 )s − 2f22 + 4 + 2f21
=
−2(s+1)
(s+2)(s+4)
s+2
s+4
#
= DF−1 GK
= GK
−2
s+2
−2
s+2
"
1
= GK
(s + 2)(s + 4)
= GK(s + 2)(s + 4)
= GK(s2 + 6s + 8).
Then GK =
, 4 − f22 = 6, 4 − 2f22 + 2f21 = 8 from which f22 = −2, f21 = 0. Therefore, for
0
0
f11 f12
model matching GK =
(e.g. G = I, K =
) and F =
. Note, for internal
1
1
0 −2
stability f11 , f12 should be chosen so that |DF | = |D − F | = s2 + (5 − f11 )s + 4 − 4f11 is a stable
polynomial (f11 < 1).
0
1
http://www.springer.com/978-0-8176-4460-4
Download