Solutions Manual LINEAR SYSTEM THEORY, 2/E Wilson J. Rugh Department of Electrical and Computer Engineering Johns Hopkins University PREFACE With some lingering ambivalence about the merits of the undertaking, but with a bit more dedication than the first time around, I prepared this Solutions Manual for the second edition of Linear System Theory. Roughly 40% of the exercises are addressed, including all exercises in Chapter 1 and all others used in developments in the text. This coverage complements the 60% of those in an unscientific survey who wanted a solutions manual, and perhaps does not overly upset the 40% who voted no. (The main contention between the two groups involved the inevitable appearance of pirated student copies and the view that an available solution spoils the exercise.) I expect that a number of my solutions could be improved, and that some could be improved using only techniques from the text. Also the press of time and my flagging enthusiasm for text processing impeded the crafting of economical solutions—some solutions may contain too many steps or too many words. However I hope that the error rate in these pages is low and that the value of this manual is greater than the price paid. Please send comments and corrections to the author at rugh@jhu.edu or ECE Department, Johns Hopkins University, Baltimore, MD 21218 USA. CHAPTER 1 Solution 1.1 (a) For k = 2, (A + B)2 = A 2 + AB + BA + B 2 . If AB = BA, then (A + B)2 = A 2 + 2AB + B 2 . In general if AB = BA, then the k-fold product (A + B)k can be written as a sum of terms of the form A j B k−j , j = 0, . . . , k. The k number of terms that can be written as A j B k−j is given by the binomial coefficient . Therefore AB = BA j implies (A + B)k = k Σ j =0 k j k−j AB j (b) Write det [λ I − A (t)] = λn + an−1 (t)λn−1 + . . . + a 1 (t)λ + a 0 (t) where invertibility of A (t) implies a 0 (t) ≠ 0. The Cayley-Hamilton theorem implies A n (t) + an−1 (t)A n−1 (t) + . . . + a 0 (t)I = 0 for all t. Multiplying through by A −1 (t) yields A −1 (t) = . . . − an−1 (t)A n−2 (t) − A n−1 (t) 1 (t)I − _−a ________________________________ a 0 (t) for all t. Since a 0 (t) = det [−A (t)], a 0 (t) = det A (t). Assume ε > 0 is such that det A (t) ≥ ε for all t. Since A (t) ≤ α we have aij (t) ≤ α, and thus there exists a γ such that a j (t) ≤ γ for all t. Then, for all t, a 1 (t)I + . . . + A n−1 (t) ______________________ A −1 (t) = det A (t) + γ α + . . . + αn−1 ∆ _γ________________ =β ≤ ε Solution 1.2 (a) If λ is an eigenvalue of A, then recursive use of Ap = λp shows that λk is an eigenvalue of A k . However to show multiplicities are preserved is more difficult, and apparently requires Jordan form, or at least results on similarity to upper triangular form. (b) If λ is an eigenvalue of invertible A, then λ is nonzero and Ap = λp implies A −1 p = (1/ λ)p. As in (a), addressing preservation of multiplicities is more difficult. T T (c) A T has eigenvalues λ__ 1 , . . . , λ__ n since det (λI − A ) = det (λI − A) = det (λI − A). (d) A H has eigenvalues λ1 , . . . , λn using (c) and the fact that the determinant (sum of products) of a conjugate is the conjugate of the determinant. That is -1- Linear System Theory, 2/E Solutions Manual _ ________ _ _ det (λ I − A H ) = det (λ I − A)H = det (λ I − A) (e) α A has eigenvalues αλ1 , . . . , αλn since Ap = λp implies (α A)p = (αλ)p. (f) Eigenvalues of A T A are not nicely related to eigenvalues of A. 
Consider the example 0 α 0 0 A= , ATA = 0 0 0 α where the eigenvalues of A are both zero, and the eigenvalues of A T A are 0, α. (If A is symmetric, then (a) applies.) Solution 1.3 (a) If the eigenvalues of A are all zero, then det (λ I − A) = λn and the Cayley-Hamilton theorem shows that A is nilpotent. On the other hand if one eigenvalue, say λ1 is nonzero, let p be a corresponding eigenvector. Then A k p = λ k1 p ≠ 0 for all k ≥ 0, and A cannot be nilpotent. _ (b) Suppose Q is real and symmetric, and λ is an eigenvalue of Q. Then λ also _ _is_ an eigenvalue. From the eigenvalue/eigenvector equation Qp = λ p we get_ p H Qp = λ p H p. Also Qp = λ p, and _ _ transposing gives p H Qp = λ p H p. Subtracting the two results gives (λ − λ)p H p = 0. Since p ≠ 0, this gives λ = λ, that is, λ is real. (c) If A is upper triangular, then λ I − A is upper triangular. Recursive Laplace expansion of the determinant about the first column gives det (λ I − A) = (λ − a 11 ) . . . (λ − ann ) which implies the eigenvalues of A are the diagonal entries a 11 , . . . , ann . Solution 1.4 (a) A= 0 0 1 0 implies A T A = 1 0 0 0 implies A = 1 (b) A= 3 1 1 3 implies A T A = 10 6 6 10 Then det (λI − A T A) = (λ − 16)(λ − 4) which implies A = 4. (c) A= 1−i 0 0 1+i implies A H A = (1+i)(1−i) 0 = 0 (1−i)(1+i) This gives A = √2 . Solution 1.5 Let A= 1/α α , 0 1/α α>1 Then the eigenvalues are 1/α and, using an inequality on text page 7, A ≥ max 1 ≤ i, j ≤ 2 -2- aij = α 2 0 0 2 Linear System Theory, 2/E Solutions Manual Solution 1.6 By definition of the spectral norm, for any α ≠ 0 we can write A x ______ x = 1 x = 1 x αA x A α x _________ ________ = max = max x = 1/α αx α x = 1 α x A = A x = max max Since this holds for any α ≠ 0, A = max x ≠0 A x A x ______ ______ = max x≠0 x x Therefore A x ______ x A ≥ for any x ≠ 0, which gives A x ≤ A x Solution 1.7 By definition of the spectral norm, AB = max x =1 ≤ max x =1 (AB)x = max x =1 A (Bx) {A Bx } , by Exercise 1.6 = A max x Bx = A B =1 If A is invertible, then A A −1 = I and the obvious I = 1 give 1 = A A −1 ≤ A A −1 Therefore 1 _____ A A −1 ≥ Solution 1.8 We use the following easily verified facts about partitioned vectors: x1 x2 ≥ x 1 , x 2 ; x1 0 = x 1 , 0 x2 = x 2 Write Ax = A 11 A 12 A 21 A 22 x1 x2 = A 11 x 1 + A 12 x 2 A 21 x 1 + A 22 x 2 Then for A 11 , for example, A = max x =1 ≥ max x 1 =1 A x ≥ max x =1 A 11 x 1 + A 12 x 2 A 11 x 1 = A 11 The other partitions are handled similarly. The last part is easy from the definition of induced norm. For example if -3- Linear System Theory, 2/E Solutions Manual 0 A 12 0 0 A= then partitioning the vector x similarly we see that max x =1 A x = max x 2 =1 A 12 x 2 = A 12 Solution 1.9 By the Cauchy-Schwarz inequality, and x T = x , x T A x ≤ x T A x = A T x x ≤ A T x 2 = A x 2 This immediately gives x T A x ≥ −A x 2 If λ is an eigenvalue of A and x is a corresponding unity-norm eigenvector, then λ = λx = λ x = A x ≤ A x = A Solution 1.10 Since Q = Q T , Q T Q = Q 2 , and the eigenvalues of Q 2 are λ 21 , . . . , λ 2n . Therefore Q = 2 (Q ) √λ max = max 1≤i≤n λi For the other equality Cauchy-Schwarz gives x T Qx | ≤ x T Q x = Qx x ≤ Q x 2 = [ max λi ] 1≤i≤n x Tx Therefore | x T Qx | ≤ Q for all unity-norm x. Choosing xa as a unity-norm eigenvector of Q corresponding to the eigenvalue that yields max λi gives 1≤i≤n x Ta Qxa = x Ta Thus max x =1 [ max 1≤i≤n λi ] xa = max 1≤i≤n λi x T Qx = Q . 
x) T (A x) = √x TA TA x , Solution 1.11 Since A x = √(A A = = max √xTA TA x x =1 max x T A T A x x =1 1/2 The Rayleigh-Ritz inequality gives, for all unity-norm x, x T A T A x ≤ λmax (A T A) x T x = λmax (A T A) and since A T A ≥ 0, λmax (A T A) ≥ 0. Choosing xa to be a unity-norm eigenvector corresponding to λmax (A T A) gives x Ta A T A xa = λmax (A T A) Thus -4- Linear System Theory, 2/E Solutions Manual max x T A T A x = λmax (A T A) x =1 T (A A) so we have A = √λmax . Solution 1.12 Since A T A > 0 we have λi (A T A) > 0, i = 1, . . . , n, and (A T A)−1 > 0. Then by Exercise 1.11, A −1 2 = λmax ((A T A)−1 ) = n 1 _________ λmin (A T A) Π λi (A T A) n −1 T max (A A)] i =1 _[λ____________ __________________ ≤ = (det A)2 λmin (A T A) . det (A T A) = A 2(n−1) _________ (det A)2 Therefore A −1 ≤ A n−1 ________ det A Solution 1.13 Assume A ≠ 0, for the zero case is trivial. For any unity-norm x and y, y T A x ≤ y T A x ≤ y A x = A Therefore max x , y =1 y T A x ≤ A Now let unity-norm xa be such that A xa = A , and let ya = Axa _____ A Then ya = 1 and y Ta A xa = A xa 2 x Ta A T A xa A 2 ______ ________ __________ = A = = A A A Therefore max x , y =1 y T A x = A Solution 1.14 The coefficients of the characteristic polynomial of a matrix are continuous functions of matrix entries, since determinant is a continuous function of the entries (sum of products). Also the roots of a polynomial are continuous functions of the coefficients. (A proof is given in Appendix A.4 of E.D. Sontag, Mathematical Control Theory, Springer-Verlag, New York, 1990.) Since a composition of continuous functions is a continuous function, the pointwise-in-t eigenvalues of A (t) are continuous in t. This argument gives that the (nonnegative) eigenvalues of A T (t)A (t) are continuous in t. Then the maximum at each t is continuous in t — plot two eigenvalues and consider their pointwise maximum to see this. Finally since square root is a continuous function of nonnegative arguments, we conclude A (t) is continuous in t. However for continuously-differentiable A (t), A (t) need not be continuously differentiable in t. Consider the -5- Linear System Theory, 2/E Solutions Manual example A (t) = t 0 0 t2 , A (t) = t , 0≤t ≤1 t2 , 1 < t < ∞ Clearly the time derivative of A (t) is discontinuous at t = 1. (This overlaps Exercise 1.18 a bit.) Also the eigenvalues of continuously-differentiable A (t) are not necessarily continuously differentiable, consider 0 1 A (t) = −1 −t An easy computation gives the eigenvalues λ(t) = t 2 − 4 t _√ ______ __ ± 2 2 Thus . λ(t) = t 1 ________ __ ± 2 2 2 √t − 4 and this function is not continuous at t = 2. Solution 1.15 Clearly Q is positive definite, and by Rayleigh-Ritz if x ≠ 0, 0 < λmin (Q) x T x ≤ x T Q x ≤ λmax (Q) x T x Choosing x as an eigenvector corresponding to λmin (Q) (respectively, λmax (Q)) shows that these inequalities are tight. Thus ε1 ≤ λmin (Q) , λmax (Q) ≤ ε2 Therefore λmin (Q −1 ) = 1 1 ___ _ ______ ≥ ε2 λmax (Q) λmax (Q −1 ) = 1 1 ___ _______ ≤ ε1 λmin (Q) Thus Rayleigh-Ritz for the positive definite matrix Q −1 gives 1 1 ___ ___ I I ≤ Q −1 ≤ ε1 ε2 Solution 1.16 If W (t) − ε I is symmetric and positive semidefinite for all t, then for any x, x T W (t) x ≥ ε x T x for all t. At any value of t, let xt be an eigenvector corresponding to an eigenvalue (necessarily real) λt of W (t). Then x Tt W (t) xt = λt x Tt xt ≥ ε x Tt xt That is λt ≥ ε. This holds for any eigenvalue of W (t) and every t. Since the determinant is the product of eigenvalues, det W (t) ≥ εn > 0 for any t. 
-6- Linear System Theory, 2/E Solutions Manual Solution 1.17 Using the product rule to differentiate A (t) A −1 (t) = I yields . _d_ A −1 (t) = 0 A (t) A −1 (t) + A (t) dt which gives _d_ A −1 (t) = −A −1 (t) A. (t) A −1 (t) dt Solution 1.18 Assuming differentiability of both x (t) and x (t), and using the chain rule for scalar functions, _d_ dt _d_ dt _d_ = 2x (t) dt x (t)2 = 2x (t) x (t) x (t) Also we can write, using the product rule and the Cauchy-Schwarz inequality, _d_ x (t)2 = _d_ x T (t) x (t) = x. T (t) x (t) + x T (t) x. (t) = 2x T (t) x. (t) dt dt . ≤ 2x (t)x (t) For t such that x (t) ≠ 0, comparing these expressions gives . _d_ dt x (t) ≤ x (t) If x (t) = 0 on a closed interval, then on that interval the result is trivial. If x (t) = 0 at an isolated point, then continuity arguments show that the result is valid. Note that for the differentiable function x (t) = t, x (t) = t is not differentiable at t = 0. Thus we must make the assumption that x (t) is differentiable. (While this inequality is not explicitly used in the book, the added differentiability hypothesis explains why we always differentiate x (t)2 = x T (t) x (t) instead of x (t).) Solution 1.19 To prove the contrapositive claim, suppose for each i, j there is a constant βij such that t ∫ fij (σ) d σ ≤ βij , t ≥0 0 Then by the inequality on page 7, noting that max fij (t) is a continuous function of t and taking the pointwisei, j in-t maximum, t t 0 0 fij (σ) d σ ∫ F (σ) d σ ≤ ∫ √mn max i, j ≤ √mn t m n ∫Σ Σ | fij (σ) d σ 0 i =1 j =1 ≤ √mn n m Σ Σ βij < ∞ , i =1 j =1 k The argument for Σ F ( j) is similar. j =0 -7- t ≥0 Linear System Theory, 2/E Solutions Manual Solution 1.20 If λ(t), p (t) are a pointwise-in-t eigenvalue/eigenvector pair for A −1 (t), then A −1 (t) p (t) = λ(t) p (t) = λ(t)p (t) Therefore, for every t, λ(t) = A −1 (t)p (t) A −1 (t) p (t) _______________ _____________ ≤α ≤ p (t) p (t) Since this holds for any eigenvalue/eigenvector pair, det A (t) = 1 1 1 ___ _________________ ___________ ≥ n >0 = λ1 (t) . . . λn (t) α det A −1 (t) for all t. Solution 1.21 Using Exercise 1.10 and the assumptions Q (t) ≥ 0, tb ≥ ta , tb tb tb tb ta ta ta ta ∫ Q (σ) d σ = ∫ λmax [Q (σ)] d σ ≤ ∫ tr [Q (σ)] d σ = tr ∫ Q (σ) d σ Note that tb ∫ Q (σ ) d σ ≥ 0 ta since for every x x T tb tb ta ta ∫ Q (σ ) d σ x = ∫ x T Q (σ ) x d σ ≥ 0 Thus, using a property of the trace on page 8 of Chapter 1, we have tb tb tb ta ta ta ∫ Q (σ) d σ ≤ tr ∫ Q (σ) d σ ≤ n ∫ Q (σ) d σ Finally, tb ∫ Q (σ ) d σ ≤ ε I ta implies, using Rayleigh-Ritz, tb ∫ Q (σ) d σ ≤ ε ta Therefore tb ∫ Q (σ) d σ ≤ n ε ta -8- CHAPTER 2 Solution 2.3 . The nominal solution for ũ(t) = sin (3t) is ỹ(t) = sin t. Let x 1 (t) = y (t), x 2 (t) = y (t) to write the state equation . x (t) = x 2 (t) −(4/ 3)x 31 (t) − (1/ 3)u (t) Computing the Jacobians and evaluating gives the linearized state equation . x δ (t) = 0 1 x (t) + −4 sin2 t 0 δ 0 u (t) −1/ 3 δ y δ (t) = 1 0 x δ (t) where x δ (t) = x (t) − sin t cos t , u δ (t) = u (t) − sin (3t) , y δ (t) = y (t) − sin t , x δ (0) = x (0) − 0 1 Solution 2.5 For ũ = 0 constant nominal solutions are solutions of 0 = x̃ 2 − 2x̃ 1 x̃ 2 = x̃ 2 (1−2x̃ 1 ) 2 2 2 0 = −x̃ 1 + x̃ 1 + x̃ 2 = x̃ 1 (x̃ 1 −1) + x̃ 2 Evidently there are 4 possible solutions: 0 x̃a = , x̃b = 0 1 , 0 x̃c = 1/ 2 , 1/ 2 x̃d = 1/ 2 −1/ 2 Since _∂f __ = ∂x −2x 2 1−2x 1 −1+2x 1 2x 2 , ∂f ___ = ∂u 0 1 evaluating at each of the constant nominals gives the corresponding 4 linearized state equations. 
Solution 2.7 Clearly x̃ is a constant nominal if and only if 0 = A x̃ + bũ that is, if and only if A x̃ = −bũ. There exists such an x̃ if and only if b ∈ Im [A ], in other words -9- rank A = rank [ A b ]. Also, x̃ is a constant nominal with c x̃ = 0 if and only if 0 = A x̃ + bũ 0 = c x̃ that is, if and only if A x̃ = c −bũ 0 As above, this holds if and only if rank A = rank c A b c 0 Finally, x̃ is a constant nominal with c x̃ = ũ if and only if 0 = A x̃ + bũ = ( A + bc ) x̃ and this holds if and only if x̃ ∈ Ker [ A + bc ] (If A is invertible, we can be more explicit. For any ũ the unique constant nominal is x̃ = −A −1 bũ. Then ỹ = 0 for ũ ≠ 0 if and only if c A −1 b = 0, and ỹ = ũ if and only if c A −1 b = −1.) Solution 2.8 (a) Since A B C 0 is invertible, for any K A + BK B = C 0 A B C 0 I 0 K I is invertible. Let A + BK B C 0 R1 R2 R3 R 4 = −1 I 0 0 I Then the 1, 2-block gives R 2 = −(A + BK) BR 4 and the 2, 2-block gives CR 2 = I, that is, I = −C(A + BK)−1 BR 4 Thus [ C (A + BK)−1 B ]−1 exists and is given by −R 4 . (b) We need to show that there exists N such that 0 = (A + BK)x̃ + BNũ ũ = Cx̃ The first equation gives x̃ = −(A + BK)−1 BN ũ Thus we need to choose N such that −C (A + BK)−1 BN ũ = ũ From part (a) we take N = [−C (A + BK)−1 B ]−1 = R 4 . -10- Linear System Theory, 2/E Solutions Manual Solution 2.10 For u (t) = ũ, x̃ is a constant nominal if and only if 0 = (A + Dũ) x̃ + bũ This holds if and only if bũ ∈ Im [ A + Dũ], that is, if and only if rank ( A + Dũ ) = rank A +Dũ bũ If A + Dũ is invertible, then x̃ = −(A + Dũ)−1 bũ (+) If A is invertible, then by continuity of the determinant det (A + Dũ) ≠ 0 for all ũ such that ũ is sufficiently small, and (+) defines a corresponding constant nominal. The corresponding linearized state equation is . x δ (t) = (A + Dũ) x δ (t) + [ b − D (A + Dũ)−1 bũ ] u δ (t) y δ (t) = C x δ (t) Solution 2.12 For the given nominal input, nominal output, and nominal initial state, the nominal solution satisfies . x̃ (t) = 1 x̃ 1 (t) − x̃ 3 (t) , x̃(0) = x̃ 2 (t) − 2 x̃ 3 (t) 0 −3 −2 1 = x̃ 2 (t) − 2 x̃ 3 (t) Integrating for x̃ 1 (t) and then x̃ 3 (t) easily gives the nominal solution x̃ 1 (t) = t, x̃ 2 (t) = 2 t − 3, and x̃ 3 (t) = t − 2. The corresponding linearized state equation is specified by 0 0 0 0 A = 1 0 −1 , B (t)= t , C = 0 1 −2 0 0 1 −2 It is unusual that the nominal input and nominal output are constants, but the linearization is time varying. Solution 2.14 Compute . . . . z (t) = x (t) − q (t) = A x (t) + Bu (t) + A −1 Bu (t) . = A x (t) − A[−A −1 Bu (t)] + A −1 Bu (t) . = A z (t) + A −1 Bu (t) . If at any value of ta > 0 we have x (ta ) = q (ta ), that is z (ta ) = 0, and u (t) = 0 for t ≥ ta , that is u (t) = u (ta ) for t ≥ ta , then z (t) = 0 for t ≥ ta . Thus x (t) = q (ta ) for t ≥ ta , and q (t) represents what could be called an ‘instantaneous constant nominal.’ -11- CHAPTER 3 Solution 3.2 Differentiating term k +1 of the Peano-Baker series using Leibniz rule gives σ1 t ∂ ___ ∂τ σ2 σk ∫τ A (σ1 ) ∫τ A (σ2 ) ∫τ ... ∫τ A (σk +1 ) d σk +1 . . . d σ1 t = A (t) ∫ A (σ2 ) σ2 ∫τ τ σk ∫τ A (σk +1 ) d σk +1 ... d ___ t− dτ . . . d σ2 σ2 σ1 t ∂ ___ + ∫ A (σ 1 ) ∂τ τ ∫τ A (σ2 ) ∫τ σk ∂ ___ = ∫ A (σ 1 ) ∂ τ τ τ τ d σk +1 . . . d σ1 σ2 σ1 t τ ∫τ A (σk +1 ) ... τ A (τ) ∫ A (σ2 ) ∫ . . . d σk +1 . . . d σ2 ∫τ A (σ2 ) ∫τ σk ... ∫τ A (σk +1 ) d σk +1 . . . d σ1 Repeating this process k times gives t ∂ ___ ∂τ σ1 σ2 ∫τ A (σ1 ) ∫τ A (σ2 ) ∫τ σk ... ∫τ A (σk +1 ) d σk +1 . . . 
d σ1 t = ∫ A (σ 1 ) σ1 τ ∫τ t σ1 = ∫ A (σ 1 ) τ ∫τ t σ1 = ∫ A (σ 1 ) τ ∫τ σk−1 ∫τ ... A (σ k ) ∂ ___ ∂τ σk ∫τ A (σk +1 ) d σk +1 σk−1 ... ∫τ σk σ2 A (σ 2 ) 0 − A (τ) + A (σ k ) ∫τ ∫τ 0 d σk +1 d σk . . . d σ1 d σk . . . d σ1 σk−1 ... ∫τ A (σ k ) d σ k . . . d σ 1 − A (τ) Recognizing this as term k of the uniformly convergent series for −Φ(t, τ) A (τ) gives ∂ ___ Φ(t, τ) = −Φ(t, τ) A (τ) ∂τ (Of course it is simpler to use the formula for the derivative of an inverse matrix given in Exercise 1.17.) -12- d ___ τ dτ Linear System Theory, 2/E Solutions Manual Solution 3.6 Writing the state equation as a pair of scalar equations, the first one is . −t ______ x 1 (t) x 1 (t) = 1 + t2 and an easy computation gives x 1o _________ (1 + t 2 )1/2 x 1 (t) = Then the second scalar equation then becomes x 1o . −4t _________ ______ x 2 (t) + x 2 (t) = 2 (1 + t 2 )1/2 1+t The complete solution formula gives, with some help from Mathematica, t . (1 + σ2 )3/2 1 _________ ________ d σ x 1o x 2o + ∫ x 2 (t) = 2 2 2 2 (1 + t ) 0 (1 + t ) = 2 (t 3 /4+5t/ 8)+(3/ 8) sinh−1 (t) 1+t 1 _√ ____________________________ ________ x 1o x + 2o (1 + t 2 )2 (1 + t 2 )2 If x 1o = 1, then as t →∞, x 2 (t) → 1/ 4, not zero. Solution 3.7 From the hint, letting t r (t) = ∫ v (σ)φ(σ) d σ to . we have r (t) = v (t)φ(t), and φ(t) ≤ ψ(t) + r (t) (*) Multiplying (*) through by the nonnegative v (t) gives v (t)φ(t) ≤ v (t)ψ(t) + v (t)r (t) or . r (t) − v (t)r (t) ≤ v (t)ψ(t) Multiply both sides by the positive quantity t − ∫ v ( τ) d τ e to to obtain t _d_ dt − ∫ v ( τ) d τ r (t)e t to ≤ v (t)ψ(t)e − ∫ v ( τ) d τ to Integrating both sides from to to t, and using r (to ) = 0 gives σ t − ∫ v ( τ) d τ r (t)e to t ≤ ∫ v (σ)ψ(σ)e to Multiplying through by the positive quantity -13- − ∫ v ( τ) d τ to dσ Linear System Theory, 2/E Solutions Manual t ∫ v ( τ) d τ t eo gives t ∫ v ( τ) d τ t r (t) ≤ ∫ v (σ)ψ(σ)e σ dσ to and using (*) yields the desired inequality. Solution 3.10 Multiply the state equation by 2 z T (t) to obtain . _d_ 2 z T (t) z (t) = dt z (t)2 n n n n = Σ Σ 2 zi (t)aij (t) zj (t) i =1 j =1 ≤ Σ Σ 2aij (t)zi (t)zj (t) , i =1 j =1 t ≥ to At each t ≥ to let a (t) = 2n 2 max 1 ≤ i, j ≤ n aij (t) Note a (t) is a continuous function of t, as a quick sample sketch indicates. Then, since zi (t) ≤ z (t), _d_ dt z (t)2 ≤ a (t)z (t)2 , t ≤ to Multiplying through by the positive quantity t − ∫ a (σ) d σ e gives to t − ∫ a (σ) d σ _d_ dt e to z (t)2 ≤ 0 , t ≤ to Integrating both sides from to to t and using z (to ) = 0 gives , t ≥ to z (t) = 0 which implies z (t) = 0 for t ≥ to . Solution 3.11 The vector function x (t) satisfies the given state equation if and only if it satisfies t t τ t to to to to x (t) = xo + ∫ A (σ) x(σ) d σ + ∫ ∫ E (τ, σ) x(σ) d σd τ + ∫ B (σ)u (σ) d σ Assuming there are two solutions, their difference z (t) satisfies t t τ to to to z (t) = ∫ A (σ) z(σ) d σ + ∫ ∫ E (τ, σ) z(σ) d σd τ Interchanging the order of integration in the double integral (Dirichlet’s formula) gives -14- Linear System Theory, 2/E Solutions Manual t t t z (t) = ∫ A (σ) z(σ) d σ + ∫ ∫ E (τ, σ) d τ z(σ) d σ to σ to t t ∫ = to A (σ) + ∫ E (τ, σ) d τ z(σ) d σ σ t ∆ = ∫ Â(t, σ) z (σ) d σ to Thus t t to to z (t) = ∫ Â(t, σ) z (σ) d σ ≤ ∫ Â(t, σ)z (σ) d σ By continuity, given T > 0 there exists a finite constant α such that Â(t, σ) ≤ α for to ≤ σ ≤ t ≤ to + T. Thus t z (t) ≤ ∫ α z (t) d σ , t ∈ [to , to +T ] to and the Gronwall-Bellman inequality gives than one solution. 
0 for t ∈ [to , to +T ], implying that there can be no more z (t) = Solution 3.13 From the Peano-Baker series, t t I + ∫ A (σ 1 ) d σ 1 + . . . + ∫ A (σ 1 ) Φ(t, τ) − τ τ σ1 ∫τ σk−1 ... ∫τ A (σ k ) d σ k . . . d σ 1 ∞ = Σ t ∫ A (σ 1 ) j =k +1 τ σ1 ∫τ σ j−1 ... ∫τ A (σ j ) d σ j . . . d σ 1 For any fixed T > 0 there is a finite constant α such that A (t) ≤ α for t ∈ [−T, T ], by continuity. Therefore ∞ Σ t ∫ A (σ 1 ) j =k +1 τ σ1 ∫τ σ j−1 ... ∫τ A (σ j ) d σ j . . . d σ1 ≤ ∞ Σ ∞ t j =k +1 ≤ t ∫ A (σ 1 ) τ σ1 σ j−1 ∫τ ... σ1 Σ ∫ A (σ1 ) ∫ j =k +1 τ τ ∫τ A (σ j ) d σ j . . . d σ1 σ j−1 ... ∫τ A (σ j ) d σ j . . . d σ1 . . . ≤ ∞ Σ j =k +1 ∞ α ∫ αj Σ j =k +1 ≤ Σ j =k +1 We need to show that given ε > 0 there exists K such that -15- ... τ ≤ ∞ σ j−1 t j ∫τ 1 d σ j . . . d σ1 t − τ j _______ j! (α2T) j ______ , t, τ ∈ [−T, T ] j! Linear System Theory, 2/E Solutions Manual ∞ Σ j =K +1 2T) j _(α_____ <ε j! (*) Using the hint, ∞ Σ j =k +1 ∞ ∞ (α2T)i (α2T)k +1 . ______ (α2T)k +1+i 2T) j ________ __________ _(α_____ ≤Σ =Σ j! ki i =0 (k +1)! i =0 (k +1+i)! If k > α2T, then ∞ Σ j =k +1 (α2T)k +1 1 (α2T)k +1 . _ _______ 2T) j ________ __________________ _(α_____ = ≤ (k−1)!(k +1)(k−α2T) 1 − α2T/k (k +1)! j! Because of the factorial in the denominator, given ε > 0 there exists a K > α2T such that (*) holds. Solution 3.15 Writing the complete solution of the state equation at t f , we need to satisfy tf Ho xo + H f Φ(t f , to ) xo + ∫ Φ(t f , σ)f (σ) d σ = h to (+) Thus there exists a solution that satisfies the boundary conditions if and only if tf h − Hf ∫ Φ(t f , σ)f (σ) d σ ∈ Im[ Ho + H f Φ(t f , to ) ] to There exists a unique solution that satisfies the boundary conditions if Ho + H f Φ(t f , to ) is invertible. To compute a solution x (t) satisfying the boundary conditions: (1) Compute Φ(t, to ) for t ∈ [to , t f ] (2) Compute Ho + H f Φ(t f , to ) tf (3) Compute ∫ Φ(t f , σ)f (σ) d σ to (4) Solve (+) for xo t (5) Set x (t) = Φ(t, to ) xo + ∫ Φ(t, σ)f (σ) d σ, t ∈ [to , t f ] to -16- CHAPTER 4 . Solution 4.1 An easy way to compute A (t) is to use A (t) = Φ(t, 0)Φ(0, t). This gives A (t) = −2t −1 1 −2t This A (t) commutes with its integral, so we can write Φ(t, τ) as the matrix exponential t Φ(t, τ) = exp ∫τ A (σ) d σ = exp −(t−τ)2 −(t−τ) (t−τ) −(t−τ)2 Solution 4.4 A linear state equation corresponding to the n th -order differential equation is . x (t) = ... 0 ... 0 . . . . x (t) . . ... 1 −a 0 (t) −a 1 (t) . . . −an−1 (t) 0 0 . . . 0 1 0 . . . 0 The corresponding adjoint state equation is . z (t) = 0 −1 . . . 0 0 ... ... . . . ... ... 0 0 . . . 0 −1 a 0 (t) a 1 (t) . . z (t) . an−2 (t) an−1 (t) th To put this in the form of an n -order differential equation, start with . zn (t) = −zn−1 (t) + an−1 (t) zn (t) . zn−1 (t) = −zn−2 (t) + an−2 (t) zn (t) These give .. . _d_ [ a (t) z (t) ] zn (t) = −zn−1 (t) + n−1 n dt _d_ [ a (t) z (t) ] = zn−2 (t) − an−2 (t) zn (t) + n−1 n dt Next, -17- Linear System Theory, 2/E Solutions Manual . zn−2 (t) = −zn−3 (t) + an−3 (t) zn (t) gives . d2 d3 _d_ [ a (t) z (t) ] + ____ ____ [ an−1 (t) zn (t) ] z (t) = z (t) − n−2 n n n−2 dt dt 2 dt 3 d2 _d_ [ a (t) z (t) ] + ____ [ an−1 (t) zn (t) ] = −zn−3 (t) + an−3 (t) zn (t) − n−2 n dt dt 2 Continuing gives the n th -order differential equation d n−2 d n−1 dn _____ _____ ____ [ an−2 (t) zn (t) ] (t) z (t) ] − [ a z (t) = n−1 n n dt n−2 dt n−1 dt n _d_ [ a (t) z (t) ] + (−1)n +1 a (t) z (t) + . . . 
+ (−1)n 1 n 0 n dt Solution 4.6 For the first matrix differential equation, write the transpose of the equation as (transpose and differentiation commute) .T X (t) = A T (t)X T (t) , X T (to ) = X To This has the unique solution X T (t) = ΦA T (t) (t, to )X To , so that X (t) = Xo Φ AT T (t) (t, to ) In the second matrix differential equation, let Φk (t, τ) be the transition matrix for Ak (t), k = 1, 2. Then it is easy to verify (Leibniz rule) that a solution is t to )Xo Φ T2 (t, to ) X (t) = Φ1 (t, + ∫ Φ1 (t, σ)F (σ)Φ T2 (t, σ) d σ to Or, one can generate this expression by using the obvious integrating factors on the left and right sides of the differential. equation. (To show this is a unique solution, show that the difference Z (t) between any two solutions satisfies Z (t) = A 1 (t)Z (t) + Z (t)A T2 (t), with Z (to ) = 0. Integrate both sides and apply the Bellman-Gronwall inequality to show Z (t) is identically zero.) Solution 4.9 Clearly A (t) commutes with its integral. Thus we compute exp 0 1 τ −1 0 t and then replace τ by ∫ a (σ) d σ. From the power series for the exponential, 0 exp 0 1 τ = −1 0 ∞ Σ k =0 ∞ = Σ k =0 = Σ k =0 ∞ 1 ___ k! 1 _____ (2k)! 1 _____ (2k)! 0 1k k τ −1 0 0 1 2k 2k τ + −1 0 (−1)k 0 0 (−1)k -18- ∞ Σ k =0 1 _ ______ (2k +1)! τ 2k + ∞ Σ k =0 0 1 2k +1 2k +1 τ −1 0 1 _ ______ (2k +1)! 0 (−1)k k +1 (−1) 0 τ 2k +1 Linear System Theory, 2/E Solutions Manual = = cos τ 0 + 0 cos τ cos τ sin τ −sin τ cos τ 0 sin τ −sin τ 0 Replacing τ as noted above gives Φ(t, 0). For sufficiency, suppose Φx (t, 0) = T (t)e Rt . Then T (0) = I and T (t) is continuously differentiable. Let z (t) = T −1 (t) x (t) so that Solution 4.10 Φz (t, 0) = T −1 (t)Φx (t, 0)T (0) = T −1 (t)T (t)e Rt = e Rt . Thus z (t) = R z (t). For necessity, suppose P (t) is a variable change that gives . z (t) = Ra z (t) Then Φz (t, 0) = e Ra t = P −1 (t)Φx (t, 0)P (0) that is, Φx (t, 0) = P (t)e Ra t P −1 (0) Let T (t) = P (t)P −1 (0) and R = P (0)Ra P −1 (0). Then Φx (t, 0) = T (t)P (0) e P −1 (0)RP (0)t P −1 (0) = T (t)P (0) [ P −1 (0)e Rt P (0) ] P −1 (0) = T (t)e Rt Solution 4.11 Suppose Φ(t, 0) = e A1t e A2t =e A1t Then . _d_ Φ(t, 0) = dt =e This implies A (t) = e A1t A1t e A1t e A2t ( A 1 +A 2 ) e A 2 t ( A 1 +A 2 ) e −A 1 t . e A 1 t e A 2 t −A t [ A 1 +A 2 ] e 1 . Therefore A (0) = A 1 +A 2 is clear, and . A t −A t A t −A t A (t) = A 1 e 1 ( A 1 +A 2 ) e 1 + e 1 ( A 1 +A 2 ) e 1 (−A 1 ) = A 1 A (t) − A (t) A 1 Conversely, assume A 1 and A 2 are such that . A (t) = A 1 A (t) − A (t) A 1 , A (0) = A 1 + A 2 This matrix differential equation has a unique solution (by rewriting it as a linear vector differential equation), and from the calculation above this solution is A (t) = e A1t ( A 1 + A 2 ) e −A 1 t Since -19- Linear System Theory, 2/E Solutions Manual _d_ dt we have that Φ(t, 0) = e A1t A2t e e A1t e A2t = A (t)e A1t A2t e , e A10 A20 e =I . Solution 4.13 Writing _∂_ Φ (t, τ) = A (t)Φ (t, τ) , Φ(τ, τ) = I A A ∂t in partitioned form shows that _∂_ Φ (t, τ) = A (t)Φ (t, τ) , Φ (τ, τ) = 0 21 22 21 21 ∂t Thus Φ21 (t, τ) is identically zero. But then _∂_ Φ (t, τ) = A (t)Φ (t, τ) , Φ (τ, τ) = I ii ii ii ii ∂t for i = 1, 2, and _∂_ Φ (t, τ) = A (t)Φ (t, τ) + A (t)Φ (t, τ) , Φ (τ, τ) = 0 12 11 12 12 22 12 ∂t Using Exercise 4.6 with F (t) = A 12 (t) Φ22 (t, τ) gives t Φ12 (t, τ) = ∫ Φ11 (t, σ) A 12 (σ) Φ22 (σ, τ) d σ τ Solution 4.17 We need to compute a continuously-differentiable, invertible P (t) such that t 1 = P −1 (t) 1 t . 
0 1 P (t) − P −1 (t)P (t) 2 2−t 2 t Multiplying on the left by P (t), the result can be written as a dimension-4 linear state equation. Choosing the initial condition corresponding to P (0) = I, some clever guessing gives 1 0 P (t) = t 1 Solution 4.23 Using the formula for the derivative of an inverse matrix given in Exercise 1.17, _∂_ Φ (−τ, −t) = _∂_ Φ −1 (−t, −τ) = −Φ −1 (−t, −τ) A A A ∂t ∂t = −Φ −1 A (−t, −τ) − = −Φ −1 A (−t, −τ) _∂_ Φ (−t, −τ) Φ −1 (−t, −τ) A A ∂t ∂ _____ ΦA (−t, −τ) Φ −1 A (−t, −τ) ∂(−t) −A (−t)ΦA (−t, −τ) Φ −1 A (−t, −τ) = Φ −1 A (−t, −τ) A (−t) = ΦA (−τ, −t) A (−t) Transposing gives -20- Linear System Theory, 2/E Solutions Manual _∂_ Φ T (−τ, −t) = A T (−t)Φ T (−τ, −t) A A ∂t Since Φ(−τ, −τ) = I, we have F (t) = A T (−t). Or we can use the result of Exercise 3.2 to compute: ∂ _∂_ Φ (−τ, −t) =− _____ ΦA (−τ, −t) = ΦA (−τ, −t)A (−t) A ∂(−t) ∂t This implies _∂_ Φ T (−τ, −t) = A T (−t)Φ (−τ, −t) A A ∂t Since Φ(−τ, −τ) = I, we have F (t) = A T (−t). Solution 4.25 We can write t+σ Φ(t + σ, σ) = I + ∫σ A (τ) d τ + ∞ t +σ Σ ∫ k =2 σ τ1 A (τ1 ) ∫ A (τ2 ) . . . τk−1 σ ∫σ A (τk ) d τk . . . d τ1 and e _ At (σ)t _ = I + At (σ)t + Then R (t, σ) = Φ(t + σ, σ) − e t +σ ∞ = Σ k =2 ∫σ _ At (σ)t ∞ Σ k =2 _ 1 k ___ A t (σ)t k k! τ1 A (τ1 ) ∫ A (τ2 ) . . . τk−1 σ ∫σ _ 1 k ___ A t (σ)t k A (τk ) d τk . . . d τ1 − k! From A (t) ≤ α and the triangle inequality, R (t, σ) ≤ ∞ 2 αk Σ k =2 k _t__ = α2 t 2 k! ∞ Σ k =2 2 k−2 k−2 ___ α t k! Using 1 2 ______ ___ , k ≥2 ≤ (k−2)! k! gives R (t, σ) ≤ α2 t 2 ∞ Σ k =2 = α2 t 2 e α t -21- 1 ______ αk−2 t k−2 (k−2)! CHAPTER 5 Solution 5.3 Using the series definition, which involves talent in series recognition, A 2k +1 = 0 1 , A 2k = 1 0 1 0 , k = 0, 1, . . . 0 1 gives = 0 t ___ 1 + t 0 2! e At = I + −t t 2 0 ___ 1 + 0 t2 3! −t (e +e )/ 2 (e −e )/ 2 = (e t −e −t )/ 2 (e t +e −t )/ 2 t t 0 t3 + ... t3 0 cosh t sinh t sinh t cosh t Using the Laplace transform method, 1 _____ 2 s −1 s _____ 2 s −1 (sI − A)−1 = s −1 −1 s −1 = s _____ 2 s −1 1 _____ 2 s −1 which gives again e At = cosh t sinh t sinh t cosh t Using the diagonalization method, computing eigenvectors for A and letting 1 1 P= 1 −1 gives P −1 AP = 1 0 0 −1 Then e At = P et 0 0 e −t P −1 = cosh t sinh t sinh t cosh t Solution 5.4 Since A (t) = t 1 1 t commutes with its integral, -22- Linear System Theory, 2/E Solutions Manual t ∫ A (σ) d σ Φ(t, 0) = e 0 t2/2 t t t2/2 = exp And since t2 / 2 0 , 0 t2/2 0 t t 0 commute, Φ(t, 0) = exp 1 0 2 t / 2 . exp 0 1 0 1 t 1 0 Using Exercise 5.3 gives Φ(t, 0) = 2 e t /2 0 2 0 e t /2 cosh t sinh t = sinh t cosh t 2 2 e t /2 cosh t e t /2 sinh t 2 2 e t /2 sinh t e t /2 cosh t Solution 5.7 To verify that t A ∫ e A σ d σ = e At − I 0 note that the two sides agree at t = 0, and the derivatives of the two sides with respect to t are identical. If A is invertible and all its eigenvalues have negative real parts, then limt → ∞ e At = 0. This gives ∞ A ∫ e Aσ d σ = − I 0 that is, ∞ 0 0 ∞ A −1 = − ∫ e A σ d σ = ∫ e A σ d σ Solution 5.9 Evaluating the given expression at t = 0 gives x (0) = 0. Using Leibniz rule to differentiate the expression gives t D ∫ u ( τ) d τ . _d_ e A (t−σ) e σ bu (σ) d σ x (t) = ∫ dt 0 t t t _∂_ ∂t = bu (t) + ∫ 0 e A (t−σ) e D ∫ u ( τ) d τ σ bu (σ) d σ t D ∫ u ( τ) d τ Using the product rule and differentiating the power series for e σ gives t t . 
x (t) = bu (t) + ∫ 0 Ae A (t−σ) e D ∫ u ( τ) d τ σ t bu (σ) + e A (t−σ) Du (t)e D ∫ u ( τ) d τ σ bu (σ) d σ If we assume that AD = DA, then e A (t−σ) D = De A (t−σ) and -23- Linear System Theory, 2/E Solutions Manual t t D ∫ u ( τ) d τ D ∫ u ( τ) d τ . x (t) = bu (t) + A ∫ e A (t−σ) e σ bu (σ) d σ + Du (t) ∫ e A (t−σ) e σ bu (σ) d σ t t 0 0 = A x (t) + Dx (t)u (t) + bu (t) Solution 5.12 We will show how to define β0 (t), . . . , βn−1 (t) such that n−1 Σ . βk (t)Pk = k =0 n−1 Σ n−1 Σ βk (0)Pk = I βk (t)APk , k =0 (*) k =0 which then gives the desired expression by Property 5.1. From the definitions, P 1 = AP 0 − λ1 I , P 2 = AP 1 − λ2 P 1 , . . . , Pn−1 = APn−2 − λn−1 Pn−2 Also Pn = (A−λn I)Pn−1 = 0 by the Cayley-Hamilton theorem, so APn−1 = λn Pn−1 . Now we equate coefficients of like Pk ’s in (*), rewritten as n−1 . n−1 Σ βk (t)Pk = Σ βk (t)[Pk+1 + λk +1 Pk ] k =0 k =0 to get equations for the desired βk (t)’s: . P 0 : β0 (t) = λ1 β0 (t) . P 1 : β1 (t) = β0 (t) + λ2 β1 (t) . . . . Pn−1 : βn−1 (t) = βn−2 (t) + λn βn−1 (t) that is, . β. 0 (t) β1 (t) . . . . = βn−1 (t) λ1 0 . . . 1 λ2 . . . 0 0 . . . 0 0 . . . 0 . . . . . . . . . 0 0 . . . λn−1 0 0 . . . 1 λn β0 (t) β1 (t) . . . βn−1 (t) With the initial condition provided by β0 (0) = 1, βk (0) = 0, k = 1, . . . , n−1, the analytic solution of this state equation provides a solution for (*). (The resulting expression for e At is sometimes called Putzer’s formula.) Solution 5.17 Write, by Property 5.11, Φ(t, to ) = P −1 (t)e R (t−to ) P (to ) where P (t) is continuous, T-periodic, and invertible at each t. Let S = P −1 (to )RP (to ) , Q (t, to ) = P −1 (t)P (to ) Then Q (t, to ) is continuous and invertible at each t, and satisfies Q (t +T, to ) = P −1 (t +T)P (to ) = P −1 (t)P (to ) = Q (t, to ) with Q (to , to ) = I. Also, -24- Linear System Theory, 2/E Solutions Manual Φ(t, to ) = P −1 (t) e P (to )SP −1 (to ) (t−to ) = Q (t, to )e P (to ) = P −1 (t)P (to ) e S(t−to ) P −1 (to )P (to ) S (t−to ) Solution 5.19 From the Floquet decomposition and Property 4.9, T ∫ tr [A (σ)] d σ det Φ(T, 0) = det e RT = e 0 Because the integral in the exponent is positive, the product of eigenvalues of Φ(T, 0) is greater than unity, which implies that at least one eigenvalue of Φ(T, 0) has magnitude greater than unity.Thus by the argument following Example 5.12 there exist unbounded solutions. Solution 5.20 Following the hint, define a real matrix S by e S 2T = Φ2 (T, 0) and set Q (t) = Φ(t, 0)e −St Clearly Q (t) is real and continuous, and Q (t +2T) = Φ(t +2T, 0)e −S (t +2T) = Φ(t +2T, T)Φ(T, 0)e −S 2T e −St = Φ(t +T, 0)Φ(T, 0)e −S 2T e −St = Φ(t +T, T)Φ2 (T, 0)e −S 2T e −St = Φ(t +T, T)e −St = Φ(t, 0)e −St = Q (t) That is, Q (t) is 2T-periodic. (For a proof of the hint, see Chapter 8 of D.L. Lukes, Differential Equations: Classical to Controlled, Academic Press, 1982.) Solution 5.22 The solution will be T-periodic for initial state xo if and only if xo satisfies (see text equation (32)) to +T [Φ −1 (to +T, to ) − I ] xo = ∫ Φ(to , σ)f(σ) d σ to This linear equation has a solution for xo if and only if to +T z To ∫ Φ(to , σ)f(σ) d σ = 0 (*) to for every nonzero vector zo that satisfies T [ Φ−1 (to +T, to ) − I ] zo = 0 The solution of the adjoint state equation can be written as T z (t) = [ Φ−1 (t, to ) ] zo Then by Lemma 5.14, (**) is precisely the condition that z (t) be T-periodic. Thus writing (*) in the form -25- (**) Linear System Theory, 2/E Solutions Manual to +T 0= ∫ to +T z To Φ(to , σ)f(σ) d σ = to ∫ z T (σ)f (σ) d σ to completes the proof. 
Solution 5.24 Note A = −A T , and from Example 5.9, e At = cos t sin t −sin t cos t Therefore all solutions of the adjoint equation are periodic, with period of the form k 2π, where k is a positive integer. The forcing term has period T = 2π /ω, where we assume ω > 0. The rest of the analysis breaks down into 3 cases. Case 1: If ω ≠ 1, 1/ 2, 1/ 3, . . . then the adjoint equation has no T-periodic solution, so the condition (Exercise 5.22) T ∫ z T (σ)f (σ) d σ = 0 (+) 0 holds vacuously. Thus there will exist corresponding periodic solutions. Case 2: If ω = 1, then T ∫z 0 T T (σ)f (σ) d σ = ∫ z To e A σ f (σ) d σ 0 T T 0 0 = −zo 1 ∫ sin2 (σ) d σ + zo 2 ∫ cos σ sin σ d σ ≠0 so there is no periodic solution. Case 3: If ω = 1/k, k = 2, 3, . . . , then since T T 0 0 ∫ cos σ sin (σ/k) d σ = ∫ sin σ sin (σ/k) d σ = 0 the condition (+) will hold, and there exist periodic solutions. In summary, there exist periodic solutions for all ω > 0 except ω = 1. -26- CHAPTER 6 If the state equation is uniformly stable, then there exists a positive γ such that for any to and xo the corresponding solution satisfies Solution 6.1 x (t) ≤ γxo , t ≥ to Given a positive ε, take δ = ε / γ. Then, regardless of to , xo ≤ δ implies x (t) ≤ γ δ = ε , t ≥ to Conversely, given a positive ε suppose positive δ is such that, regardless of to , xo ≤ δ implies x (t) ≤ ε, t ≥ to . For any ta ≥ to let xa be such that xa = 1 , Φ(ta , to )xa = Φ(ta , to ) Then xo = δ xa satisfies xo = δ, and the corresponding solution at t = ta satisfies x (ta ) = Φ(ta , to )xo = δΦ(ta , to ) ≤ ε Therefore Φ(ta , to ) ≤ ε / δ Such an xa can be selected for any ta , to such that ta ≥ to . Therefore Φ(t, to ) ≤ ε / δ for all t and to with t ≥ to , and we can take γ = ε / δ to obtain x (t) = Φ(t, to )xo ≤ Φ(t, to )xo ≤ γxo , t ≥ to This implies uniform stability. Solution 6.4 Using the fact that A (t) commutes with its integral, t Φ(t, τ) = e ∫ A (σ) d σ τ =I+ t−τ e −(t−τ) 1 ___ −e −(t−τ) + t−τ 2! t−τ e −(t−τ) −e −(t−τ) t−τ 2 + ... For any fixed τ, φ11 (t, τ) clearly grows without bound as t → ∞, and thus the state equation is not uniformly stable. Solution 6.6 Using elementary properties of the norm, -27- Linear System Theory, 2/E Solutions Manual Φ(t, τ) = I t t τ τ σ1 + ∫ A (σ ) d σ + ∫ A (σ 1 ) ∫τ A (σ2 ) d σ2 d σ1 + . . . t t τ τ = I + ∫ A (σ) d σ + ∫ A (σ1 ) σ1 ∫τ A (σ2 ) d σ2 d σ1 + . . . t t σ1 τ τ τ = 1 + ∫ A (σ) d σ + ∫ A (σ1 ) ∫ A (σ2 ) d σ2 d σ1 + ... (Be careful of t < τ.) Since A (t) ≤ α for all t, t Φ(t, τ) ≤ 1 + α∫ 1 d σ + α 2 ∫τ ∫τ 1 d σ2 d σ1 + . . . τ = 1 + αt−τ + α2 σ1 t t−τ2 _|_____ + ... 2! For | t−τ ≤ δ, Φ(t, τ) ≤ 2 2 δ _α ____ + ... 2! 1+α δ+ = eα δ Solution 6.8 See the proof of Theorem 15.2. Solution 6.10 Write Re [λ] = −η, where η > 0 by assumption, so that t e λt = t e −ηt , t ≥0 A simple maximization argument (setting the derivative to zero) gives t e −ηt ≤ 1 ∆ ___ =β , ηe t ≥0 so that t e λt ≤ β , t ≥0 Using this bound we can write t e λt = t e −ηt = t e −(η/2)t e −(η/2)t ≤ 2 −(η/2)t ___ e , ηe t ≥0 Similarly, t 2 e λt = t 2 e −ηt ≤ 4 −(η/4)t 2 . ___ 2 2 ___ ___ ___ e , t e −(η/4)t e −(η/4)t ≤ t e −(η/2)t = η e η e η e ηe and continuing we get, for any j ≥ 0, j +( j −1)+ +1 j _2___________ e −(η/2 )t , t ≥ 0 j (η e) ... t j e λt ≤ Therefore -28- t ≥0 Linear System Theory, 2/E Solutions Manual ∞ ∫ t j e λt dt ≤ 0 j +( j −1)+ +1 _2___________ (η e) j ... ∞ ∫ e −(η/2 )t dt j 0 j +( j −1)+ . . . +1 ≤ 2j _2___________ . ___ j η (η e) = 22j +( j −1)+ +1 _____________ e j Re [λ] j +1 ... 
By Theorem 6.4 uniform stability is equivalent to existence of a finite constant γ such that all t ≥ 0. Writing Solution 6.12 e At ≤ γ for e At = m σk t j−1 λ t ______ e k ( j−1)! Σ Σ Wkj k =1 j =1 where λ1 , . . . , λm are the distinct eigenvalues of A, suppose Re[λk ] ≤ 0 , k = 1, . . . , m (*) Re[λk ] = 0 implies σk = 1 λk t λ t Since t e is bounded if Re[λk ] < 0 (for any j), and e k = 1 if Re [λk ] = 0, it is clear that bounded for t ≥ 0. Thus (*) is a sufficient condition for uniform stability. A necessary condition for uniform stability is j−1 e At is Re[λk ] ≤ 0 , k = 1, . . . , m For if Re[λk ] > 0 for some k, the proof of Theorem 6.2 shows that e At grows without bound as t → ∞. The gap between this necessary condition and the sufficient condition is illustrated by the two cases 0 0 0 1 A= , A= 0 0 0 0 Both satisfy the necessary condition, neither satisfy the sufficient condition, and the first case is uniformly stable while the second case is not (unbounded solutions exist, as shown by easy computation of the transition matrix). (It can be shown that a necessary and sufficient condition for uniform stability is that each eigenvalue of A has nonpositive real part and any eigenvalue of A with zero real part has algebraic multiplicity equal to its geometric multiplicity.) Solution 6.14 Suppose γ, λ > 0 are such that Φ(t, to ) ≤ γ e −λ(t−to ) for all t, to such that t ≥ to . Then given any xo , to , the corresponding solution at t ≥ to satisfies x (t) = Φ(t, to )xo ≤ Φ(t, to )xo ≤ γ e −λ(t−to ) xo and the state equation is uniformly exponentially stable. Now suppose the state equation is uniformly exponentially stable, so that there exist γ, λ > 0 such that x (t) ≤ γ e −λ(t−to ) xo , t ≥ to for any xo and to . Given any to and ta ≥ to , choose xa such that Φ(ta , to )xa = Φ(ta , to ) , xa = 1 Then with xo = xa the corresponding solution at ta satisfies -29- Linear System Theory, 2/E Solutions Manual x (ta ) = Φ(ta , to )xa = Φ(ta , to ) ≤ γ e −λ(ta −to ) Since such an xa can be selected for any to and ta > to , we have Φ(t, τ) ≤ γ e −λ(t−τ) for all t, τ such that t ≥ τ, and the proof is complete. . Solution 6.18 The variable change z (t) = P −1 (t) x (t) yields z (t) = 0 if and only if . P −1 (t) A (t)P (t) − P −1 (t)P (t) = 0 . for all t. This clearly is equivalent to P (t) = A (t)P (t), which is equivalent to ΦA (t, τ) = P (t)P −1 (τ). Now, if P (t) is a Lyapunov transformation, that is P (t) ≤ ρ < ∞ and det P (t) ≥ η > 0 for all t, then ΦA (t, τ) ≤ P (t)P −1 (τ) ≤ P (t) P (τ)n−1 __________ det P (τ) ∆ ≤ ρn /η = γ for all t and τ. Conversely, suppose ΦA (t, τ) ≤ γ for P (t) ≤ all t and τ. Let P (t) = ΦA (t, 0). Then P (t) ≤ γ and P −1 (t)n−1 ___________ = P −1 (t)n−1 det P (t) det P −1 (t) for all t. Using P (t) ≥ 1/P −1 (t) gives det P (t) ≥ 1 __________ −1 P (t)n and since P −1 (t) = ΦA (0, t) ≤ γ, det P (t) ≥ 1 ___ γn Thus P (t) is a Lyapunov transformation, and clearly . P −1 (t) A (t)P (t) − P −1 (t)P (t) = 0 for all t. -30- CHAPTER 7 Solution 7.3 Let  = FA, and take Q = F −1 , which is positive definite since F is positive definite. Then since F is symmetric, T  Q + Q = A T FF −1 + F −1 FA = A T + A < 0 This gives exponential stability by Theorem 7.4. Solution 7.5 By our default assumptions, a (t) is continuous. Since Q is constant, symmetric, and positive definite, the first condition of Theorem 7.2 holds. 
Checking the second condition, −a (t) −a (t)/ 2 ≤0 A T (t)Q + QA (t) = −a (t)/ 2 −1 gives the requirements a (t) ≥ 0 , 4a (t) ≥ a 2 (t) Thus the state equation is uniformly stable if a (t) is a continuous function satisfying 0 ≤ a (t) ≤ 4 for all t. Solution 7.6 With Q(t) = . a (t) 0 , A T (t)Q(t) + Q(t)A (t) + Q (t) = 0 1 . a (t) 0 0 −4 we need to assume that a (t) is continuously differentiable and η ≤ a (t) ≤ ρ for some positive constants η and ρ so . that the first condition of Theorem 7.4 is satisfied. For the second condition we need to assume a (t) ≤ −ν, for some positive constant ν. Unfortunately this implies, taking any to , t . a (t) = a (to ) + ∫ a (σ) d σ ≤ a (to ) + ν to − ν t , t ≥ to to and for sufficiently large t the positivity condition on a (t) will be violated. Thus there is no a (t) for which the given Q (t) shows uniform exponential stability of the given state equation. Solution 7.9 We need to assume that a(t) is continuously differentiable. Consider Q (t) − η I = 2a (t)+1−η 1 Suppose there exists a small, positive constant η such that -31- 1 (t)+1 _a______ −η a (t) Linear System Theory, 2/E Solutions Manual η ≤ a (t) ≤ 1/ (2η) for all t. Then 2a (t) + 1 − η ≥ η + 1 > 1 1 (t)+1 ______ _a______ = 1+η > 1 −η ≥ 1+ 1/ (2η) a (t) and Q (t)−ηI ≥ 0, for all t, follows easily. Similarly, with ρ = (2η+1)/ η we can show ρI−Q (t) ≥ 0 using 1 η+1 ___ _2____ −1 = 1 −2 η 2η 1 η+1 (t)+1 _2____ ____ _a______ ≥1 −1− ≥ ρ− η a (t) a (t) ρ − 2a (t) − 1 ≥ Next consider . A (t)Q (t) + Q (t) A (t) + Q (t) = . 2a (t)−2a(t) T 0 0 . a (t) _____ −2a(t)− 2 a (t) ≤ −ν I This gives that for uniform exponential stability we also need existence of a small, positive constant ν such that . ν a 2 (t) − 2a 3 (t) ≤ a (t) ≤ a (t)−ν/2 for all t. For example, a (t) = 1 satisfies these conditions. Solution 7.11 Suppose that for every symmetric, positive-definite M there exits a unique, symmetric, positive-definite Q such that A T Q + QA + 2µQ = −M (*) (A + µ I)T Q + Q (A + µ I) = −M (**) that is, Then by the argument above Theorem 7.11 we conclude that all eigenvalues of A +µ I have negative real parts. That is, if 0 = det [ λI − (A +µ I) ] = det [ (λ − µ)I − A ] then Re [λ] < 0. Since µ > 0, this gives Re [λ − µ] < −µ, that is, all eigenvalues of A have real parts strictly less than −µ. Now suppose all eigenvalues of A have real parts strictly less than −µ. Then, as above, eigenvalues of A + µ I have negative real parts. Then by Theorem 7.11, given symmetric, positive-definite M there exists a unique, symmetric, positive-definite Q such that (**) holds, which implies (*) holds. Solution 7.16 For arbitrary but fixed t ≥ 0, let xa be such that xa = 1 , e At xa = e At By Theorem 7.11 the unique solution of QA + A T Q = −M is the symmetric, positive-definite matrix ∞ Q = ∫ e A σ Me A σ d σ T 0 Thus we can write -32- Linear System Theory, 2/E Solutions Manual ∞ ∞ ∫ x Ta e A σ Me A σ xa d σ ≤ ∫ x Ta e A σ Me A σ xa d σ T T t 0 = x Ta Qxa ≤ λmax (Q) = Q Also, using a change of integration variable from σ to τ = σ − t, ∞ ∞ ∫ x Ta e A σ Me A σ xa d σ = ∫ x Ta e A (t + τ) Me A(t + τ) xa d τ T T t 0 = x Ta e A t Qe At xa ≥ λmin (Q)e At xa 2 = T e At 2 _______ Q −1 Therefore e At 2 _______ ≤ Q Q −1 Since t was arbitrary, this gives Q −1 max e At ≤ √Q t≥0 Solution 7.17 Let F = A + (µ−ε)I. 
Then F ≤ A +µ−ε, all eigenvalues of F have real parts less than −ε, and e Ft = e At e (µ−ε)t Thus e At = e −(µ − ε)t e Ft (*) By Theorem 7.11 the unique solution of F Q + QF = −I is T ∞ Q = ∫ e F σ e Fσ d σ T 0 For any n × 1 vector x, T d T F Tσ Fσ ___ x e e x = x Te F σ [ F T + F ] e Fσ x dσ ≥ −F T + F x T e F σ e F σ x T (Exercise1.9) ≥ −2(A +µ−ε) x T e F σ e F σ x T Thus for any t ≥ 0, ∞ −x T e F t e Ft x = T ∫ t d ___ dσ x Te F σ e Fσ x T dσ ∞ ≥ −2 (A +µ−ε) ∫ x T e F σ e F σ x d σ T t ≥ −2 (A +µ−ε) x T Qx Therefore -33- Linear System Theory, 2/E Solutions Manual x T e F t e Ft x ≤ 2 (A +µ−ε) x T Qx , t ≥ 0 T which gives e Ft ≤ √2 ( A + µ − ε ) Q , t ≥ 0 Thus the desired inequality follows from (*). Solution 7.19 To show uniform exponential stability of A (t), write the 1,2-entry of A (t) as a (t), and let Q (t) = q (t) I, where 2+e −2t , t ≥ 1/ 2 q (t) = q ⁄ (t) , −1/ 2 < t < 1/ 2 3 , t ≤ −1/ 2 1 2 Here q ⁄ (t) is a continuously-differentiable ‘patch’ satisfying 2 ≤ q ⁄ (t) ≤ 3 for −1/ 2 < t < 1/ 2, and another condition to be specified below. Then we have 2 I ≤ Q (t) ≤ 3 I for all t. Next consider . . −2q (t)+q (t) a (t)q (t) A T (t)Q (t) + Q (t)A (t) + Q (t) = ≤ −ν I . a (t)q (t) −6q (t)+q (t) 1 1 2 2 We choose ν = 1 and show that . −2q (t)+q (t)+1 a (t)q (t) ≤0 . a (t)q (t) −6q (t)+q (t)+1 . for all t. With t < −1/ 2 or t > 1/ 2 it is easy to show that q (t)−q (t)−1 ≥ 0, and a patch function can be sketched such that this inequality is satisfied for −1/ 2 < t < 1/ 2. Then, for all t, . . −2q (t)+q (t)+1 ≤ −q (t) ≤ 0 , −6q (t)+q (t)+1 ≤ −5q (t) ≤ 0 . . [−2q (t)+q (t)+1][−6q (t)+q (t)+1] − a 2 (t)q 2 (t) ≥ [5−a 2 (t)]q 2 (t) ≥ 4q 2 (t) ≥ 0 Thus we have proven uniform exponential stability. To show A T (t) is not uniformly exponentially stable, write the state equation as two scalar equations to compute ΦA T (t) (t, 0) = e −t 0 (e t −e −3t )/ 4 e −3t , t ≥0 and the existence of unbounded solutions is clear. Using the characterization of uniform stability in Exercise 6.1, given ε > 0, let δ = β−1 (α(ε)). Then δ > 0, since α(ε) > 0, and the inverse exists since β(.) is strictly increasing. Then for any to , and any xo such that xo ≤ δ, the corresponding solution is such that Solution 7.20 v (t, x (t)) ≤ v (to , xo ) ≤ β(xo ) ≤ β(δ) = α(ε) , t ≥ to Therefore α(x (t)) ≤ v (t, x (t)) ≤ α(ε) , t ≥ to But since α(.) is strictly increasing, this gives x (t) ≤ ε , t ≥ to , and thus the state equation is uniformly stable. -34- CHAPTER 8 Solution 8.3 No. The matrix −2 √8 0 −1 A= has negative eigenvalues, but −4 √8 √8 −2 A + AT = has an eigenvalue at zero. Solution 8.6 Viewing F (t)x (t) as a forcing term, for any to , xo , and t ≥ to we can write t x (t) = ΦA +F (t, to ) xo = ΦA (t, to ) xo + ∫ ΦA (t, σ)F (σ) x(σ) d σ to which gives, for suitable constants γ, λ > 0, x (t) ≤ γ e −λ(t−to ) t xo + ∫ γ e −λ(t−σ) F (σ)x(σ) d σ to Thus t e λt x (t) ≤ γ e λto xo + ∫ γF (σ) e λσ x(σ) d σ to and the Gronwall-Bellman inequality (Lemma 3.2) implies t e λt x (t) ≤ γ e λto xo e Therefore -35- ∫ γF (σ) d σ to Linear System Theory, 2/E Solutions Manual t x(t) ≤ γ e −λ(t−to ) ∫ γF (σ) d σ t eo xo ∞ ≤γe −λ(t−to ) ≤γe −λ(t−to ) ∫ γF (σ) d σ t eo eγ β xo xo and we conclude the desired uniform exponential stability. Solution 8.8 We can follow the proof of Theorem 8.7 (first and last portions) to show that the solution ∞ Q (t) = ∫ e A T (t)σ e A (t)σ d σ 0 of A T (t)Q (t) + Q (t) A (t) = −I is continuously-differentiable and satisfies, for all t, ηI ≤ Q (t) ≤ ρI where η and ρ are positive constants. Then with . 
F (t) = A (t) − 1⁄2Q −1 (t)Q (t) an easy calculation shows . F T (t)Q (t) + Q (t)F (t) + Q (t) = A T (t)Q (t) + Q (t) A (t) = −I Thus . x (t) = F (t) x (t) is uniformly exponentially stable by Theorem 7.4. Solution 8.9 As in Exercise 8.8 we have, for all t, ηI ≤ Q (t) ≤ ρI which implies Q −1 (t) ≤ 1 __ η Also, by the middle portion of the proof of Theorem 8.7, . . Q (t) ≤ 2A (t)Q (t)2 Therefore . 1⁄2Q −1 (t)Q (t) ≤ 2 _βρ ___ η for all t. Write . . . x (t) = A (t) x (t) = [ A (t) − 1⁄2Q −1 (t)Q (t) ] x (t) + 1⁄2Q −1 (t)Q (t) x (t) . ∆ = F (t) x (t) + 1⁄2Q −1 (t)Q (t) x (t) -36- Linear System Theory, 2/E Solutions Manual Then the complete solution formula gives t . x (t) = ΦF (t, to ) xo + ∫ ΦF (t, σ) 1⁄2Q −1 (σ)Q (σ) x(σ) d σ to and the result of Exercise 8.8 implies that there exists positive constants γ, λ such that, for any to and t ≥ to , x (t) ≤ γ e −λ(t−to ) t xo + ∫ γ e −λ(t−σ) 2 _βρ ___ x(σ) d σ η to Therefore t e λt x (t) ≤ γ e λto xo + ∫ to 2 _γβρ ____ e λσ x(σ) d σ η and the Gronwall-Bellman inequality (Lemma 3.2) implies t e λt x (t) ≤ γ e λto xo e ∫ γβρ2 /η d σ to Thus x (t) ≤ γ e −(λ−γβρ2 /η)(t−to ) xo Now, writing the left side as ΦA (t, to )xo and for any to and t ≥ to choosing the appropriate unity-norm xo gives ΦA (t, to ) ≤ γ e −(λ−γβρ2 /η)(t−to ) For β sufficiently small this gives the desired uniform exponential stability. (Note that Theorem 8.6 also can be . used to conclude that uniform exponential stability of x (t) = F (t) x (t) implies uniform exponential stability of . . x (t) = [ F (t) + 1⁄2Q −1 (t)Q (t) ] x (t) = A (t) x (t) for β sufficiently small.) With F (t) = A (t) + (µ / 2)I we have that F (t) satisfy Re [λF (t)] ≤ −µ / 2. The unique solution of Solution 8.10 F(t) ≤ α + µ / 2, . . F (t) = A (t), and the eigenvalues of F T (t)Q (t) + Q (t)F (t) = −I is ∞ Q (t) = ∫ e F T (t)σ e F (t)σ d σ 0 As in the proof of Theorem 8.7, there is a constant ρ such that Q (t) ≤ ρ for all t. Now, for any n × 1 vector z, T d T F T (t)σ F (t)σ ___ z e e z = z T e F (t)σ [ F T (t) + F (t) ] e F (t)σ z dσ ≥ −(2α + µ) z T e F Thus for any τ ≥ 0, -37- T (t)σ e F (t)σ z Linear System Theory, 2/E Solutions Manual ∞ −z T e F T (t)τ e F (t)τ z = ∫τ d ___ dσ z Te F T (t)σ e F (t)σ z dσ ∞ ≥ −(2α + µ) ∫ z T e F T (t)σ e F (t)σ z d σ T (t)σ e F (t)σ z d σ τ ∞ ≥ −(2α + µ) ∫ z T e F 0 ≥ −(2α + µ) z T Q (t) z Thus eF T (t)τ e F (t)τ ≤ (2α + µ) Q (t) , τ ≥ 0 and using e F(t)τ = e A(t)τ e (µ /2) τ , τ ≥ 0 gives e A(t)τ ≤ α + µ )ρ e (−µ /2) τ , √(2 τ≥0 Solution 8.11 Write (the chain rule is valid since u (t) is a scalar) . q (t) = −A −1 (u (t)) . . db dA ___ ___ (u (t))u (t) (u (t))u (t) A −1 (u (t))b (u (t)) − A −1 (u (t)) du du . = −B̂(t)u (t) ∆ Then . x (t) = A (u (t)) x (t) + b (u (t)) = A (u (t)) [ x (t) − q (t) ] + A (u (t))q (t) + b (u (t)) = A (u (t)) [ x (t) − q (t) ] gives _d_ [ x (t) − q (t) ] = A (u (t)) [ x (t) − q (t) ] + B̂(t)u. (t) dt (*) Since . . dA dA ___ _d_ A (u (t)) = ___ (u (t))u (t) (u (t))u (t) = du du dt . we can conclude from Theorem 8.7 that for δ sufficiently small, and u (t) such that u (t) ≤ δ for all t, there exist positive constants γ and η (depending on u (t)) such that ΦA (u (t)) (t, σ) ≤ γ e −η (t−σ) , t ≥σ≥0 But the smoothness assumptions on A (.) and b (.) and the bounds on u (t) also give that there exists a positive constant β such that B̂(t) ≤ β for t ≥ 0. Thus the solution formula for (*) gives x (t) − q (t) ≤ γx (0) − q (0) + γ βδ / η for u (t) as above, and the claimed result follows. -38- , t ≥0 CHAPTER 9 Solution 9.7 Write B (A−βI)B (A−βI)2 B . . . 
= B A 2 B−2βAB+β2 B . . . AB−βB B AB A 2 B . . . = Im −β Im β 2 Im 0 Im −2βIm 0 0 Im 0 0 0 . . . . . . . . . ... ... ... ... . . . Clearly the two controllability matrices have the same rank. (The solution is even easier using rank tests from Chapter 13.) Solution 9.8 Since A has negative-real-part eigenvalues, ∞ Q = ∫ e At BB T e A t dt T 0 is well defined, symmetric, and ∞ T AQ + QA = ∫ 0 ∞ = _d_ dt ∫ 0 T T Ae At BB T e A t + e At BB T e A t A T e At BB T e A T t dt dt = −BB T Also it is clear that Q is positive semidefinite. If it is not positive definite, then for some nonzero, n × 1 x, ∞ 0 = x Qx = ∫ x T e At BB T e A t x dt T T 0 ∞ = ∫ x T e At B 2 dt 0 Thus x e B = 0 for all t ≥ 0, and it follows that T At -39- Linear System Theory, 2/E Solutions Manual dj ___ dt j 0= x T e At B = x TA jB t =0 for j = 0, 1, 2, . . . . But this implies x T B AB . . . A n−1 B =0 which contradicts the controllability hypothesis. Thus Q is positive definite. Solution 9.9 Suppose λ is an eigenvalue of A, and p is a corresponding left eigenvector. Then p ≠ 0, and p TA = λ p T This implies both _ p HA = λ p H , Now suppose Q is as claimed. Then A T p = λp _ p H AQp + p H QA T p = λ p H Qp + λ p H Qp = −p H BB T p that is, 2Re [λ] p H Q p = −p H BB T p (*) This gives Re [λ] ≤ 0 since Q is positive definite. Now suppose Re [λ] = 0. Then (*) gives p H B = 0. Also, for j = 1, 2, . . . , _ _ p H A j B = λ p H A j−1 B = . . . = λ j p H B = 0 Thus p H B AB . . . A n−1 B =0 which contradicts the controllability assumption. Therefore Re [λ] < 0. Solution 9.10 Let ∆ tf Wy (to , t f ) = ∫ C (t f )Φ(t f , t)B (t)B T (t)ΦT (t f , t)C T (t f ) dt to If Wy (to , t f ) is invertible, given any x(to ) = xo choose u (t) = −B T (t)ΦT (t f , t)C T (t f )W −1 y (to , t f )C (t f )Φ(t f , to ) xo Then the corresponding complete solution of the state equation gives tf y (t f ) = C (t f )Φ(t f , to ) xo − ∫ C (t f )Φ(t f , σ)B (σ)B T (σ)ΦT (t f , σ)C T (t f ) d σ W −1 y (to , t f ) C (t f )Φ(t f , to ) xo to =0 and we have shown output controllability on [to , t f ].. -40- Linear System Theory, 2/E Solutions Manual Now suppose the state equation is output controllable on [to , t f ], but that Wy (to , t f ) is not invertible. Then there exists a p × 1 vector ya ≠ 0 such that y Ta Wy (to , t f )ya = 0. Using by now familiar arguments, this gives y Ta C (t f )Φ(t f , t)B (t) = 0 , t ∈ [to , t f ] Consider the initial state xo = Φ(to , t f )C T (t f )[ C (t f )C T (t f ) ]−1 ya which is well defined and nonzero since rank C (t f ) = p. There exists an input ua (t) such that tf 0 = C (t f )Φ(t f , to ) xo + ∫ C (t f )Φ(t f , σ)B (σ)ua (σ) d σ to tf = ya + ∫ C (t f )Φ(t f , σ)B (σ)ua (σ) d σ to Premultiplying by y Ta gives 0= y Ta ya This contradicts ya ≠ 0, and thus Wy (to , t f ) is invertible. The rank assumption on C (t f ) is needed in the necessity proof to guarantee that xo is well defined. For m = p = 1, invertibility of Wy (to , t f ) is equivalent to existence of a ta ∈ (to , t f ) such that C (t f )Φ(t f , ta )B (ta ) ≠ 0 That is, there exists a ta ∈ (to , t f ) such that the output response at t f to an impulse input at ta is nonzero. Solution 9.11 From Exercise 9.10, since rank C = p, the state equation is output controllable if and only if for some fixed t f > 0, ∆ tf Wy = ∫ Ce A (t f −t) BB T e A T (t f −t) C T dt 0 is invertible. We will show this holds if and only if rank CB CAB . . . CA n−1 B =p by showing equivalence of the negations. 
If Wy is not invertible, there exists a nonzero p × 1 vector ya such that y Ta Wy ya = 0. Thus y Ta Ce A (t f −t) B = 0 , t ∈ [0, t f ] Differentiating repeatedly, and evaluating at t = t f gives y Ta CA j B = 0 , j = 0, 1, . . . Thus y Ta CB CAB . . . CA n−1 B =0 and this implies rank CB CAB . . . CA n−1 B <p Conversely, if the rank condition fails, then there exists a nonzero ya such that y Ta CA j B = 0, j = 0, . . . , n−1. Then -41- Linear System Theory, 2/E Solutions Manual y Ta Ce A (t f −t) n−1 Σ αk (t f −t) A k B = 0 , B = y Ta C t ∈ [0, t f ] k =0 Therefore y Ta Wy ya = 0, which implies that Wy is not invertible. For m = p = 1 argue as in Solution 9.10 to show that a linear state equation is output controllable if and only if its impulse response (equivalently, transfer function) is not identically zero. Solution 9.17 Beginning with y (t) = c (t)x (t) . . . y (t) = c (t)x (t) + c (t)x (t) . = [c (t) + c (t)A (t)]x (t) + c (t)b (t)u (t) = L 1 (t)x (t) + L 0 (t)b (t)u (t) it is easy to show by induction that k−1 y (k) (t) = Lk (t)x (t) + Σ j =0 Now if d k −j −1 _______ [ L j (t)b (t)u (t) ] , k = 1, 2, . . . dt k −j −1 __ −1 ∆ Ln (t)M = α0 (t) α1 (t) . . . αn −1 (t) then n −1 Σ αi (t)Li (t) = i =0 α0 (t) . . . αn −1 (t) L 0 (t) . . = Ln (t) . Ln −1 (t) Thus we can write y (n) (t) − n −1 n−1 αi (t)y (i) (t) = Ln (t)x (t) + Σ Σ i =0 j =0 − n −1 n −1 i−1 Σ αi (t)Li (t)x (t) − iΣ=0 αi (t) jΣ=0 i =0 n−1 = d n −j −1 _______ [ L j (t)b (t)u (t) ] dt n −j −1 Σ j =0 d i −j −1 ______ [ L j (t)b (t)u (t) ] dt i −j −1 n −1 i−1 d i −j −1 d n −j −1 ______ _______ [ L j (t)b (t)u (t) ] [ L (t)b (t)u (t) α (t) ] − j i Σ Σ i −j −1 dt n −j −1 i =0 j =0 dt This is in the desired form of an n th -order differential equation. -42- CHAPTER 10 Solution 10.2 We show equivalence of full-rank failure in the respective controllability and observability matrices, and thus conclude that one realization is controllable and observable (minimal) if and only if the other is controllable and observable (minimal). First, rank B AB . . . A n−1 B <n if and only if there exits a nonzero, n × 1 vector q such that q T B = q T AB = . . . = q T A n−1 B = 0 This holds if and only if q T B = q T (A+BC)B = . . . = q T (A+BC)n−1 B = 0 which is equivalent to rank B (A+BC)B . . . (A+BC)n−1 B <n Similarly, rank C CA . . . CA n−1 <n if and only if there exists a nonzero, n × 1 vector p such that Cp = CAp = . . . = CA n−1 p = 0 This is equivalent to Cp = C (A+BC)p = . . . = C (A+BC)n−1 p = 0 which is equivalent to rank C C (A+BC) . . . C (A+BC)n−1 Solution 10.9 Since -43- <n Linear System Theory, 2/E Solutions Manual C (t)B (σ) = H (t)F (σ) (*) for all t, σ, picking an appropriate to and t f > to , tf Mx (to , t f )Wx (to , t f ) = ∫ C (t)H (t) dt T to tf ∫ F(σ)B T (σ) d σ (**) to where the left side is a product of invertible matrices by minimality. Therefore the two matrices on the right side are invertible. Let tf T P −1 = M −1 x (to , t f ) ∫ C (t)H (t) dt to T Then multiply both sides of (*) by C (t) and integrate with respect to t to obtain tf Mx (to , t f )B (σ) = ∫ C T (t)H (t) dt F (σ) to for all σ. That is, B (σ) = P −1 F (σ) for all σ. Similarly, (*) gives tf C (t)Wx (to , t f ) = H (t) ∫ F(σ)B T (σ) d σ to that is, tf C (t) = H (t) ∫ F(σ)B T (σ) d σ W −1 x (to , t f ) to But (**) then gives tf tf ∫ F(σ)B T (σ) d σ W −1 ∫ C T (t)H (t) dt x (to , t f ) = to to −1 Mx (to , t f ) = P so we have C (t) = H (t)P for all t. Noting that 0 = P −1 . 0 . 
P, we have that P is a change of variables relating the two zero-A minimal realizations. Since a change of variables always can be used to obtain a zero-A realization, this shows that any two minimal realizations of a given weighting pattern are related by a variable change. Solution 10.11 Evaluating X (t+σ) = X (t) X (σ) at σ = −t gives that X (t) is invertible, and X −1 (t) = X (−t) for all t. Differentiating with respect to t, and with respect to σ, and using ∂ _∂_ X (t+σ) = ___ X (t+σ) ∂σ ∂t gives -44- Linear System Theory, 2/E Solutions Manual _d_ X (t) X (σ) = X (t) dt d ___ X (σ ) dσ which implies d ___ X (σ) = X (−t) dσ _d_ X (t) X (σ) dt Integrate both sides with respect to t from a fixed to to a fixed t f > to to obtain tf d ___ X (σ) = ∫ X (−t) (t f − to ) dσ to _d_ X (t) dt X (σ) dt Now let A= 1 _____ t f −to tf ∫ X (−t) to _d_ X (t) dt dt to write d ___ X (σ) = A X (σ) , X (0) = I dσ This implies X (σ) = e A σ . (Of course there are quicker ways. For example note that d ∂ ___ _∂_ X (t+σ) = ___ X (σ ) X (t+σ) = X (t) dσ ∂σ ∂t . . Evaluating at σ = 0 gives X (t) = X (t)X (0), which implies . . X (t) = X (0)e X (0)t = e X (0) t Also the result holds for continuous solutions of the functional equation, though the proof is much more difficult.) Solution 10.12 If rank Gi = ri we can write (admittedly using a matrix factorization unreviewed in the text) Gi = Ci Bi where Ci is p × ri , Bi is ri × m, and both have rank ri . Then it is easy to check that A = block diagonal { −λ i Ir i , i = 1, . . . , r }, B= B1 . . . Br , C= C1 . . . Cr is a realization of G (s) of dimension r 1 + . . . + rr = n. We need only show that this realization is controllable and observable. Write B1 0 . . . 0 . . . λ n−1 I 1 m 0 B 2 . . . 0 Im λ 1 Im . . . . . . . . B AB . . . A n−1 B = . . . . . . . . . . . . . . . . . . . λ n−1 I 0 0 . . . Br Im λr Im r m On the right side the first matrix has rank n, while the second is invertible due to its Vandermonde structure and the fact that λ1 , . . . , λr are distinct. This shows controllability. A similar argument shows observability. (Controllability and observability can be shown more easily using rank tests developed in Chapter 13.) -45- CHAPTER 11 Solution 11.4 Since rank b Ab = rank 1 −1 =1 1 −1 the state equation is not minimal. It is easy to compute the impulse response: G (t, σ) = C (t)e A (t−σ) B = (t 2 + 1) e −(t−σ) Then a factorization is obvious, giving a minimal realization . x (t) = e t u (t) y (t) = (t 2 + 1)e −t x (t) Solution 11.7 For the given impulse response, Γ22 (t, σ) = 1+e 2t / 2+e 2σ / 2 e 2σ 0 e 2t It is easy to check that rank Γ22 (t, σ) = 2 for all t, σ, and a little more calculation shows that rank Γ33 (t, σ) = 2. Then a minimal realization is, using formulas in the proof of Theorem 11.3, F(t, σ) = Γ22 (t, σ) Fc (t, σ) = 1+e 2t / 2+e 2σ / 2 e 2σ C (t) = Fc (t, t)F −1 (t, t) = 1 0 B (t) = Fr (t, t) = 1+e 2t e 2t A (t) = Fs (t, t)F −1 (t, t) = e 2t 0 −1 F (t, t) = 2e 2t 0 Solution 11.12 The infinite Hankel matrix is -46- 0 1 0 2 Linear System Theory, 2/E Solutions Manual Γ= 1 1 1 . . . 1 ... 1 ... 1 ... . . . . . . 1 1 1 . . . and clearly the rank condition in Theorem 11.7 is satisfied with l = k = n = 1. Then, following the proof of Theorem 11.7, F = Fs = Fc = Fr = H 1 = H s1 = 1 and a minimal (dimension-1) realization is . x (t) = x (t) + u (t) y (t) = x (t) For the truncated sequence, Γ= 1 1 1 0 . . . 1 1 0 0 . . . 1 0 0 0 . . . ... ... ... ... . . . 0 0 0 0 . . . The rank condition in Theorem 11.7 is satisfied with l = k = n = 3. 
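(The Hankel rank computations invoked here are easy to double-check numerically; a small sketch, assuming NumPy, using the scalar Markov parameter sequences of this solution:)

```python
# Rank of the rows x cols Hankel matrix built from scalar Markov parameters.
import numpy as np

def hankel_rank(markov, rows, cols):
    H = np.array([[markov[i + j] for j in range(cols)] for i in range(rows)])
    return np.linalg.matrix_rank(H)

print(hankel_rank([1] * 20, 3, 3))              # 1: the all-ones sequence
print(hankel_rank([1, 1, 1] + [0] * 20, 3, 3))  # 3: the truncated sequence
```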
Taking F = H3 = 1 1 1 1 1 0 , Fs = H s3 = 1 0 0 Fc = 1 1 1 , Fr = 1 1 0 1 0 0 0 0 0 1 1 1 gives a minimal realization specified by A = Fs F −1 = 0 1 0 0 0 1 , B= 0 0 0 1 1 , C= 1 0 0 1 (This is an example of ‘Silverman’s formulas’ in Exercise 11.13. Also, it is not hard to see that truncation of the sequence after any finite number n of 1’s will lead to a minimal realization of dimension n.) Solution 11.13 Writing the rank-n infinite Hankel matrix as Γ= G0 G1 . . . G1 G2 . . . . . . . . . . . . Gn−1 Gn . . . . . . . . . . . . suppose for some 1 ≤ i ≤ n a left-to-right column search yields that the first linearly dependent column is column i. Then there exist scalars α0 , . . . , αi−2 such that column i is given by the linear combination -47- Linear System Theory, 2/E Solutions Manual Gi−1 Gi . . . Gn−2+i . . . = α0 G0 G1 . . . + . . . +α i−2 Gn−1 . . . Gi−2 Gi−1 . . . Gn−3+i . . . By ignoring the top entry, this linear combination shows that column i +1 is given by the same linear combination of the i−1 columns to its left, and so on. Thus by the rank assumption on Γ there cannot exist such an i, and the first n columns of Γ are linearly independent. A similar argument shows that the first n columns of Γn,n +j are linearly independent, for every j ≥ 0, and thus that Γnn is invertible. It remains only to show that the given A, B, C provides a realization for G (s), since minimality is then immediate. Premultiplication by Γnn verifies Γ −1 nn Gk . . . Then, since A = Gn +k−1 Γ snn = ek +1 , k = 0, . . . , n−1 Γ −1 nn , A Gk . . . Gn +k−1 = Γ snn ek +1 = Gk +1 . . . Gn +k , k = 0, . . . , n−1 Now, CB = G 0 , and G0 . . . Gn−1 CA j B = CA j−1 A = ... =C = CA j−1 Gj . . . Gn−1+j G1 . . . Gn = G j , j = 1, . . . , n To complete the verification we use the fact that each dependent column of Γn,n +j is given by the same linear combination of n columns to its left. This follows by writing column n +1 of Γ as a linear combination of the first n (linearly independent) columns, and deleting partitions from the top of the resulting expression. This implies that multiplying any column of Γn,n +j by A gives the next column to the right. Thus CA n +j B = CA j Gn . . . G 2n−1 =C = Gn +j , j = 1, 2, . . . -48- Gn +j . . . G 2n−1+j CHAPTER 12 Solution 12.1 If the state equation is uniformly bounded-input, bounded-output stable, then it is clear from the definition that given δ we can take ε = η δ. Now suppose the ε, δ condition holds. In particular we can take δ = 1 and assume ε is such that, for any to , u (t) ≤ 1 , t ≥ to implies y (t) ≤ ε , t ≥ to Now suppose u (t) is any bounded input signal. Given to let µ = sup u (t). Note µ > 0 can be assumed, for t ≥ to otherwise we have a trivial case. Then u (t)/ µ ≤ 1 for all t ≥ to , and the zero-state response to u (t) satisfies t y (t) = ∫ G (t, σ)u (σ) d σ to t = µ ∫ G (t, σ)u (σ)/µ d σ to ≤ µε = ε sup u (t) , t ≥ to t ≥ to Thus we have sup y (t) ≤ ε sup u (t) t ≥ to t ≥ to and conclude uniform bounded-input, bounded-output stability, with η = ε. Solution 12.8 For any δ > 0, and constant A and B, t W (t−δ, t) = ∫ e A (t−δ−σ) BB T e A T (t−δ−σ) dσ t−δ Changing the variable of integration from σ to τ = t−σ yields δ W (t−δ, t) = e −A δ ∫ e A τ BB T e A τ d τ e −A T T δ 0 It is easy to prove (by showing the equivalence of the negations by contradiction, as in the proof of Theorem 9.5) that this is positive definite if and only if -49- Linear System Theory, 2/E Solutions Manual rank B AB . . . 
A^{n-1}B ] = n

Then given δ we can take

\varepsilon = \lambda_{\min}\left( e^{-A\delta} \int_0^{\delta} e^{A\tau} BB^T e^{A^T \tau} \, d\tau \; e^{-A^T \delta} \right)

For a time-varying example, take scalar a(t) = 0, b(t) = e^{-t/2}. Then

W(t-\delta, t) = \int_{t-\delta}^{t} e^{-\sigma} \, d\sigma = e^{-t}(e^{\delta} - 1)

Given any δ > 0, W(t−δ, t) > 0 for all t, but there exists no ε > 0 such that W(t−δ, t) ≥ ε for all t.

Solution 12.9  Consider a scalar state equation

\dot{x}(t) = b(t)u(t)
y(t) = x(t)

where b(t) is a 'smooth bump function' described as follows. It is a continuous, nonnegative function that is zero for t ∉ [0, 1] and has unit area on [0, 1]. Then for any input signal the zero-state response satisfies

| y(t) | \le \int_0^1 b(\sigma) \, | u(\sigma) | \, d\sigma

for any t. Thus for any t_o and any t ≥ t_o,

| y(t) | \le \sup_{t \ge t_o} | u(t) | \int_0^1 b(\sigma) \, d\sigma = \sup_{t \ge t_o} | u(t) |

and the state equation is uniformly bounded-input, bounded-output stable with η = 1. However, consider a bounded input that is continuous and satisfies

u(t) = 1 , \; 0 \le t \le 1 ; \qquad u(t) = 0 , \; t \ge 2

Then lim_{t→∞} u(t) = 0, but y(t) = 1 for t ≥ 1.

The result is true in the time-invariant case, however. Suppose

\int_0^\infty \| G(t) \| \, dt = \rho < \infty

and suppose u(t) is continuous, and u(t) → 0 as t → ∞. Then u(t) is bounded, and we let µ = sup_{t ≥ 0} ‖u(t)‖. Now given ε > 0, pick T_1 > 0 such that

\int_{T_1}^{\infty} \| G(t) \| \, dt \le \frac{\varepsilon}{2\mu}

and pick T_2 > 0 such that

\| u(t) \| \le \frac{\varepsilon}{2\rho} , \quad t \ge T_2

Let T = 2 max [T_1, T_2], and for t ≥ T split the convolution at σ = t/2, so that t − σ ≥ t/2 ≥ T_1 on the first piece while σ ≥ t/2 ≥ T_2 on the second. Then

\| y(t) \| \le \int_0^{t/2} \| G(t-\sigma) \| \, \| u(\sigma) \| \, d\sigma + \int_{t/2}^{t} \| G(t-\sigma) \| \, \| u(\sigma) \| \, d\sigma

Changing the variables of integration gives

\| y(t) \| \le \mu \int_{t/2}^{t} \| G(\tau) \| \, d\tau + \frac{\varepsilon}{2\rho} \int_0^{t/2} \| G(\tau) \| \, d\tau \le \mu \, \frac{\varepsilon}{2\mu} + \frac{\varepsilon}{2\rho} \, \rho = \varepsilon

This shows that y(t) → 0 as t → ∞.

Solution 12.11  The hypotheses imply that given ε > 0 there exist δ_1, δ_2 > 0 such that if

\| x_o \| < \delta_1 ; \quad \| u(t) \| < \delta_2 , \; t \ge t_o

where u(t) is n × 1, then the solution of

\dot{x}(t) = A(t)x(t) + u(t) , \quad x(t_o) = x_o

satisfies ‖x(t)‖ < ε for t ≥ t_o. In particular, with x_o = 0, this shows that if ‖u(t)‖ < δ_2 for t ≥ t_o, then the corresponding zero-state solution of the state equation

\dot{x}(t) = A(t)x(t) + u(t)
y(t) = x(t)    (*)

satisfies ‖y(t)‖ < ε for t ≥ t_o. But this implies uniform bounded-input, bounded-output stability by Exercise 12.1. Thus there exists a finite constant α such that the impulse response of (*), which is identical to the transition matrix of A(t), satisfies

\int_{t_o}^{t} \| \Phi(t, \sigma) \| \, d\sigma \le \alpha

for all t, t_o such that t ≥ t_o. Since A(t) is bounded, this gives uniform exponential stability of \dot{x}(t) = A(t)x(t) by Theorem 6.8.

Solution 12.12  Suppose the impulse response is G(t), where G(t) = 0 for t < 0. For u(t) = e^{−λt}, t ≥ 0,

\int_0^\infty y(t) e^{-\eta t} \, dt = \int_0^\infty \left[ \int_0^t G(t-\sigma) e^{-\lambda \sigma} \, d\sigma \right] e^{-\eta t} \, dt = \int_0^\infty \left[ \int_0^\infty G(t-\sigma) e^{-\eta t} \, dt \right] e^{-\lambda \sigma} \, d\sigma

where all integrals are well defined because of the stability assumption, and λ, η > 0. Changing the variable of integration in the inner integral from t to γ = t − σ gives

\int_0^\infty y(t) e^{-\eta t} \, dt = \int_0^\infty \left[ \int_0^\infty G(\gamma) e^{-\eta \gamma} \, d\gamma \right] e^{-\eta \sigma} e^{-\lambda \sigma} \, d\sigma = G(\eta) \int_0^\infty e^{-(\eta + \lambda)\sigma} \, d\sigma = \frac{1}{\eta + \lambda} \, G(\eta)

Without the stability assumption we can say that U(s) = 1/(s+λ) for Re[s] > −λ, and the integral for G(s) converges for Re[s] > Re[p_1], …, Re[p_n], where p_1, …, p_n are the poles of G(s). Thus

Y(s) = \frac{G(s)}{s + \lambda} = \int_0^\infty y(t) e^{-st} \, dt

is valid for Re[s] > −λ, Re[p_1], …, Re[p_n]. This implies that if η > −λ, Re[p_1], …, Re[p_n], then

\int_0^\infty y(t) e^{-\eta t} \, dt = \frac{G(\eta)}{\eta + \lambda}

even though y(t) may be unbounded.
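The identity just derived in Solution 12.12 can be spot-checked numerically. A minimal sketch, assuming SciPy, with the illustrative choice G(s) = 1/(s+1), λ = 2, η = 1, for which y(t) = e^{−t} − e^{−2t} and both sides equal 1/6:

```python
# Verify  ∫_0^∞ y(t) e^{-ηt} dt = G(η)/(η+λ)  on a simple example.
import numpy as np
from scipy.integrate import quad

lam, eta = 2.0, 1.0
y = lambda t: np.exp(-t) - np.exp(-2.0 * t)      # response of 1/(s+1) to e^{-2t}
lhs, _ = quad(lambda t: y(t) * np.exp(-eta * t), 0, np.inf)
rhs = (1.0 / (eta + 1.0)) / (eta + lam)          # G(η)/(η+λ) with G(s) = 1/(s+1)
print(lhs, rhs)                                  # both ≈ 1/6
```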
Given u (t), t ≥ 0, and xo , suppose x (t) is a solution of the given state equation. Then with v (t) = y (t) = C x (t) we have . x (t) = A x (t) + Bu (t) , x (0) = xo . z (t) = AP z (t) + AB(CB)−1 C x (t) Solution 12.14 = AP z (t) + A (I − P) x (t) , z (0) = xo Thus . . x (t) − z (t) = AP [ x (t) − z (t) ] + Bu (t) , x (0) − z (0) = 0 and this gives t x (t) − z (t) = ∫ e AP (t−σ) Bu (σ) d σ 0 Since PB = 0 and e AP (t−σ) = n−1 Σ αi (t−σ) (AP)i i =0 -52- Linear System Theory, 2/E Solutions Manual we get t x (t) − z (t) = ∫ α0 (t−σ)Bu (σ) d σ 0 Then . w (t) = −(CB)−1 CAP z (t) − (CB)−1 CAB(CB)−1 C x (t) + (CB)−1 C x (t) = −(CB)−1 CAP z (t) − (CB)−1 CAB(CB)−1 C x (t) + (CB)−1 CA x (t) + (CB)−1 CBu (t) = −(CB)−1 CAP z (t) + (CB)−1 CA [ −B(CB)−1 C + I ] x (t) + u (t) = (CB)−1 CAP[ x (t) − z (t) ] + u (t) t = (CB)−1 CAP ∫ α0 (t−σ)Bu (σ) d σ + u (t) 0 Again using PB = 0 gives w (t) = u (t) , t ≥ 0 To address stability, since PB = 0 we see that P is not invertible. Thus AP is not invertible, which implies the second state equation is never exponentially stable. The scalar case with A = −1, B = C = 1 is uniformly bounded-input, bounded-output stable, but the resulting . z (t) = −v (t) . w (t) = v (t) + v (t) is not, as the bounded input v (t) = cos (e t ) shows. -53- CHAPTER 13 Solution 13.1 Suppose n = 2 and A has complex eigenvalues. Let A= a 11 a 12 a 21 a 22 , b= b1 b2 Then A has eigenvalues a 11 +a 22 ± √(a 11 +a 22 )2 −4(a 11 a 22 −a 12 a 21 ) ____________________________________ 2 and since the eigenvalues are complex, (a 11 +a 22 )2 − 4(a 11 a 22 −a 12 a 21 ) = (a 11 −a 22 )2 + 4a 12 a 21 < 0 Supposing that det [ b (*) Ab ] = 0, we will show that if b ≠ 0 we get a contradiction. For 0 = det [ b Ab ] = a 21 b 21 − a 12 b 22 − (a 11 −a 22 )b 1 b 2 implies (a 11 −a 22 )2 b 21 b 22 = (a 21 b 21 −a 12 b 22 )2 (**) If b 1 = 0, b 2 ≠ 0, then (**) implies a 12 = 0, which contradicts (*). If b 1 ≠ 0, b 2 = 0, then (**) implies a 21 = 0, which contradicts (*). If b 1 ≠ 0, and b 2 ≠ 0, then multiplying (*) by b 21 b 22 and using (**) gives (a 21 b 21 −a 12 b 22 )2 + 4a 12 a 21 b 21 b 22 < 0 or, (a 21 b 21 +a 12 b 22 )2 < 0 which is a contradiction. Thus det [ b Ab ] ≠ 0 for every b ≠ 0. Conversely, suppose det [ b Ab ] ≠ 0 for every b ≠ 0. If A has real eigenvalues, let p be a left eigenvector of A corresponding to λ, and take b ≠ 0 such that b T p = 0. (Note b and p are real.) Then p TA = λ p T , p Tb = 0 which implies that the state equation is not controllable for this b, a contradiction. Therefore A cannot have real eigenvalues, so it must have complex eigenvalues. (For the more challenging version of the problem, we can show controllability for all nonzero b implies n = 2 by using a (real) P to transform A to real Jordan form. Then for n > 2 pick a left eigenvector of P −1 AP and a real b ≠ 0 such that p T P −1 b = 0 to obtain a contradiction.) -54- Linear System Theory, 2/E Solutions Manual Solution 13.4 We need to show that rank B AB . . . A n−1 B =n , if and only if the (n +p)-dimensional state equation . A 0 z (t) = z (t) + C 0 rank A B = n +p C D B u (t) D (+) (++) is controllable. First suppose (+) holds but (++) is not controllable. Then there exists a complex so such that rank Since rank [ so I−A so In −A 0 −C so Ip B < n +p D (*) B ] = n, this implies rank −C so Ip D <p In turn, this implies so = 0, so that (*) becomes rank −A 0 B < n +p −C 0 D and this contradicts the second rank condition in (+). Conversely, supposing (++) is controllable, then rank ... ... 
AB A 2 B CB CAB B D = n +p This implies rank B AB . . . A n−1 B =n in other words, the first rank condition in (+) holds. Now suppose A B rank < n +p C D Then rank so In −A 0 −C so Ip B D < n +p so = 0 that is, so In +p − A 0 C 0 B D < n +p so = 0 and this implies that (++) is not controllable. The contradiction shows that the second rank condition in (+) holds. rank Solution 13.5 Since J has a single eigenvalue λ, controllability is equivalent to the condition rank λ I−J B =n From the form of the matrix λ I−J it is clear that a necessary and sufficient condition for controllability is that the set of rows of B corresponding to zero rows of λ I−J must be a linearly independent set of 1 × m vectors. In the general Jordan form case, applying this condition for each eigenvalue λi gives a necessary and sufficient condition for controllability. (Note that independence of one set of such rows of B (corresponding to one distinct eigenvalue) from another set of such rows of B (corresponding to another distinct eigenvalue) is not required.) -55- Linear System Theory, 2/E Solutions Manual Solution 13.10 Since P −1 B (P −1 AP)P −1 B . . . (P −1 AP)n−1 P −1 B = P −1 B AB . . . A n−1 B and controllability indices are defined by a left-to-right linear independence search, it is clear that controllability indices are unaffected by state variable changes. For the second part, let rk be the number of linearly dependent columns in A k B that arise in the left-to-right column search of [ B AB . . . A n−1 B ]. Note r 0 = 0 since rank B = m. Then rk is the number of controllability indices that have value ≤ k. This is because for each of the rk columns of the form A k Bi that are dependent, we have ρi ≤ k, since for j > 0 the vector A k +j Bi also will be dependent on columns to its left. Thus for k = 1, . . . , m, rk −rk−1 gives the number of controllability indices with value k. Writing BG ABG . . . A k BG = B AB . . . A k B G 0 . . . 0 0 ... 0 G ... 0 . . . . . . . . . 0 ... G and using the invertibility of G shows that the same sequence of rk ’s are generated by left-to-right column search in [ BG ABG . . . A n−1 BG ]. Solution 13.11 For the time-invariant case, if p TA = p Tλ , p TB = 0 implies p = 0, then p T (A +BK) = p T λ , p T B = 0 obviously implies p = 0. Therefore controllability of the open-loop state equation implies controllability of the closed-loop state equation. In the time-varying case, suppose the open-loop state equation is controllable on [to , t f ]. Thus given x (to ) = xo there exists an input signal ua (t) such that the corresponding solution xa (t) satisfies xa (t f ) = 0. Then the closed-loop state equation . z (t) = [ A (t) + B (t)K (t) ] z (t) + B (t)v (t) with initial state z (to ) = xo and input va (t) = ua (t) − K (t) xa (t) has the solution z (t) = xa (t). Thus z (t f ) = 0. Since this argument applies for any xo , the closed-loop state equation is controllable on [to , t f ]. Solution 13.12 By controllability, we can apply a variable change to controller form, with  = Ao + Bo UP −1 = PAP −1 , B̂ = Bo R = PB Then we can choose K̂ such that  + B̂K̂ = 0 1 0 0 . . . . . . 0 0 −p 0 −p 1 Now we want to compute b̂ such that -56- ... 0 ... 0 . . . . . . ... 1 . . . −p n−1 Linear System Theory, 2/E Solutions Manual 0 0 . . . 0 1 B̂b̂ = Using × to denote various unimportant entries, set B̂b̂ = Bo Rb̂ = block diagonal 0 0 . . . 0 1 ρi × 1 , i = 1, . . . , m . 1 0 . . . 0 0 × 1 . . . 0 0 ... ... . . . ... ... × × . . b̂ = . × 1 0 0 . . . 
0 1 This gives a set of equations of the form m 0 = b̂ 1 + Σ ×i b̂i i =2 m 0 = b̂ 2 + Σ ×i b̂i i =3 . . . 0 = b̂m−1 + × b̂m 1 = b̂m Clearly there is a solution for the entries of b̂, regardless of the ×’s. Now it is easy to conclude controllability of the single-input state equation by calculation of the form of the controllability matrix. Then changing to the original state variables gives the result since controllability is preserved. In the original variables, take K = K̂P and b = b̂. For an example to show that b alone does not suffice, take Exercise 13.11 with all ×’s zero. Solution 13.14 Supposing the rank of the controllability matrix is q, Theorem 13.1 gives an invertible Pa such that P −1 a APa =  11  12 0  22 , P −1 a B = B̂ 1 0 , CPa = Ĉ 1 Ĉ 2 where  11 is q × q and the state equation defined by Ĉ 1 ,  11 , B̂ 1 is controllable. Now suppose Ĉ 1 Ĉ 1  11 . . . rank n−1 Ĉ 1  11 =l Applying Theorem 13.12 there is an invertible Pb such that with P= Pb 0 0 In−q we have -57- Linear System Theory, 2/E Solutions Manual P −1 (P −1 a APa )P = CPa P = à 11 0 à 13 à 21 à 22 à 23 0 0 à 33 C̃ 1 0 C̃ 2 , P −1 (P −1 a B) = B̃ 1 B̃ 2 0 (*) where à 11 is l × l, and in fact à 33 =  22 , C̃ 2 = Ĉ 2 . It is easy to see that the state equation formed from C̃ 1 , à 11 , B̃ 1 is both controllable and observable. Also an easy calculation using block triangular structure shows that the impulse response of the state equation defined by (*) is C̃ 1 e A˜ 11 t B̃ 1 It remains only to show that l = s. Using the effect of variable changes on the controllability and observability matrices and the special structure of (*) give C CA . . . CA n−1 B AB . . . A n−1 B = C̃ 1 C̃ 1 à 11 . . . n−1 C̃ 1 à 11 n−1 B̃ 1 à 11 B̃ 1 . . . à 11 B̃ 1 Thus rank C̃ 1 C̃ 1 à 11 . . . n−1 C̃ 1 à 11 B̃ 1 à 11 B̃ 1 . . . n−1 à 11 B̃ 1 =s But rank C̃ 1 C̃ 1 à 11 . . . l−1 C̃ 1 à 11 = rank l−1 B̃ 1 à 11 B̃ 1 . . . à 11 B̃ 1 and so we must have l = s. -58- =l CHAPTER 14 Solution 14.2 For any t f > 0, tf W = ∫ e −At BB T e −A t dt T 0 is symmetric and positive definite by controllability, and tf AW + WA = − ∫ T 0 =−e _d_ dt −At f e −At BB T e −A BB T e −A T t f T t dt + BB T Letting K = −B T W −1 , we have (A + BK)W + W (A + BK)T = − ( e −At f BB T e −A T t f + BB T ) (*) Suppose λ is an eigenvalue of A +BK. Then λ is an eigenvalue of (A+BK)T , and we let p ≠ 0 be a corresponding eigenvector. Then (A + BK)T p = λ p Also, _ p H (A + BK) = λ p H Pre- and post-multiplying (*) by p H and p, respectively, gives 2Re [λ] p H W p ≤ 0 which implies Re [λ] ≤ 0. Further, if Re [λ] = 0, then p H (e −At f BB T e −A T t f + BB T ) p = 0 Thus p H B = 0, and this gives _ p H AB = p H (A + BK − BK)B = p H (A + BK)B − p H BKB = λ p H B = 0 Continuing this calculation for p H A 2 B, and so on, gives p H B AB . . . A n−1 B which contradicts controllability of the given state equation. -59- =0 Linear System Theory, 2/E Solutions Manual Solution 14.5 (a) For any n × 1 vector x, x H (A + A T ) x = x H A x + x H A T x ≥ −2αm x H x If λ is an eigenvalue of A, and x is a unity-norm eigenvector corresponding to λ, then _ A x = λ x , x HAT = λ x H and we conclude _ λ + λ ≥ −2 αm Therefore any eigenvalue of A satisfies Re [λ] ≥ −αm , and this implies that for α > αm all eigenvalues of A +α I have positive real parts. Therefore all eigenvalues of −(A T +α I) = (−A−α I)T have negative real parts. 
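(Part (a) amounts to the standard bound that every eigenvalue of A has real part no smaller than the smallest eigenvalue of the symmetric part (A + A^T)/2, that is, no smaller than −α_m. A quick numerical check, assuming NumPy; the matrix is a random placeholder:)

```python
# Every eigenvalue of A satisfies Re[λ] >= λ_min((A + A^T)/2).
import numpy as np

A = np.random.default_rng(0).standard_normal((5, 5))
bound = np.linalg.eigvalsh((A + A.T) / 2).min()          # this is -α_m
print(np.linalg.eigvals(A).real.min() >= bound - 1e-12)  # True
```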
(b) Using Theorem 7.11, with α > αm , the unique solution of Q (−A − α I)T + (−A − α I)Q = −BB T (*) is ∞ Q = ∫ e −(A +α I)t BB T e −(A T +α I)t dt 0 Clearly Q is positive semidefinite. If x T Qx = 0, then x T e −(A +α I)t B = 0 , t ≥ 0 and the usual sequential differentiation and evaluation at t = 0 gives a contradiction to controllability. Thus Q is positive definite. (c) Now consider the linear state equation . z (t) = ( A+α I−BB T Q −1 )z (t) (**) Using (*) to write BB T Q −1 gives . T z (t) = −Q (A+α I ) Q −1 z (t) But Q [ −(A + α I)T ]Q −1 has negative-real-part eigenvalues, which proves that (**) is exponentially stable. (d) Invoking Lemma 14.6 gives that . z (t) = ( A−BB T Q −1 )z (t) is exponentially stable with rate α > αm . Solution 14.6 Given a controllable linear state equation . x (t) = A x (t) + Bu (t) by Exercise 13.12 we can choose an m × n matrix K and an m × 1 vector b such that . x (t) = (A+BK )x (t) + (Bb )u (t) is a controllable single-input state equation. By a single-input controller form calculation, it is clear that we can choose a 1 × n gain k that yields a closed-loop state equation with any specified characteristic polynomial. That is, -60- Linear System Theory, 2/E Solutions Manual A+BK+Bbk = A+B (K+bk) has the specified characteristic polynomial. Thus for the original state equation, the feedback law u (t) = (K+bk) x (t) yields a closed-loop state equation with specified characteristic polynomial. Solution 14.8 Without loss of generality we can assume the change of variables in Theorem 13.1 has been performed so that A= A 11 A 12 0 A 22 , B= B1 0 where A 11 is q × q, and rank λ I−A 11 B1 = q for all complex values of λ. Then the eigenvalues of A comprise the eigenvalues of A 11 and the eigenvalues of A 22 . Also, for any complex λ, rank λ I−A B = rank λ I−A 11 0 −A 12 B 1 λ I−A 22 0 = q + rank λ I−A 22 (+) Now suppose rank [λ I−A B ] = n for all nonnegative-real-part eigenvalues of A. Then by (+) any such eigenvalue must be an eigenvalue of A 11 , which implies that all eigenvalues of A 22 have negative real parts. But we can compute an m × q matrix K 1 such that A 11 + B 1 K 1 has negative-real-part-eigenvalues. So setting K = [ K 1 0 ] we have that A + BK = A 11 +B 1 K 1 A 12 0 A 22 has negative-real-part eigenvalues. On the other hand, if there exists a K = [K 1 K 2 ] such that A + BK = A 11 +B 1 K 1 A 12 +B 1 K 2 0 A 22 has negative-real-part eigenvalues, then A 22 has negative-real-part eigenvalues. Thus if Re [λ] ≥ 0, then (+) gives rank λ I−A B = q+n−q = n Solution 14.9 For controllability assume A and B have been transformed to controller form by a state variable change. By Exercise 13.10 this does not alter the controllability indices. Then it is easy to show that A+BLC and B are in controller form with the same block sizes, regardless of L and C. Thus the controllability indices do not change. Similar arguments apply in the case of observability. Solution 14.10 For any L, using properties of the trace, -61- Linear System Theory, 2/E Solutions Manual tr [A+BLC ] = tr [A ] + tr [BLC ] = tr [A ] + tr [CBL ] = tr [A ] >0 Thus at least one eigenvalue of A+BLC has positive real part, regardless of L. Solution 14.12 Write the k th -row of G(s) in terms of the k th -row Ck of C as Ck (sI − A)−1 B = ∞ Ck A j Bs −(j+1) Σ j =0 The k th -relative degree κk is such that, since L Aj [Ck ](t)B (t) = Ck A j B, κ −2 Ck B = . . . 
= Ck A k B = 0 Ck A κk −1 B≠0 Thus in the k th -row of G(s), the minimum difference between the numerator and denominator polynomial degrees among the entries Gk1 (s), . . . , Gkm (s) is κk . -62- CHAPTER 15 Solution 15.2 The closed-loop state equation can be written as . x (t) = Ax(t) + BMz(t) + BNv(t) = Ax(t) + BMz(t) + BNC [Lz(t)+x(t)] . z (t) = Fz(t) + GC [Lz(t)+x(t)] Making the variable change w (t) = x (t)+Lz (t) gives the description . w (t) = Ax(t) + BMz(t) + BNCw(t) + LFz (t) + LGCw (t) = Ax(t) + [BM+LF ]z(t) + [BN+LG ]Cw (t) = [A−HC ]w (t) . z (t) = Fz(t) + GCw(t) Thus the closed-loop state equation in matrix form is . A−HC 0 w (t) = . GC F z (t) w (t) z (t) and the result is clear. Solution 15.4 Following the hint, write for any τ τ+ δ ∫ τ+ δ B (σ)2 ∫ dσ = τ τ Φ(σ, τ)Φ(τ, σ)B (σ)B T (σ)ΦT (τ, σ)ΦT (σ, τ) d σ τ+ δ ≤ ∫ τ Φ(σ, τ)2 Φ(τ, σ)B (σ)B T (σ)ΦT (τ, σ) d σ Since A (t) is bounded, by Exercise 6.6 there is a positive constant γ such that And since τ+ δ ∫ τ Φ(τ, σ)B (σ)B T (σ)ΦT (τ, σ) d σ ≤ ε1 I Exercise 1.21 gives, for any τ, -63- Φ(σ, τ)2 ≤ γ 2 for σ ∈ [τ, τ+δ]. Linear System Theory, 2/E Solutions Manual τ+ δ ∫ τ+ δ B (σ)2 τ dσ ≤ γ2 ∫ Φ(τ, σ)B (σ)B T (σ)ΦT (τ, σ) d σ τ ∆ ≤ γ 2 n ε1 = β1 Now for any τ, and t ∈ [τ+k δ, τ+(k +1)δ], k = 0, 1, . . . , τ+(k +1)δ t ∫ B (σ)2 τ ∫ dσ ≤ ≤ B (σ)2 dσ τ k τ+(j +1)δ Σ +j∫ j =0 τ B (σ)2 dσ δ ≤ (k +1) β1 ≤ [1 + (t−τ)/ δ ] β1 This bound is independent of k, so letting β2 = β1 /δ we have t ∫ B (σ)2 d σ ≤ β1 + β2 (t−τ) τ for all t, τ with t ≥ τ. (Of course this provides a simplification of the hypotheses of Theorem 15.5 for the bounded-A (t) case.) Solution 15.6 Write the given state equation in the partitioned form . za (t) = . zb (t) A 11 A 12 A 21 A 22 y (t) = Ip 0 za (t) + zb (t) B1 B2 u(t) za (t) zb (t) and the reduced-dimension observer and feedback in the form . zc (t) = [ A 22 − H A 12 ] zc (t) + [ B 2 − HB 1 ]u(t) + [ A 21 + (A 22 −H A 12 )H − H A 11 ] za (t) ẑb (t) = zc (t) + Hza (t) u(t) = K 1 za (t) + K 2 ẑb (t) + Nr(t) It is predictable that writing the overall closed-loop state equation in terms of the variables za (t), zb (t), and eb (t) = zb (t)−ẑb (t) is revealing. This gives . za (t) A 11 +B 1 K 1 A 12 +B 1 K 2 −B 1 K 2 za (t) B1N . z (t) = A 21 +B 2 K 1 A 22 +B 2 K 2 −B 2 K 2 zb (t) + B 2 N r(t) b . 0 0 A 22 −HA 12 eb (t) 0 eb (t) y (t) = Ip 0 0 za (t) zb (t) eb (t) Thus we see that the eigenvalues of the closed-loop state equation are provided by the n eigenvalues of A +BK and the (n−p) eigenvalues of A 22 −H A 12 . Furthermore, the block triangular structure gives the closed-loop transfer function as -64- Linear System Theory, 2/E Solutions Manual 0 (sI−A−BK )−1 BN R(s) Y(s) = Ip which is the same as if a static state feedback gain K is used. Solution 15.9 Similar in style to Solution 14.8. Solution 15.10 Since u = Hz + Jv = Hz + JC 2 x + JD 21 r + JD 22 u ∆ we assume that I−JD 22 is invertible, and let L = (I−JD 22 )−1 to write u = LHz + LJC 2 x + LJD 21 r Then, substituting for u, . x = (A+BLJC 2 )x + BLHz + BLJD 21 r . z = (GC 2 +GD 22 LJC 2 )x + (F+GD 22 LH)z + (GD 22 +GD 22 LJD 21 )r y = (C 1 +D 1 LJC 2 )x + D 1 LHz + D 1 LJD 21 r This gives the closed-loop coefficients  = A+BLJC 2 BLH , GC 2 +GD 22 LJC 2 F+GD 22 LH Ĉ = C 1 +D 1 LJC 2 D 1 LH B̂ = BLJD 21 GD 22 +GD 22 LJD 21 , D̂ = D 1 LJD 21 These expressions can be rewritten using L = (I − JD 22 )−1 = I + J (I−D 22 J)−1 D 22 which follows from Exercise 28.2 or is easily verified using the identity in Exercise 28.1. 
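A numerical footnote to this chapter: the block-triangular closed-loop forms in Solutions 15.2 and 15.6 show that observer-based feedback yields exactly the eigenvalues of A + BK together with those of the observer error dynamics. A minimal sketch of the full-order case, assuming SciPy; the matrices, gains, and pole locations are illustrative placeholders:

```python
# Closed-loop eigenvalues = eig(A + BK) together with eig(A - HC).
import numpy as np
from scipy.signal import place_poles

A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])

K = -place_poles(A, B, [-1.0, -2.0]).gain_matrix        # eig(A+BK) = {-1,-2}
H = place_poles(A.T, C.T, [-4.0, -5.0]).gain_matrix.T   # eig(A-HC) = {-4,-5}

# State/estimation-error coordinates give a block-triangular closed loop.
cl = np.block([[A + B @ K, -B @ K],
               [np.zeros((2, 2)), A - H @ C]])
print(np.sort_complex(np.linalg.eigvals(cl)))           # -5, -4, -2, -1
```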
-65- CHAPTER 16 Solution 16.4 By Theorem 16.16 there exist polynomial matrices X (s), Y (s), A (s), and B (s) such that N(s) X(s) + D(s)Y(s) = Ip (*) Na (s) A(s) + Da (s)B(s) = Ip (**) −1 Since D −1 (s)N(s) = D −1 a (s)Na (s), Na (s) = Da (s)D (s)N(s). Substituting this into (**) gives Da (s)D −1 (s)N(s) A(s) + Da (s)B(s) = Ip that is, N(s) A(s) + D(s)B(s) = D(s)D −1 a (s) Similarly, N(s) = D(s)D −1 a (s)Na (s), and substituting into (*) gives Na (s) X(s) + Da (s)Y(s) = Da (s)D −1 (s) −1 −1 Therefore D(s)D −1 both are polynomial matrices, and thus both are unimodular. a (s) and [ D(s)D a (s) ] Solution 16.5 From the given equality, NL (s)D(s) − DL (s)N(s) = 0 and since N (s) and D (s) are right coprime there exist polynomial matrices X(s) and Y(s) such that X(s)D(s) − Y(s)N(s) = I Putting these two equations together gives X(s) Y(s) NL (s) DL (s) D(s) −N(s) = I 0 It remains only to prove unimodularity. Since NL (s) and DL (s) are left coprime, there exist polynomial matrices A(s) and B(s) such that DL (s) A(s) + NL (s)B(s) = I That is, X(s) Y(s) NL (s) DL (s) D(s) B(s) = −N(s) A(s) Multiplying on the right by -66- I X(s)B(s)+Y(s)A(s) 0 I Linear System Theory, 2/E Solutions Manual I −[X(s)B(s)+Y(s)A(s)] 0 I gives X(s) Y(s) NL (s) DL (s) D(s) −D(s)[X(s)B(s)+Y(s)A(s)]+B(s) = I −N(s) N(s)[X(s)B(s)+Y(s)A(s)]+A(s) That is X(s) Y(s) NL (s) DL (s) −1 = D(s) −D(s)(X(s)B(s)+Y(s)A(s))+B(s) −N(s) N(s)(X(s)B(s)+Y(s)A(s))+A(s) which is another polynomial matrix. Thus X(s) Y(s) NL (s) DL (s) is unimodular. Solution 16.7 The relationship (P ρ s+P ρ−1 )−1 = R 1 s+R 0 holds if R 1 and R 0 are such that I = (P ρ s+P ρ−1 ) (R 1 s+R 0 ) = P ρ R 1 s 2 + (P ρ R 0 +P ρ−1 R 1 )s + P ρ−1 R 0 −1 −1 Taking R 0 = P −1 ρ−1 and R 1 = −P ρ−1 P ρ P ρ−1 , it remains to verify that P ρ R 1 = 0. We have I = (P ρ s ρ + . . . + P 0 ) (Q η s η + . . . + Q 0 ) = P ρ Q η s η+ρ + (P ρ Q η−1 +P ρ−1 Q η )s η+ρ−1 + . . . with P ρ−1 and Q η−1 invertible. Therefore PρQη = 0 , P ρ Q η−1 +P ρ−1 Q η = 0 (+) The second equation gives P ρ = −P ρ−1 Q η Q −1 η−1 Then we can write −1 R 1 = Q η Q −1 η−1 P ρ−1 and the first equation in (+) gives P ρ R 1 = 0. In summary, −1 −1 (P ρ s+P ρ−1 )−1 = Q η Q −1 η−1 P ρ−1 s + P ρ−1 and thus P ρ s+P ρ−1 is unimodular. −1 Since N(s)D −1 (s) = Ñ(s)D̃ (s) both are coprime right polynomial fraction descriptions, there exists a unimodular U(s) such that D(s) = D̃(s)U(s). Suppose for some integer 1 ≤ J ≤ m we have Solution 16.10 ck [D ] = ck [D̃] , k = 1, . . . , J−1 ; cJ [D ] < cJ [D̃] Writing D(s) and D̃(s) in terms of columns Dk (s) and D̃k (s) and writing the (i, j)-entry of U(s) as uij (s) give -67- Linear System Theory, 2/E Solutions Manual Dk (s) = D̃ 1 (s)u 1,k (s) + . . . + D̃J (s)uJ,k (s) + . . . + D̃m (s)um,k (s) , k = 1, . . . , m Using a similar column notation for D hc and D l (s) gives D hc k s ck [D ] hc c 1 [D˜ ] + D lk (s) = [D̃ 1 s ˜ l hc c [D ] l +D̃ 1 (s)] u 1,k (s) + . . . + [D̃ J s J +D̃ J (s)] uJ,k (s) ˜ hc c [D ] l + . . . + [D̃ m s m +D̃ m (s)] um,k (s) , k = 1, . . . , m We claim that ck [D ] = max j = 1, . . . , m { c j [D̃]+degree u j,k (s) } hc hc This is shown by a an argument using linear independence of D̃ 1 , . . . , D̃ m as follows. Let c̃ = max j = 1, . . . , m and let µ j,k be the coefficient of s term on the right side is c̃−c j [D˜ ] { c j [D̃]+degree u j,k (s) } in u j,k (s). Then not all the µ j,k are zero, and the vector coefficient of the s c̃ m Σ µ j,k D̃ j j =1 hc By linear independence this sum is nonzero, which implies ck [D ] = c̃. 
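(As a quick aside, the inverse formula in Solution 16.7 is easy to sanity-check numerically. A sketch assuming NumPy, with the illustrative nilpotent choice P_ρ = N, P_{ρ−1} = I, for which R_0 = I and R_1 = −N:)

```python
# Check (P_ρ s + P_{ρ-1})(R_1 s + R_0) = I at a few values of s.
import numpy as np

N = np.array([[0.0, 1.0], [0.0, 0.0]])   # placeholder with N @ N = 0
I = np.eye(2)
for s in (0.5, 1.0, 3.0):
    print(np.allclose((N * s + I) @ (-N * s + I), I))   # True each time
```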
Now, using the definition of J, ck [D ] < cJ [D̃] ≤ . . . ≤ cm [D̃] , k = 1, . . . , J−1 and this implies uJ,k (s) = . . . = um,k (s) = 0. Thus U (s) has the form U (s) = Ua (s) 0(m−J+1) × J Ub (s) Uc (s) where Ua (s) is (J−1) × J, from which rank U (s) ≤ m−1 for all values of s. This contradicts unimodularity, Thus cJ [D ] = cJ [D̃]. The proof is complete since the roles of D (s) and D̃(s) can be reversed. -68- CHAPTER 17 Solution 17.1 If . x (t) = A x (t) + Bu (t) y (t) = C x (t) T is a realization of G (s), then . z (t) = A T x (t) + C T v (t) w (t) = B T z (t) is a realization for G (s) since T T G (s) = [ G T (s) ] = [ C (sI − A)−1 B ] = B T (sI − A T )−1 C T Furthermore, easy calculation of the controllability and observability matrices of the two realizations shows that one is minimal if and only if the other is. Now, if N (s) and D (s) give a coprime left polynomial fraction description for G(s), then there exist polynomial matrices X (s) and Y (s) such that N (s) X (s) + D(s)Y(s) = I Therefore X T (s)N T (s) + Y T (s)D T (s) = I which implies that N T (s) and D T (s) are right coprime. Also, since D(s) is row reduced, D T (s) is column reduced. Thus we can write down a controller-form minimal realization for G T (s) = N T (s)[ D T (s) ]−1 as per Theorem 17.4, and this provides a minimal realization for G (s) by the correspondence above. Solution 17.3 Proof of Theorem 17.7: From Theorem 13.17 we have Q −1 AQ = A To + Q −1 VBTo , CQ = SB To Transposing (6) gives ∆(s)B To = sΨT (s) − ΨT (s)A To and (13) implies ∆(s) = D (s)S + ΨT (s)Q −1 V Substituting into (+) gives D (s)SB To = ΨT (s) [ sI − A To − Q −1 VBTo ] -69- (+) Linear System Theory, 2/E Solutions Manual Therefore SB To [ sI − A To − Q −1 VBTo ] −1 = D −1 (s)ΨT (s) Using the definition of N (s), −1 D −1 (s)N (s) = SB To [ sI − (A To + Q −1 VBTo ) ] Q −1 B −1 = CQ [ sI − Q −1 AQ ] Q −1 B = C (sI − A)−1 B Note that D (s) is row reduced since Dlr = S −1 , which is invertible. Finally, if the state equation is controllable as well as observable, hence minimal, then it is clear from the definition of D (s) that the degree of the polynomial fraction description equals the dimension of the minimal realization. Therefore D −1 (s)N (s) is a coprime left polynomial fraction description. Solution 17.5 Suppose there is a nonzero h with the property that for each uo there is an xo such that t hCe At xo + ∫ hCe A (t−σ) Buo e so σ dσ = 0 , t ≥ 0 0 Suppose G (s) = N (s)D −1 (s) is a coprime right polynomial fraction description. Then taking Laplace transforms gives hC (sI − A)−1 xo + hN (s)D −1 (s)uo (s−so )−1 = 0 that is, (s−so )hC (sI − A)−1 xo + hN (s)D −1 (s)uo = 0 If so is not a pole of G (s), then D (so ) is invertible. Thus evaluating at s = so gives hN (so )D −1 (so )uo = 0 and we have that if so is not a pole of G (s), then for every ũo hN (so )ũo = 0 Thus hN (so ) = 0, that is rank N (so ) < p < m, which implies that so is a transmission zero. Conversely, suppose so is a transmission zero that is not a pole of G (s). Then for a right-coprime polynomial fraction description G (s) = N (s)D −1 (s) we have that D (so ) is invertible, and rank N (so ) < p < m. Thus there exists a nonzero 1 × p vector h such that hN (so ) = 0. 
Using the identity (just as in the proof of Theorem 17.13) (so I − A)−1 (s − so )−1 = (sI − A)−1 (so I − A)−1 + (sI − A)−1 (s−so )−1 we can write for any uo and the choice xo = (so I − A)−1 Buo , L t hCe At xo + ∫ hCe A (t−σ) Buo e 0 so σ d σ = hN (so )D −1 (so )uo (s−so )−1 = 0 That is, h has the property that for any uo there is an xo such that t hCe xo + ∫ hCe A (t−σ) Buo e At 0 -70- so σ dσ = 0 , t ≥ 0 Linear System Theory, 2/E Solutions Manual Solution 17.9 Using a coprime right polynomial fraction description G (s) = N (s)D −1 (s) = N (s) adj D (s) ____________ det D (s) suppose for some i, j and complex so we have ∞ = Gij (so ) = [ N (so ) adj D (so ) ]ij ___________________ det D (so ) Since the numerator is the magnitude of a polynomial, it is finite for every so , and this implies det D (so ) = 0, that is, so is a pole of G (s). Now suppose so is such that det D (so ) = 0. By coprimeness of the right polynomial fraction description N (s)D −1 (s), there exist polynomial matrices X (s) and Y (s) such that X(s)N(s) + Y(s)D(s) = Im for all s. Therefore [ X(s)G(s) + Y(s) ] D(s) = Im for all s, and thus det [ X(s)G(s) + Y(s) ] det D(s) = 1 for all s. This implies that at s = so we must have det [ X(so )G (so ) + Y (so ) ] = ∞ Since the entries of the polynomial matrices X (so ) and Y (so ) are finite, some entry of G (so ) must have infinite magnitude. -71- CHAPTER 18 Solution 18.2 (a) If x ∈ A (A −1 V ), then clearly x ∈ Im [A ], and there exists y ∈ A −1 V such that x = Ay, which implies x ∈ V. Therefore A (A −1 V ) ⊂ V ∩ Im [A ]. Conversely, suppose x ∈ V ∩ Im [A ]. Then x ∈ Im [A ] implies there exists y such that x = Ay, and x ∈ V implies y ∈ A −1 V. Thus x ∈ A (A −1 V ), that is, V ∩ Im [A ] ⊂ A (A −1 V ). (b) If x ∈ V + Ker [A ], then we can write x = xa + xb , xa ∈ V , xb ∈ Ker [A ] −1 and Ax = Axa ∈ AV. Thus x ∈ A (AV ), which gives V + Ker [A ] ⊂ A −1 (AV ). Conversely, if x ∈ A −1 (AV ), then there exists y ∈ V such that Ax = Ay, that is, A (x−y) = 0. Thus writing x = y + (x−y) ∈ V + Ker [A ] −1 gives A (AV ) ⊂ V + Ker [A ]. (c) If AV ⊂ W, then using (b) gives A −1 (AV ) = V + Ker [A ] ⊂ A −1 W. Thus V ⊂ A −1 W. Conversely, V ⊂ A −1 W implies, using (a), AV ⊂ A (A −1 W ) = W ∩ Im [A ] Therefore AV ⊂ W. Solution 18.4 For x ∈ Wa ∩ V + Wb ∩ V, write x = x a + x b , x a ∈ Wa ∩ V , x b ∈ Wb ∩ V Then xa , xb ∈ V, and xa ∈ Wa , xb ∈ Wb , which imply xa + xb ∈ V and xa + xb ∈ Wa + Wb , that is, x = xa + xb ∈ (Wa + Wb ) ∩ V and we have shown that Wa ∩ V + Wb ∩ V ⊂ (Wa +Wb ) ∩ V For the second part, if Wa ⊂ V, then x ∈ (Wa + Wb ) ∩ V implies x ∈ V and x ∈ Wa + Wb . We can write x = x a + x b , x a ∈ Wa ⊂ V , x b ∈ Wb But x − xa = xb ∈ V, so we have x ∈ Wa + Wb ∩ V. This gives (Wa + Wb ) ∩ V ⊂ Wa + Wb ∩ V The reverse containment follows from the first part since Wa ⊂ V implies Wa = Wa ∩ V. -72- Linear System Theory, 2/E Solutions Manual Solution 18.9 Clearly C <A | B> = Y if and only if rank C B AB . . . A n−1 B =p and thus the proof involves showing that the rank condition is equivalent to positive definiteness of tf ∫ Ce A (t −t) BB T e A (t −t) C T dt T f f 0 This is carried out in Solution 9.11. Solution 18.10 We show equivalence of the negations. First suppose 0 ≠ V ⊂ Ker [C ] is a controlled invariant subspace. 
Then picking a friend F of V we have (A + BF)V ⊂ V ⊂ Ker [C ] Selecting 0 ≠ xo ∈ V, this gives e (A + BF)t xo ∈ V , t ≥ 0 and thus Ce (A + BF)t xo = 0 , t ≥ 0 Thus the closed-loop state equation is not observable, since the zero-input response to xo ≠ 0 is identical to the zero-input response to the zero initial state. Conversely, suppose the closed-loop state equation is not observable for some F. Then n−1 N = ∩ Ker [C (A + BF)k ] ≠ 0 k =0 Thus 0≠xo ∈ N implies, using the Cayley-Hamilton theorem, 0 = Cxo = C (A + BF) xo = C (A + BF)2 xo = . . . That is, (A + BF) xo ∈ N, which gives (A + BF)N ⊂ N. Clearly N ⊂ Ker [C ], so N is a nonzero controlled invariant subspace contained in Ker [C ]. Let P 1 , . . . , Pq be a basis for B ∩ R = Im [B 1 ] + . . . + Im [Bq ] , P 1 , . . . , Pr be a basis for R , P 1 , . . . , Pc be a basis for <A | B> , P 1 , . . . , Pn be a basis for X . Then for i = 1, . . . , q, Bi ∈ B ∩ R , and for i = q +1, . . . , m, Bi ∉ R , Bi ∈ <A | B>. Thus P −1 B = B̂ has the form Solution 18.11 B̂ = B̂ 11 B̂ 12 B̂ 22 B̂ 32 0(r−q) × q 0(c−r) × q 0(n−c) × q 0(n−c) × (m−q) If B 1 , . . . , Bq are linearly independent and we choose P j = B j , j = 1, . . . , q, then B̂ 11 = Iq . Finally, since <A | B> is invariant for A,  11  12 P −1 AP = 0c × (n−c)  22 -73- CHAPTER 19 Solution 19.1 First we show ( W + S )⊥ = W⊥ ∩ S⊥ An n × 1 vector x satisfies x ∈ ( W + S ) ⊥ if and only if x T (w + s) = 0 for all w ∈ W and s ∈ S . This is equivalent to x T w + x T s = 0 for all w ∈ W and s ∈ S , and by taking first s = 0 and then w = 0 this is equivalent to x T w = 0 for all w ∈ W and x T s = 0 for all s ∈ S . These conditions hold if and only if x ∈ W ⊥ and x ∈ S ⊥ , that is, x ∈ W ⊥ ∩ S ⊥. Next we show ( A T S ) ⊥ = A −1 S ⊥ An n × 1 vector x satisfies x ∈ ( A T S ) ⊥ if and only if x T y = 0 for all y ∈ A T S , which holds if and only if x T A T z = 0 for all z ∈ S, which is the same as (Ax)T z = 0 for all z ∈ S , which is equivalent to Ax ∈ S ⊥ , which is equivalent to x ∈ A −1 S ⊥ . Finally we prove that ( S ⊥ ) ⊥ = S . It is easy to show that S ⊂ ( S ⊥ ) ⊥ since x ∈ S implies y T x = 0 for all ⊥ y ∈ S , that is, x T y = 0 for all y ∈ S ⊥ , which implies x ∈ ( S ⊥ ) ⊥ . To show ( S ⊥ ) ⊥ ⊂ S , suppose 0 ≠ x ∈ ( S ⊥ ) ⊥ . Then for all y ∈ S ⊥ we have x T y = 0. That is, if y T z = 0 for all z ∈ S , then x T y = 0. Equivalently, if z T y = 0 for all z ∈ S , then x T y = 0. Thus Ker z T = Ker zT , xT for all z ∈ S (*) This implies x ∈ S , for if not, then for any z ∈ S , rank z T < rank zT xT By the matrix fact in the Hint, this implies dim Ker zT < dim Ker xT zT which contradicts (*). Solution 19.2 By induction we will show that (W k ) ⊥ = V k , where V k is generated by the algorithm for V * in Theorem 19.3: -74- Linear System Theory, 2/E Solutions Manual V0 = K V k +1 = K ∩ A −1 (V k + B ) = V k ∩ A −1 (V k + B ) For k = 0 the claim becomes ( K ⊥ ) ⊥ = K , which is established in Exercise 19.1. So suppose for some nonegative integer K we have (W K ) ⊥ = V K . Then, using Exercise 19.1, (W K +1 ) ⊥ = W K + AT[ W K ∩ B ⊥ ] = (W K ) ⊥ ∩ = VK ∩ ⊥ A T (W K ∩ B ⊥ ) ⊥ A T [ (V K ) ⊥ ∩ B ⊥ ] ⊥ But further use of Exercise 19.1 gives (V K ) ⊥ ∩ B ⊥ AT ⊥ = A −1 (V K ) ⊥ ∩ B ⊥ ⊥ = A −1 (V K + B) Thus (W K +1 ) ⊥ = V K ∩ A −1 (V K + B) = V K +1 This completes the induction proof, and gives V * = V n = (W n ) ⊥ . Solution 19.4 We establish the Hint by induction, for F a friend of V *. 
For k = 1, k Σ (A + BF) j−1 (B ∩ V *) = B ∩ V * = V * ∩ (A .0 + B ) j =1 = R1 Assume now that for some positive integer K we have K (A + BF) j−1 (B ∩ V *) = R K = V * ∩ (AR K−1 ∩ B ) Σ j =1 Then K +1 K (A + BF) j−1 (B ∩ V *) = B ∩ V * + (A + BF) Σ (A + BF) j−1 (B ∩ V *) Σ j =1 j =1 = B ∩ V * + (A + BF)R K From the algorithm, R ⊂ R ⊂ V *, thus K n (A + BF)R K ⊂ (A + BF)V * ⊂ V * Using the second part of Exercise 18.4 gives B ∩ V * + (A + BF)R K = [ B + (A + BF)R K ] ∩ V * Since (A + BF)R K + B = AR K + B, the right side of (+) can be rewritten as B ∩ V * + (A + BF)R K = V * ∩ [ AR K + B ] = R K +1 This completes the induction proof of the Hint, and Theorem 19.6 gives R * = R n . -75- (+) Linear System Theory, 2/E Solutions Manual Solution 19.7 The closed-loop state equation . x (t) = (A + BF)x (t) + (E + BK)w (t) + BGv (t) y (t) = Cx (t) is disturbance decoupled if and only if C (sI − A − BF)−1 (E + BK) = 0 That is, if and only if <A +BF Im [E +BK ]> ⊂ Ker [C ] (*) Thus we want to show that there exist F and K such that (*) holds if and only if Im [E ] ⊂ V * + B, where V * is the maximal controlled invariant subspace contained in Ker [C ] for the plant. First suppose F and K are such that (*) holds. Since <A +BF Im [E +BK ]> is invariant under (A + BF), it is a controlled invariant subspace contained in Ker [C ] for the plant. Then Im [E +BK ] ⊂ <A +BF Im [E +BK ]> ⊂ V * That is, for any x ∈ X there is a v ∈ V * such that (E + BK)x = v. Therefore Ex = v + B (−K x) which implies Im [E ] ⊂ V * + B. Conversely, suppose Im [E ] ⊂ V * + B, where V * is the maximal controlled invariant subspace contained in Ker [C ] for the plant. We first show how to compute K such that Im [E +BK ] ⊂ V *. Then we can pick any friend F of V * and the proof will be finished since we will have <A +BF Im [E +BK ]> ⊂ V * ⊂ Ker [C ] If w 1 , . . . , wq is a basis for W, then there exist v 1 , . . . , vq ∈ V * and u 1 , . . . , uq ∈ U such that Ew j = v j + Bu j , j = 1, . . . , q Let K = −u 1 . . . −uq w 1 . . . wq −1 Then (E + BK)w j = Ew j + BKw j = v j + Bu j + B −u 1 . . . −uq e j = v j , j = 1, . . . , q That is, K is such that Im [E + BK ] ⊂ V * Solution 19.11 Note first that span { pr +1 , . . . , pn } = R 2 * Since R 1 * ⊂ K 1 = Ker [C 2 ] and R 2 * ⊂ K 2 = Ker [C 1 ], we have that in the new coordinates, -76- Linear System Theory, 2/E Solutions Manual Ĉ 1 = C 1 P = Ĉ 11 0 0 Ĉ 2 = C 2 P = 0 Ĉ 11 0 Since Im [BG 1 ] ⊂ B ∩ R 1 * ⊂ R 1 * and BG 1 = PB̂ 1 we have B̂ 1 = B̂ 11 0 B̂ 13 0 B̂ 22 B̂ 23 Similarly, Im [BG 2 ] ⊂ B ∩ R 2 * ⊂ R 2 * gives B̂ 2 = Finally, (A + BF)R i * ⊂ R i *, i = 1, 2, and (A + BF)P = P give  =  11 0 0 0  22 0  31  32  33 That is, with z (t) = P −1 x (t), the closed-loop state equation takes the partitioned form . za (t) =  11 za (t) + B̂ 11 r 1 (t) . zb (t) =  22 zb (t) + B̂ 22 r 2 (t) . zc (t) =  31 za (t) +  32 zb (t) +  33 zc (t) + B̂ 13 r 1 (t) + B̂ 23 r 2 (t) y 1 (t) = Ĉ 11 za (t) y 2 (t) = Ĉ 12 zb (t) -77- CHAPTER 20 Solution 20.1 A sketch shows that v (t) is a sequence of unit-height rectangular pulses, occurring every T seconds, with the width of the k th pulse given by k/5, k = 0, . . . , 5. 
This is a piecewise-continuous (actually, piecewise-constant) input, and the continuous-time solution formula gives t z (t) = e F (t−to ) z (to ) + ∫ e F (t−σ) Gv (σ) d σ to Evaluate this at t = (k +1)T and to = kT to get (k +1)T z [(k +1)T ] = e FT z (kT) + ∫ e F (kT +T−σ) Gv (σ) d σ kT Let τ = kT+T−σ in the integral, to obtain T z [(k +1)T ] = e FT z (kT) + ∫ e F τ Gv (kT +T−τ) d τ 0 Then the special form of v (t) gives T z [(k +1)T ] = e FT z (kT) + ∫ e F τ G d τ sgn [u (k)] T−u (k)T The integral term is not linear in the input sequence u (k), so we approximate the integral when u (k) is small. Changing integration variable to γ = T−τ, another way to write the integral term is u (k)T e FT ∫ e −F γ G d γ sgn [u (k)] 0 For u (k) small, u (k)T ∫ 0 u (k)T e −F γ d γ = ∫ ( I−F γ+ . . . ) d γ ∼ u (k)T I 0 Then since u (k) sgn [u (k)] = u (k), this gives the approximate, linear, discrete-time state equation. z [(k +1)T ] = e FT z (kT) + e FT T u (k) Solution 20.4 For a constant nominal input u (k) = ũ, constant nominal solutions are given by -78- Linear System Theory, 2/E Solutions Manual x̃ = ũ 2 ũ ỹ = ũ , 2 Easy calculation gives the linearized state equation x δ (k +1) = −1 0 x (k) + 0 −1 δ 2 u δ (k) 4ũ −1 x δ (k) + 2ũ u δ (k) y δ (k) = 2ũ Since A k = (−1)k I and CB = 0, the zero-state solution formula easily gives y δ (k) = 2ũ u δ (k) Thus the zero-state behavior of the linearized state equation is that of a pure gain. Solution 20.10 Φ(k, j): Computing Φ( j +q, j) for the first few values of q ≥ 0 easily leads to the general formula for 0 a 1 (k−1)a 2 (k−2)a 1 (k−3)a 2 (k−4) . . . a 1 ( j) , . . . a 2 (k−1)a 1 (k−2)a 2 (k−3)a 1 (k−4) a 2 ( j) 0 a 1 (k−1)a 2 (k−2)a 1 (k−3)a 2 (k−4) . . . a 2 ( j) 0 0 , a 2 (k−1)a 1 (k−2)a 2 (k−3)a 1 (k−4) . . . a 1 ( j) Solution 20.11 By definition, for k ≥ j +1, ΦF (k, j) = F (k−1)F (k−2) . . . F ( j +1)F ( j) = A T (1−k)A T (2−k) . . . A T (−1−j)A T (−j) , k ≥ j +1 Therefore, for k ≥ j +1, Φ TF (k, j) = A (−j)A (−j−1) . . . A (−k+2)A (−k+1) However, for −j+1 ≥ −k+2, that is, k ≥ j +1, ΦA (−j+1, −k+1) = A (−j)A (−j−1) . . . A (−k+2)A (−k+1) and a comparison gives ΦA T (−k) (k, j) = Φ AT (k) (−j+1, −k+1) , k ≥ j +1 Solution 20.14 For k ≥ k 1 +1 ≥ ko +1 we can write, somewhat cleverly, Φ(k, k−1 ko ) (k−k 1 ) = Σ Φ(k, ko ) Φ(k, j) Φ( j, k) j =k 1 ≤ k−1 Σ j =k 1 Clearly this gives -79- k−j odd, ≥ 1 k−j even, ≥ 1 Linear System Theory, 2/E Φ(k, Solutions Manual ko ) ≤ 1 _____ k−k 1 k−1 Σ j) Φ( j, k) , k ≥ k 1 +1 ≥ ko +1 Φ(k, j =k 1 Solution 20.16 Given A (k) and F we want P (k) to satisfy F = P −1 (k +1)A (k)P (k) for all k. Assuming F is invertible and A (k) is invertible for every k, it is easy to verify that P (k) = ΦA (k, 0)F −k is the correct choice. Obviously if F = I, then the variable change is P (k) = ΦA (k, 0). Using this in Example 20.19, where A (k) = 1 a (k) 0 1 gives k−1 P (k) = ΦA (k, 0) = 1 Σ a (i) i =0 0 1 , k ≥1 and k−1 P −1 (k +1) = ΦA (0, k +1) = Then an easy multiplication verifies the property. -80- 1 − Σ a (i) i =0 0 1 , k ≥0 CHAPTER 21 Solution 21.3 Using z-transforms, (zI − A) −1 = z −1 12 z+7 −1 1 _________ 2 z +7z+12 = z +7 1 −12 z and Y (z) = zc(zI−A)−1 xo + c(zI−A)−1 b U (z) = z _________ −z−19 z−1 z 2 +7z+12 z z−1 ____ 1/ 20 _________ + 2 1/ 20 z +7z+12 z−1 =0 Therefore the complete solution is y (k) = 0, k ≥ 0. Solution 21.4 First compute the corresponding discrete-time state equation x ([(k +1)T ] = Fx (kT) + gu (kT) y (kT) = hx (kT) 2 Using A = 0, it is easy to compute F=e AT = T g = ∫ e Aσ b d σ = 1 T , 0 1 0 T2/2 T and h = c. 
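(These formulas are easy to confirm numerically through the standard augmented-matrix exponential, which carries e^{AT} and ∫_0^T e^{Aσ} b dσ in its top blocks. A minimal sketch, assuming SciPy; T = 0.5 is an arbitrary sample period:)

```python
# Zero-order-hold discretization of the double integrator via expm.
import numpy as np
from scipy.linalg import expm

T = 0.5
A = np.array([[0.0, 1.0], [0.0, 0.0]])
b = np.array([[0.0], [1.0]])

M = expm(np.block([[A, b], [np.zeros((1, 3))]]) * T)
F, g = M[:2, :2], M[:2, 2:]
print(F)   # [[1, T], [0, 1]]
print(g)   # [[T^2/2], [T]]
```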
The transfer functions are Y (s) _____ = c (sI−A)−1 b = 0 1 U (s) s −1 −1 0 = 1/ s 0 s 1 and Z [y (kT)] _________ = h (zI−F)−1 g = 0 1 Z [u (kT)] z−1 −T −1 T 2 / 2 = T / (z−1) T 0 z−1 Solution 21.7 (a) The solution formula gives, using a standard formula for a finite geometric sum, -81- Linear System Theory, 2/E Solutions Manual k−1 x (k) = (1+r/ l)k xo + Σ (1+r / l)k−j−1 b j =0 = (1+r/ l)k xo + b (1+r / l)k−1 1−1/(1+r / l)k ____________ 1−1/(1+r / l) = (1+r/ l)k (xo +bl / r) − bl / r (b) In one year a deposit xo yields x (l) = (1+r / l)l xo so effective interest rate = (1+r / l)l xo − xo _____________ × 100% = [(1+r / l)l − 1] × 100% xo For r = 0.05, l = 2, the effective interest rate is 5.06%. For r = 0.05, l = 12, the effective interest rate is 5.12%. (c) Set 0 = x (19) = (1.05)19 xo + 50,000 (−50,000) _______ _________ + 0.05 0.05 and solve to obtain xo = $604,266. Of course this means you have actually won only $654,266, but congratulations remain appropriate. Solution 21.9 With T = Td / l and v (t) = v (kT), kT ≤ t ≤ (k +1)T, evaluate the solution formula t z (t) = e F (t−τ) z (τ) + ∫ e F (t−σ) Gv (σ−Td ) d σ , t ≥ T τ at t = (k +1)T, τ = kT to obtain T z [(k +1)T ] =e FT z (kT) + ∫ e F τ d τ G v [(k−l)T ] 0 ∆ = Az (kT) + Bv [(k−l)T ] Defining x (k) = z (kT) v [(k−l)T ] . , . . v [(k−1)T ] u (k) = v (kT) , we get -82- ŷ(k) = y (kT) Linear System Theory, 2/E Solutions Manual x (k +1) = A 0 . . . 0 0 B 0 . . . 0 0 0 1 . . . 0 0 ... ... . . . ... ... 0 0 . . x (k) + . 1 0 0 0 . . u (k) , . 0 1 x (0) = z (0) v (−lT) . . . v (−2T) v (−T) ŷ(k) = C 0 . . . 0 x (k) The dimension of the initial state is n+l. The transfer function of this state equation is the same as the transfer function of z (k +1) = Az (k) + Bu (k−l) y (k) = Cz (k) Taking the z-transform, using the right shift property, gives Y (z) = C (zI−A)−1 Bz −l U (z) Solution 21.12 Easy calculation shows that for Ma = 1 0 , Mb = 0 0 0 1 0 0 Ma has a square root, with √Ma = Ma , but Mb does not. Solution 21.13 By Lemma 21.6, given any ko there is a K-periodic solution of the forced state equation if and only if there is an xo satisfying [I − Φ(ko +K, ko )]xo = ko +K−1 Σ Φ(ko +K, j +1)f ( j) (*) j =ko Similarly there is a K-periodic solution of the unforced state equation if and only if there is a zo satisfying [I − Φ(ko +K, ko )]zo = 0 (**) Since there is no zo ≠ 0 satisfying (**), it follows that [I−Φ(ko +K, ko )] is invertible. This implies that for each ko there exists a unique xo satisfying (*). For this xo the forced state equation has a K-periodic solution. However, if there is a zo ≠ 0 satisfying (**), (*) might still have a solution if the right side is in the range of [I−Φ(ko +K, ko )]. Solution 21.14 Since the forced state equation has no K-periodic solutions, for any ko there is by Exercise 21.13 a zo ≠ 0 such that the solution of z (k +1) = A (k)z (k) , z (ko ) = zo is K-periodic. Thus by Lemma 21.6, [I − Φ(ko +K, ko )]zo = 0 and therefore [I − Φ(ko +K, ko )] is not invertible. Since there are no solutions to [I − Φ(ko +K, ko )]xo = ko +K−1 Σ j =ko -83- Φ(ko +K, j +1)f ( j) Linear System Theory, 2/E Solutions Manual we have by linear algebra that there exits a nonzero, n × 1 vector p such that [I − Φ(ko +K, ko )]T p = 0 and ko +K−1 pT Σ j =k ∆ Φ(ko +K, j +1)f ( j) = q ≠ 0 o Now pick any xo . Then it is easy to show that the corresponding solution satisfies p T x(ko +jK) = p T xo +jq, j = 1, 2, . . . . This shows that the solution is unbounded. -84- CHAPTER 22 Solution 22.1 Similar to Solution 6.1. 
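(Looking back at Solution 21.7, its numerical claims are easy to reproduce in a few lines of plain Python; the output matches the 5.06%, 5.12%, and $604,266 quoted there:)

```python
# Effective interest rates and the lottery present value of Solution 21.7.
r = 0.05
for l in (2, 12):
    print(f"l = {l:2d}: effective rate {((1 + r/l)**l - 1) * 100:.2f}%")

xo = 1e6 * (1 - 1.05 ** (-19))   # from 0 = 1.05^19 (xo - 10^6) + 10^6
print(f"required deposit ${xo:,.0f}")
```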
Solution 22.4 If the state equation is uniformly exponentially stable, then there exist γ ≥ 1 and 0 ≤ λ < 1 such that Φ(k, j) ≤ γ λk−j , k ≥ j Equivalently, for every k, Φ(k +j, k) ≤ γ λ j , j ≥ 0 which implies φ j = sup Φ(k +j, k) ≤ γ λ j k Then 1/j 1/j lim φ 1/j j = lim (γ λ) = λ lim γ j→∞ j→∞ j→∞ ≤λ <1 Now suppose lim (φ j )1/j < 1 j→∞ Picking 0 < ε < 1 there exists a positive integer J such that φ 1/j j < 1−ε , j ≥ J Let λ = 1−ε and γ= 1 ___ max [ max φ j , 1 ] J λ 1≤ j ≤J Then for j ≤ J, Φ(k +j, k) ≤ sup Φ(k +j, j) = φ j k ≤ max φ j ≤ γ λJ 1≤ j ≤J ≤ γ λj -85- Linear System Theory, 2/E Solutions Manual Similarly, for j > J, Φ(k +j, k) ≤ sup Φ(k +j, j) k = φj < (1−ε) j = λ j ≤ γ λj This implies uniform exponential stability. Solution 22.6 For λ = 0 the problem is trivial, so suppose λ ≠ 0 and write k k λk = k λk = k ( e lnλ ) , k ≥ 0 Let η = −lnλ, so that η > 0 since λ < 1. Then max k λk ≤ max t e −η t k≥0 t≥0 and a simple maximization argument (as in Exercise 6.10) gives max te −η t ≤ t≥0 1 ___ ηe Therefore k λk ≤ 1 ∆ ________ =β , k ≥ 0 −e ln λ To get a decaying exponential bound, write k λk = k ( √λ )k ( √λ )k = 2 β ( √λ )k , k ≥ 0 Then ∞ Σ k λk ≤ k =0 2β _______ 1− √λ For j > 1 write k j λk = k ( λ1/j +1 )k . . . k ( λ1/j +1 )k . ( λ1/j +1 )k and proceed as above. Solution 22.7 Use the fact from Exercise 20.11 that ΦA T (−k) (k, j) =Φ AT (k) (−j +1, −k +1) , k ≥ j Then A (k) is uniformly exponentially stable if and only if there exist γ ≥ 1 and 0 ≤ λ < 1 such that ΦA (k) (k, j) ≤ γ λk−j , k ≥ j T Φ A (k) (k, j) ≤ γ λk−j , k ≥ j This is equivalent to which is equivalent to T Φ A (−k) (−j +1, −k +1) ≤ γ λ(−j +1)−(−k +1)−j , −j +1 ≥ −k +1 -86- Linear System Theory, 2/E Solutions Manual which is equivalent to ΦA T (−k) (k, j) ≤ γ λk−j , k ≥ j which is equivalent to uniform exponential stability of A T (−k). However for the case of A T (k), consider the example where A (k) is 3-periodic with A (0) = 0 2 , A (1) = 1/ 2 0 0 1/ 2 , A (2) = 1/ 2 0 Then ΦA (k) (3, 0) = 1/ 2 0 0 1/ 2 and it is easy to conclude uniform exponential stability. However ΦA T (k) (3, 0) = and it is easy to see that there will be unbounded solutions. -87- 2 0 0 1/ 8 2 0 0 1/ 2 CHAPTER 23 With Q = qI, where q > 0 we compute A T (k)QA (k)−Q to get the sufficient condition for uniform exponential stability: Solution 23.1 a 21 (k), a 22 (k) ≤ 1− ν __ , ν>0 q Thus the state equation is uniformly exponentially stable if there exists a constant α < 1 such that for all k a 1 (k), a 2 (k) ≤ α With Q= q1 0 0 q2 where q 1 , q 2 > 0, the sufficient condition for uniform exponential stability becomes existence of a constant ν > 0 such that for all k, a 21 (k) ≤ q 2 −ν _____ , q1 a 22 (k) ≤ q 1 −ν _____ q2 These conclusions show uniform exponential stability under weaker conditions, where one bounded coefficient can be larger than unity if the other bounded coefficient is suitably small. For example, suppose sup | a 2 (k) = α < ∞. Then we can take q 1 = α2 +0.01, q 2 = 1, and ν = 0.01 to conclude uniform exponential k stability if a 21 (k) ≤ 0.99/ (α2 +0.01) for all k. Solution 23.4 Using the transition matrix computed in Exercise 20.10, an easy computation gives that ∞ Q (k) = I + Σ ΦT ( j, k)Φ( j, k) j =k +1 is a diagonal matrix with q 11 (k) = 1 + a 22 (k) + a 21 (k +1)a 22 (k) + a 22 (k +2)a 21 (k +1)a 22 (k) + a 21 (k +3)a 22 (k +2)a 21 (k +1)a 22 (k) + . . . q 22 (k) = 1 + a 21 (k) + a 22 (k +1)a 21 (k) + a 21 (k +2)a 22 (k +1)a 21 (k) + a 22 (k +3)a 21 (k +2)a 22 (k +1)a 21 (k) + . . . 
Since this Q (k) is guaranteed to satisfy I ≤ Q (k) and A T (k)Q (k)A (k)−Q (k) ≤ −I for all k, a sufficient condition for uniform exponential stability is existence of a constant ρ such that q 11 (k), q 22 (k) ≤ ρ for all k. Clearly this -88- Linear System Theory, 2/E Solutions Manual holds if a 21 (k), a 22 (k) ≤ α < 1 for all k, but it also holds under weaker conditions. For example suppose the αbound is violated only for k = 0, and a 21 (0) > 1 , a 21 (0)a 22 (1) < α Then we can conclude uniform exponential stability. (More sophisticated analyses should be possible . . . .) Solution 23.6 If the state equation is exponentially stable, then by Theorem 23.7 there is for any symmetric M a unique symmetric Q such that A T QA − Q = −M Write M= m1 m2 m2 m3 , Q= q1 q2 q2 q3 and write the discrete-time Lyapunov equation as the vector equation 0 a 20 −1 0 −1−a 0 a 0 0 1 −2 q1 q2 q3 = −m 1 −m 2 −m 3 The condition det 0 a 20 −1 0 −1−a 0 a 0 0 1 −2 ≠0 reduces to the condition a 0 ≠ 0, 1, −2. Assuming this condition we compute Q for M = I, and use the fact that Q > 0 since M > 0. The expression q1 q2 q3 = −1 0 a 20 0 −1−a 0 a 0 1 −2 0 −1 −1 0 −1 gives Q= 1 ______________ a 0 (a 0 +2)(a 0 −1) −a 0 (a 20 +a 0 +2) −2a 0 −2a 0 −2(a 0 +1) By Sylvester’s criterion, Q > 0 if and only if a 0 (a 0 +2) > 0 , (1−a 0 )(a 0 +2) > 0 (+) Note that these conditions subsume the conditions assumed above. Now suppose the conditions in (+) hold. Then for M = I > 0 there is a solution Q > 0 to the discrete-time Lyapunov equation. Thus the state equation is exponentially stable. That is, the conditions in (+) are necessary and sufficient for exponential stability. Solution 23.10 Suppose λ is an eigenvalue of A with eigenvector p. Then since M, Q ≥ 0 satisfy A T QA − Q = −M we have -89- Linear System Theory, 2/E Solutions Manual p H A T QAp − p H Qp = −p H Mp That is, ( λ2 −1 )p H Qp = −p H Mp If p H Mp > 0, then λ2 −1 < 0, which gives λ < 1. But suppose p H Mp = 0. Then for k ≥ 0, _ 0 = λ2k p H Mp = λk p H Mp λk = p H (A T )k MA k p = (Re [p ])T (A T )k MA k (Re [p ]) + (Im [p ])T (A T )k MA k (Im [p ]) Since M ≥ 0, this implies 0 = (Re [p ])T (A T )k MA k (Re [p ]) = (Im [p ])T (A T )k MA k (Im [p ]) By hypothesis this implies lim A k (Re [p ]) = lim A k (Im [p ]) = 0 k→∞ k→∞ Therefore lim A k p = lim λk p = 0 k→∞ k→∞ which implies λ < 1. -90- CHAPTER 24 Solution 24.1 Since A T (k)A (k) = a 22 0 0 a 21 it is clear that λ 1/2 max (k) = max [ a 1 (k), a 2 (k) ] Thus Corollary 24.3 states that the state equation is uniformly stable if there exists a constant γ such that k Π max [ a 1 (i), i =j a 2 (i) ] ≤γ (#) for all k, j with k ≥ j. (Note that this condition holds if max [ a 1 (k), a 2 (k) ] ≤1 for all but a finite number of values of k.) Of course the condition (#) is not necessary. Consider x (k +1) = 0 1/ 9 x (k) 4 0 The eigenvalues are ± 2/ 3, so the state equation is uniformly stable, but clearly (#) fails. 
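A computational companion to Solution 23.6 and Theorem 23.7: exponential stability can be tested by solving the discrete-time Lyapunov equation A^T QA − Q = −I and checking that Q is positive definite. A minimal sketch, assuming SciPy's discrete Lyapunov solver; the matrix is an illustrative placeholder:

```python
# Lyapunov-based exponential stability test for x(k+1) = A x(k).
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

def exp_stable(A):
    # solve_discrete_lyapunov(a, q) solves a X a^T - X + q = 0, so take a = A^T
    Q = solve_discrete_lyapunov(A.T, np.eye(A.shape[0]))
    return bool(np.all(np.linalg.eigvalsh((Q + Q.T) / 2) > 0))

A = np.array([[0.0, 1.0], [-0.5, 0.3]])   # placeholder; |eigenvalues| < 1
print(exp_stable(A), np.abs(np.linalg.eigvals(A)).max() < 1)   # True True
```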
Solution 24.5 Following the hint, set $r(k_o) = 0$ and

$$r(k) = \sum_{j=k_o}^{k-1} \nu(j)\phi(j), \quad k \ge k_o + 1$$

and write the given inequality as

$$\phi(k) \le \psi(k) + \eta(k)r(k), \quad k \ge k_o + 1 \qquad (*)$$

Then, using nonnegativity of $\nu(k)$,

$$r(k+1) = r(k) + \nu(k)\phi(k) \le [1 + \nu(k)\eta(k)]\,r(k) + \nu(k)\psi(k), \quad k \ge k_o + 1$$

Since $1 + \eta(k)\nu(k) \ge 1$ for $k \ge k_o$,

$$r(k+1) \prod_{j=k_o}^{k} \frac{1}{1+\eta(j)\nu(j)} \le r(k) \prod_{j=k_o}^{k-1} \frac{1}{1+\eta(j)\nu(j)} + \nu(k)\psi(k) \prod_{j=k_o}^{k} \frac{1}{1+\eta(j)\nu(j)}, \quad k \ge k_o + 1$$

Iterating this inequality gives

$$r(k) \le \sum_{j=k_o}^{k-1} \nu(j)\psi(j) \prod_{i=j+1}^{k-1} \left[1 + \eta(i)\nu(i)\right], \quad k \ge k_o + 1$$

and substituting this into (*) yields the result.

Solution 24.7 By assumption $\|\Phi_A(k,j)\| \le \gamma$ for $k \ge j$. Treating $f(k, z(k))$ as an input, the complete solution formula is

$$z(k) = \Phi_A(k, k_o)z(k_o) + \sum_{j=k_o}^{k-1} \Phi_A(k, j+1)f(j, z(j)), \quad k \ge k_o + 1$$

This gives

$$\|z(k)\| \le \gamma\|z(k_o)\| + \sum_{j=k_o}^{k-1} \gamma\|f(j, z(j))\| \le \gamma\|z(k_o)\| + \sum_{j=k_o}^{k-1} \gamma\alpha_j\|z(j)\|, \quad k \ge k_o + 1$$

Applying Lemma 24.5,

$$\|z(k)\| \le \gamma\|z(k_o)\| \exp\left[\gamma \sum_{j=k_o}^{k-1} \alpha_j\right] \le \gamma\|z(k_o)\| \exp\left[\gamma \sum_{j=k_o}^{\infty} \alpha_j\right] \le \gamma e^{\gamma\alpha}\|z(k_o)\|, \quad k \ge k_o$$

This implies uniform stability. For the scalar example

$$A(k) = 1/2, \quad f(k, z(k)) = \begin{cases} 0, & k \ge 0 \\ z(k), & k < 0 \end{cases}, \quad \alpha_k = \begin{cases} 0, & k \ge 0 \\ 1, & k < 0 \end{cases}$$

we have

$$\sum_{j=k}^{\infty} \alpha_j = \begin{cases} 0, & k \ge 0 \\ -k, & k < 0 \end{cases}$$

which is bounded for each $k$. But for $k_o < 0$ the solution of this state equation yields

$$z(0) = (3/2)^{-k_o} z_o$$

Clearly any candidate bound $\gamma$ can be violated by choosing $-k_o$ sufficiently large, so the state equation is not uniformly stable.

CHAPTER 25

Solution 25.1 If $M(k_o, k_f)$ is not invertible, then there exists a nonzero $n \times 1$ vector $x_a$ such that

$$0 = x_a^T M(k_o, k_f)x_a = \sum_{j=k_o}^{k_f-1} x_a^T \Phi^T(j, k_o)C^T(j)C(j)\Phi(j, k_o)x_a = \sum_{j=k_o}^{k_f-1} \|C(j)\Phi(j, k_o)x_a\|^2$$

This implies

$$C(j)\Phi(j, k_o)x_a = 0, \quad j = k_o, \ldots, k_f - 1$$

which shows that the nonzero initial state $x_a$ yields the same output on the interval as does the zero initial state. Therefore the state equation is not observable. On the other hand, for any initial state $x_o$ we can write, just as in the proof of Theorem 25.9,

$$M(k_o, k_f)x_o = O^T(k_o, k_f) \begin{bmatrix} y(k_o) \\ \vdots \\ y(k_f-1) \end{bmatrix}$$

If $M(k_o, k_f)$ is invertible, then the initial state is uniquely determined by

$$x_o = M^{-1}(k_o, k_f)\,O^T(k_o, k_f) \begin{bmatrix} y(k_o) \\ \vdots \\ y(k_f-1) \end{bmatrix}$$

Solution 25.2 In general the claim is false. If $A(k)$ is zero, then

$$W(0, k_f) = \sum_{j=0}^{k_f-1} \Phi(k_f, j+1)b(j)b^T(j)\Phi^T(k_f, j+1) = b(k_f-1)b^T(k_f-1)$$

This $W(0, k_f)$ has rank at most 1, and if $n \ge 2$ the state equation is not reachable on $[0, k_f]$. The claim is true if $A(k)$ is invertible at each $k$. Let $k_f = n$ so that

$$W(0, n) = \sum_{j=0}^{n-1} \Phi(n, j+1)b(j)b^T(j)\Phi^T(n, j+1)$$

Since $\Phi(n, j+1)$ is invertible for $j = 0, \ldots, n-1$, let

$$b(k) = \Phi^{-1}(n, k+1)e_{k+1}, \quad k = 0, \ldots, n-1$$

where $e_k$ is the $k^{th}$ column of $I_n$. Then

$$W(0, n) = \sum_{j=0}^{n-1} e_{j+1}e_{j+1}^T = I_n$$

and the state equation is reachable on $[0, n]$.
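The reconstruction formula at the end of Solution 25.1 is easy to exercise numerically. Below is a minimal time-invariant sketch in Python with NumPy; the random matrices and the horizon $k_f = 6$ are illustrative assumptions, not data from the exercise.

```python
import numpy as np

rng = np.random.default_rng(1)
n, kf = 3, 6                 # state dimension and final time (k_o = 0)
A = rng.normal(size=(n, n))
C = rng.normal(size=(1, n))

# Stacked matrix O(0,kf): rows C Phi(j,0) = C A^j for j = 0, ..., kf-1
O = np.vstack([C @ np.linalg.matrix_power(A, j) for j in range(kf)])
M = O.T @ O                  # Gramian M(0,kf) of Solution 25.1

x0 = rng.normal(size=n)      # "unknown" initial state
Y = O @ x0                   # zero-input output sequence, stacked as in the text

# x0 = M^{-1} O^T [y(0); ...; y(kf-1)] when M(0,kf) is invertible
x0_hat = np.linalg.solve(M, O.T @ Y)
print(np.allclose(x0_hat, x0))   # True for this (generically observable) pair
```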
Solution 25.7 Suppose $W_O(k_o, k_f)$ is invertible. Given a $p \times 1$ vector $y_f$, let

$$u(k) = B^T(k)\Phi^T(k_f, k+1)C^T(k_f)W_O^{-1}(k_o, k_f)y_f, \quad k = k_o, \ldots, k_f - 1$$

and let $u(k) = 0$ for other values of $k$. Then it is easy to show that the zero-state response to this input yields $y(k_f) = y_f$. Thus the state equation is output reachable on $[k_o, k_f]$. Conversely, suppose the state equation is output reachable on $[k_o, k_f]$. If $W_O(k_o, k_f)$ is not invertible, then there exists a nonzero $p \times 1$ vector $y_a$ such that

$$0 = y_a^T W_O(k_o, k_f)y_a = \sum_{j=k_o}^{k_f-1} y_a^T C(k_f)\Phi(k_f, j+1)B(j)B^T(j)\Phi^T(k_f, j+1)C^T(k_f)y_a = \sum_{j=k_o}^{k_f-1} \|y_a^T C(k_f)\Phi(k_f, j+1)B(j)\|^2$$

Therefore

$$y_a^T C(k_f)\Phi(k_f, j+1)B(j) = 0, \quad j = k_o, \ldots, k_f - 1$$

But by output reachability, with $y_f = y_a$, there exists an input $u_a(k)$ such that

$$y_a = \sum_{j=k_o}^{k_f-1} C(k_f)\Phi(k_f, j+1)B(j)u_a(j)$$

Thus

$$y_a^T y_a = \sum_{j=k_o}^{k_f-1} y_a^T C(k_f)\Phi(k_f, j+1)B(j)u_a(j) = 0$$

and this implies $y_a = 0$. This contradiction shows that $W_O(k_o, k_f)$ must be invertible.

Note that if $\mathrm{rank}\,C(k_f) < p$, then $W_O(k_o, k_f)$ cannot be invertible, and the state equation cannot be output reachable. If $m = p = 1$, then

$$W_O(k_o, k_f) = \sum_{j=k_o}^{k_f-1} G^2(k_f, j)$$

Thus the state equation is output reachable on $[k_o, k_f]$ if and only if $G(k_f, j) \ne 0$ for some $j = k_o, \ldots, k_f - 1$.

Solution 25.13 We will prove that the state equation is reconstructible if and only if

$$\begin{bmatrix} C \\ CA \\ \vdots \\ CA^{n-1} \end{bmatrix} z = 0 \ \text{ implies } \ A^n z = 0 \qquad (*)$$

That is, if and only if the null space of the observability matrix is contained in the null space of $A^n$. First, suppose the state equation is not reconstructible. Then there exist $n \times 1$ vectors $x_a$ and $x_b$ such that $x_a \ne x_b$ and

$$\begin{bmatrix} C \\ \vdots \\ CA^{n-1} \end{bmatrix} x_a = \begin{bmatrix} C \\ \vdots \\ CA^{n-1} \end{bmatrix} x_b, \quad A^n x_a \ne A^n x_b$$

That is,

$$\begin{bmatrix} C \\ \vdots \\ CA^{n-1} \end{bmatrix} (x_a - x_b) = 0, \quad A^n(x_a - x_b) \ne 0$$

Thus the condition (*) fails. Now suppose the condition (*) fails and $z$ is such that

$$\begin{bmatrix} C \\ \vdots \\ CA^{n-1} \end{bmatrix} z = 0, \quad A^n z \ne 0$$

Obviously $z \ne 0$. Then for $x(0) = z$ the zero-input response is

$$y(k) = 0, \quad k = 0, \ldots, n-1 \qquad (+)$$

and $x(n) = A^n z \ne 0$. But the same output sequence is produced by $x(0) = 0$, and for this initial state $x(n) = 0$. Thus we cannot determine from the output (+) whether $x(n) = A^n z$ or $x(n) = 0$, which implies the state equation is not reconstructible.

CHAPTER 26

Solution 26.2 For the linear state equation

$$x(k+1) = \begin{bmatrix} 1 & k \\ 1 & 1 \end{bmatrix} x(k) + \begin{bmatrix} 0 \\ 1 \end{bmatrix} u(k)$$

easy computations give

$$R_2(k) = \begin{bmatrix} B(k) & \Phi(k+1,k)B(k-1) \end{bmatrix} = \begin{bmatrix} 0 & k \\ 1 & 1 \end{bmatrix}$$

and

$$R_3(k) = \begin{bmatrix} B(k) & \Phi(k+1,k)B(k-1) & \Phi(k+1,k-1)B(k-2) \end{bmatrix} = \begin{bmatrix} 0 & k & 2k-1 \\ 1 & 1 & k \end{bmatrix}$$

From the respective ranks ($R_2(0)$ has rank 1, while $R_3(k)$ has rank 2 for every $k$) the state equation is 3-step reachable, but not 2-step reachable.

Solution 26.4 The $(n+1)$-dimensional state equation

$$z(k+1) = \begin{bmatrix} A & 0 \\ c & 0 \end{bmatrix} z(k) + \begin{bmatrix} b \\ d \end{bmatrix} u(k), \quad y(k) = \begin{bmatrix} 0_{1\times n} & 1 \end{bmatrix} z(k) - u(k)$$

has the transfer function

$$H(z) = \begin{bmatrix} 0_{1\times n} & 1 \end{bmatrix} \begin{bmatrix} zI-A & 0 \\ -c & z \end{bmatrix}^{-1} \begin{bmatrix} b \\ d \end{bmatrix} - 1 = \begin{bmatrix} 0_{1\times n} & 1 \end{bmatrix} \begin{bmatrix} (zI-A)^{-1} & 0 \\ z^{-1}c(zI-A)^{-1} & z^{-1} \end{bmatrix} \begin{bmatrix} b \\ d \end{bmatrix} - 1$$

$$= z^{-1}c(zI-A)^{-1}b + z^{-1}d - 1 = z^{-1}G(z) - 1$$

Solution 26.6 By Theorem 26.8 $G(z)$ is realizable if and only if it is a matrix of (real-coefficient) strictly-proper rational functions. By partial fraction expansion of $G(z)/z$ we can write $G(z)$ in the form

$$G(z) = \sum_{l=1}^{m} \sum_{r=1}^{\sigma_l} G_{lr} \frac{z}{(z-\lambda_l)^r}$$

Here $\lambda_1, \ldots, \lambda_m$ are distinct complex numbers such that if $\lambda_L$ is complex, then $\lambda_M = \overline{\lambda_L}$ for some $M$. Furthermore the $p \times m$ complex matrices satisfy $G_{Mr} = \overline{G_{Lr}}$ for $r = 1, \ldots, \sigma_L$. From Table 1.10 the corresponding unit pulse response is

$$G(k) = \sum_{l=1}^{m} \sum_{r=1}^{\sigma_l} G_{lr} \binom{k}{r-1} \lambda_l^{k+1-r} \qquad (\#)$$

Thus we can state that a unit pulse response $G(k)$ is realizable if and only if (a) there exist positive integers $m, \sigma_1, \ldots, \sigma_m$, distinct complex numbers $\lambda_1, \ldots, \lambda_m$, and $\sigma_1 + \cdots + \sigma_m$ complex $p \times m$ matrices $G_{lr}$ such that (#) holds for all $k \ge 1$, and (b) if $\lambda_L$ is complex, then $\lambda_M = \overline{\lambda_L}$ for some $M$, and the $p \times m$ complex matrices satisfy $G_{Mr} = \overline{G_{Lr}}$ for $r = 1, \ldots, \sigma_L$.
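The transfer function identity in Solution 26.4 can be spot-checked through Markov parameters: the augmented state equation should have unit pulse response $-1, d, cb, cAb, \ldots$, that is, the pulse response of $z^{-1}G(z) - 1$. The following Python/NumPy sketch uses randomly generated $(A, b, c, d)$, which are illustrative assumptions rather than data from the exercise.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3
A = rng.normal(size=(n, n))
b = rng.normal(size=(n, 1))
c = rng.normal(size=(1, n))
d = rng.normal()

# Augmented (n+1)-dimensional realization from Solution 26.4
Abar = np.block([[A, np.zeros((n, 1))], [c, np.zeros((1, 1))]])
bbar = np.vstack([b, [[d]]])
cbar = np.hstack([np.zeros((1, n)), [[1.0]]])   # direct transmission term is -1

# Markov parameters of H(z) = z^{-1} G(z) - 1:
# h(0) = -1, h(1) = d, h(k) = c A^{k-2} b for k >= 2
h = [-1.0, (cbar @ bbar).item()]
for k in range(2, 7):
    h.append((cbar @ np.linalg.matrix_power(Abar, k - 1) @ bbar).item())

g = [d] + [(c @ np.linalg.matrix_power(A, k) @ b).item() for k in range(5)]
print(np.allclose(h[1:], g))   # True: pulse response of H is G delayed one step
```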
Solution 26.8 Suppose the given state equation is minimal and of dimension $n$. We can write its (strictly-proper, rational) transfer function as

$$G(z) = \frac{c\,\mathrm{adj}(zI-A)\,b}{\det(zI-A)}$$

where the polynomial $\det(zI-A)$ has degree $n$. If the numerator and denominator polynomials have a common root, then this root can be canceled without changing the inverse $z$-transform of $G(z)$. Therefore, following Example 26.10, we can write by inspection a dimension-$(n-1)$ realization of the unit pulse response of the original state equation. This contradicts the assumed minimality, and the contradiction gives that the two polynomials cannot have a common root.

Now suppose the polynomials $\det(zI-A)$ and $c\,\mathrm{adj}(zI-A)\,b$ have no common root, but that the given state equation is not minimal. Then there is a minimal realization

$$z(k+1) = Fz(k) + gu(k), \quad y(k) = hz(k)$$

and we then have

$$\frac{h\,\mathrm{adj}(zI-F)\,g}{\det(zI-F)} = \frac{c\,\mathrm{adj}(zI-A)\,b}{\det(zI-A)}$$

where the polynomial $\det(zI-F)$ has degree no larger than $n-1$. This implies that the polynomials $\det(zI-A)$ and $c\,\mathrm{adj}(zI-A)\,b$ have a common root, a contradiction. Therefore the given state equation is minimal.

Solution 26.11 This is essentially the same as Solution 11.12.

Solution 26.12 Either by writing a minimal realization of $G(z)$ in the form of Example 26.10 and computing $cA^k b$, $k = 0, \ldots, 4$, or by long division of $G(z)$, it is easy to verify the first 5 Markov parameters. For the second part we can either work with an assumed transfer function, or assume a dimension-2 state equation of the form

$$x(k+1) = \begin{bmatrix} 0 & 1 \\ -a_0 & -a_1 \end{bmatrix} x(k) + \begin{bmatrix} 0 \\ 1 \end{bmatrix} u(k), \quad y(k) = \begin{bmatrix} c_0 & c_1 \end{bmatrix} x(k)$$

From the latter approach, setting $cb = 0$, $cAb = 1$, $cA^2b = 1/2$, $cA^3b = 1/2$ easily yields $c_1 = 0$, $c_0 = 1$, $a_0 = -1/4$, $a_1 = -1/2$. (Note that with this companion form $cA^2b = -a_1$ and $cA^3b = a_1^2 - a_0$.)
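A direct check of the dimension-2 realization just obtained, sketched in Python with NumPy. Note the sign convention: with $a_0 = -1/4$ and $a_1 = -1/2$, the bottom row of $A$ is $(-a_0, -a_1) = (1/4, 1/2)$.

```python
import numpy as np

# Dimension-2 realization from Solution 26.12; bottom row of A is (-a0, -a1)
a0, a1 = -0.25, -0.5
A = np.array([[0.0, 1.0], [-a0, -a1]])
b = np.array([[0.0], [1.0]])
c = np.array([[1.0, 0.0]])          # c0 = 1, c1 = 0

markov = [(c @ np.linalg.matrix_power(A, k) @ b).item() for k in range(4)]
print(markov)                        # [0.0, 1.0, 0.5, 0.5]
```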
CHAPTER 27

Solution 27.1 Similar to Solution 12.1.

Solution 27.4 Suppose the entry $G_{ij}(z)$ has one pole at $z = 1$, that is,

$$G_{ij}(z) = \frac{N_{ij}(z)}{(z-1)D_{ij}(z)}$$

where all roots of the polynomial $D_{ij}(z)$ have magnitude less than unity (so $D_{ij}(1) \ne 0$), and the polynomial $N_{ij}(z)$ satisfies $N_{ij}(1) \ne 0$. Suppose that the $m \times 1$ input $U(z)$ has all components zero except for $U_j(z) = z/(z-1)$. Then the $i^{th}$ component of the output is given by

$$Y_i(z) = \frac{z\,N_{ij}(z)}{(z-1)^2 D_{ij}(z)}$$

By partial fraction expansion $y_i(k)$ includes decaying exponential terms, possibly a constant term, and the term

$$\frac{N_{ij}(1)}{D_{ij}(1)}\,k, \quad k \ge 0$$

Since this term is unbounded, every realization of $G(z)$ fails to be uniform bounded-input, bounded-output stable.

Solution 27.7 The claim is not true in the time-varying case. Consider the scalar state equation

$$x(k+1) = x(k) + \delta(k)u(k), \quad y(k) = x(k)$$

where $\delta(k)$ is the unit pulse. The zero-state response to any input is

$$y(k) = \begin{cases} 0, & k \ge k_o, \ \text{when } k_o > 0 \\ u(0), & k \ge 1, \ \text{when } k_o \le 0 \end{cases}$$

(with $y(k) = 0$ for $k_o \le k \le 0$ in the second case). Thus the state equation is uniform bounded-input, bounded-output stable with $\eta = 1$. However for $k_o = 0$ and $u(k) = (1/2)^k$ we have $u(k) \to 0$ as $k \to \infty$, but $y(k) = 1$ for all $k \ge 1$.

For the time-invariant case the claim can be proved as follows. Assume $u(k) \to 0$ as $k \to \infty$. Given $\varepsilon > 0$ we will find a $K$ such that $\|y(k)\| \le \varepsilon$, $k \ge K$, which shows that $y(k) \to 0$ as $k \to \infty$. With

$$y(k) = \sum_{j=0}^{k} G(k-j)u(j)$$

and an input signal $u(k)$ such that $u(k) \to 0$ as $k \to \infty$, let

$$\mu = \sup_{k \ge 0} \|u(k)\|, \quad \eta = \sum_{k=0}^{\infty} \|G(k)\|$$

The first constant is finite for a well-defined sequence that goes to zero, and the second is finite by uniform bounded-input, bounded-output stability. Then there is a positive integer $K_1$ such that

$$\|u(k)\| \le \frac{\varepsilon}{2\eta}, \quad k \ge K_1, \qquad \sum_{k=K_1}^{\infty} \|G(k)\| \le \frac{\varepsilon}{2\mu}$$

Let $K = 2K_1$. Then for $k \ge K$ we have

$$\|y(k)\| \le \mu \sum_{j=0}^{K_1-1} \|G(k-j)\| + \frac{\varepsilon}{2\eta} \sum_{j=K_1}^{k} \|G(k-j)\| \le \mu \sum_{q=k-K_1+1}^{k} \|G(q)\| + \frac{\varepsilon}{2\eta} \sum_{q=0}^{k-K_1} \|G(q)\| \le \mu\,\frac{\varepsilon}{2\mu} + \frac{\varepsilon}{2\eta}\,\eta = \varepsilon$$

where the first sum is bounded using $k - K_1 + 1 > K_1$.

Solution 27.8 Similar to Solution 12.12.

CHAPTER 28

Solution 28.2 Lemma 16.18 gives that if $V_{11}$ and $V$ are invertible, then

$$V^{-1} = \begin{bmatrix} V_{11} & V_{12} \\ V_{21} & V_{22} \end{bmatrix}^{-1} = \begin{bmatrix} V_{11}^{-1} + V_{11}^{-1}V_{12}V_a^{-1}V_{21}V_{11}^{-1} & -V_{11}^{-1}V_{12}V_a^{-1} \\ -V_a^{-1}V_{21}V_{11}^{-1} & V_a^{-1} \end{bmatrix}$$

where $V_a = V_{22} - V_{21}V_{11}^{-1}V_{12}$. From the expression $VV^{-1} = I$, written as

$$\begin{bmatrix} V_{11} & V_{12} \\ V_{21} & V_{22} \end{bmatrix} \begin{bmatrix} W_{11} & W_{12} \\ W_{21} & W_{22} \end{bmatrix} = I$$

we obtain

$$V_{11}W_{11} + V_{12}W_{21} = I, \quad V_{21}W_{11} + V_{22}W_{21} = 0$$

Under the assumption that $V_{11}$ and $V_{22}$ are invertible these imply

$$W_{11} = V_{11}^{-1} - V_{11}^{-1}V_{12}W_{21}, \quad W_{21} = -V_{22}^{-1}V_{21}W_{11}$$

Solving for $W_{11}$ gives

$$W_{11} = \left(V_{11} - V_{12}V_{22}^{-1}V_{21}\right)^{-1}$$

and comparing this with the 1,1-block of $V^{-1}$ from Lemma 16.18 gives

$$\left(V_{11} - V_{12}V_{22}^{-1}V_{21}\right)^{-1} = V_{11}^{-1} + V_{11}^{-1}V_{12}\left(V_{22} - V_{21}V_{11}^{-1}V_{12}\right)^{-1}V_{21}V_{11}^{-1}$$

Solution 28.3 Given $\alpha > 1$ consider

$$z(k+1) = \hat{A}z(k) + \hat{B}u(k)$$

where $\hat{A} = \alpha A$ and $\hat{B} = \alpha B$. It is easy to see that reachability is preserved, and if we choose $K$ such that

$$z(k+1) = (\hat{A} + \hat{B}K)z(k) = \alpha(A + BK)z(k)$$

is uniformly exponentially stable, then by Lemma 28.7 we have that $x(k+1) = (A + BK)x(k)$ is uniformly exponentially stable with rate $\alpha$. So choose, by Theorem 28.9,

$$K = -\hat{B}^T(\hat{A}^T)^n \left[\sum_{k=0}^{n} \hat{A}^k\hat{B}\hat{B}^T(\hat{A}^T)^k\right]^{-1} \hat{A}^{n+1}$$

That is,

$$K = -\alpha B^T(\alpha A^T)^n \left[\sum_{k=0}^{n} (\alpha A)^k(\alpha B)(\alpha B)^T(\alpha A^T)^k\right]^{-1} (\alpha A)^{n+1} = -B^T(A^T)^n \left[\sum_{k=0}^{n} \alpha^{-2(n-k)}A^kBB^T(A^T)^k\right]^{-1} A^{n+1}$$

Solution 28.4 Similar to Solution 13.11. However for the time-invariant case the reachability matrix rank test can be used, rather than the eigenvector test, by writing

$$\begin{bmatrix} B & (A+BK)B & (A+BK)^2B & \cdots \end{bmatrix} = \begin{bmatrix} B & AB & A^2B & \cdots \end{bmatrix} \begin{bmatrix} I & KB & KAB + (KB)^2 & \cdots \\ 0 & I & KB & \cdots \\ 0 & 0 & I & \cdots \\ \vdots & \vdots & \vdots & \ddots \end{bmatrix}$$

Solution 28.6 Similar to Solution 2.8.

Solution 28.8 Supposing that the linear state equation is reachable, there exists $K$ such that all eigenvalues of $A + BK$ have magnitude less than unity. Therefore $(I - A - BK)$ is invertible, and if we suppose

$$\begin{bmatrix} A - I & B \\ C & 0 \end{bmatrix}$$

is invertible, then $C(I - A - BK)^{-1}B$ is invertible from Exercise 28.6. Then given any diagonal $m \times m$ matrix $\Lambda$, we can choose

$$N = \left[C(I - A - BK)^{-1}B\right]^{-1}\Lambda$$

to obtain $\hat{G}(1) = \Lambda$. For this closed-loop system, any $x(0)$ and any constant input $r(k) = r_o$ yields

$$\lim_{k\to\infty} y(k) = \Lambda r_o$$

by the final value theorem. That is, the steady-state value of the response to constant inputs is 'noninteracting.' (For finite time values, or other inputs, interaction typically occurs.)
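A small simulation illustrates the steady-state decoupling in Solution 28.8. In this Python/NumPy sketch the chosen $A$ is already stable, so $K = 0$ stands in for the stabilizing feedback gain; all numerical values are illustrative only.

```python
import numpy as np

# Steady-state decoupling per Solution 28.8, with K = 0 since A is stable
A = np.array([[0.5, 0.1], [0.0, 0.3]])   # eigenvalues 0.5 and 0.3
B = np.eye(2)
C = np.eye(2)
Lam = np.diag([2.0, -1.0])               # desired dc gain matrix Lambda

G1 = C @ np.linalg.solve(np.eye(2) - A, B)   # C (I - A - BK)^{-1} B, K = 0
N = np.linalg.solve(G1, Lam)                 # N = [C(I-A-BK)^{-1}B]^{-1} Lambda

x = np.zeros(2)
ro = np.array([1.0, 1.0])                # constant reference input r(k) = ro
for _ in range(200):                     # simulate x(k+1) = A x(k) + B N ro
    x = A @ x + B @ (N @ ro)
print(C @ x, Lam @ ro)                   # steady-state output equals Lambda ro
```

Running this prints the same vector twice, confirming $\lim_{k\to\infty} y(k) = \Lambda r_o$ for this instance.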
CHAPTER 29

Solution 29.1 The error $e_b(k)$ satisfies

$$e_b(k+1) = z(k+1) - P_b(k+1)x(k+1) = \tilde{F}(k)z(k) + \left[\tilde{G}_b(k)C(k) - P_b(k+1)A(k)\right]x(k) + \left[\tilde{G}_a(k) - P_b(k+1)B(k)\right]u(k)$$

$$= \tilde{F}(k)z(k) - \tilde{F}(k)P_b(k)x(k) = \tilde{F}(k)e_b(k)$$

Therefore $e_b(k) \to 0$ exponentially as $k \to \infty$. Now

$$e(k) \triangleq x(k) - \hat{x}(k) = x(k) - H(k)C(k)x(k) - J(k)z(k) = -J(k)e_b(k) + \left[I - H(k)C(k) - J(k)P_b(k)\right]x(k) = -J(k)e_b(k)$$

Therefore if $J(k)$ is bounded, that is, $\|J(k)\| \le \alpha < \infty$ for all $k$, then $e_b(k) \to 0$ implies $e(k) \to 0$ as $k \to \infty$, and $\hat{x}(k)$ is an asymptotic estimate of $x(k)$.

Solution 29.2 The plant is

$$\begin{bmatrix} x_a(k+1) \\ x_b(k+1) \end{bmatrix} = \begin{bmatrix} F_{11}(k) & F_{12}(k) \\ F_{21}(k) & F_{22}(k) \end{bmatrix} \begin{bmatrix} x_a(k) \\ x_b(k) \end{bmatrix} + \begin{bmatrix} G_1(k) \\ G_2(k) \end{bmatrix} u(k), \quad y(k) = \begin{bmatrix} I_p & 0 \end{bmatrix} \begin{bmatrix} x_a(k) \\ x_b(k) \end{bmatrix}$$

With $P_b(k) = \begin{bmatrix} -\tilde{H}(k) & I_{n-p} \end{bmatrix}$ we have

$$\begin{bmatrix} C(k) \\ P_b(k) \end{bmatrix}^{-1} = \begin{bmatrix} I_p & 0 \\ -\tilde{H}(k) & I_{n-p} \end{bmatrix}^{-1} = \begin{bmatrix} I_p & 0 \\ \tilde{H}(k) & I_{n-p} \end{bmatrix}$$

Then the equations in Exercise 29.1 give

$$\tilde{F}(k) = F_{22}(k) - \tilde{H}(k+1)F_{12}(k)$$

$$\tilde{G}_b(k) = \tilde{F}(k)\tilde{H}(k) - \tilde{H}(k+1)F_{11}(k) + F_{21}(k)$$

$$\tilde{G}_a(k) = -\tilde{H}(k+1)G_1(k) + G_2(k)$$

The observer estimate is

$$\hat{x}(k) = \begin{bmatrix} I_p \\ \tilde{H}(k) \end{bmatrix} y(k) + \begin{bmatrix} 0 \\ I_{n-p} \end{bmatrix} z(k) = \begin{bmatrix} I_p \\ \tilde{H}(k) \end{bmatrix} x_a(k) + \begin{bmatrix} 0 \\ I_{n-p} \end{bmatrix} z(k) = \begin{bmatrix} x_a(k) \\ \tilde{H}(k)x_a(k) + z(k) \end{bmatrix}$$

Therefore

$$\hat{x}_a(k) = x_a(k), \quad \hat{x}_b(k) = \tilde{H}(k)x_a(k) + z(k)$$

where

$$z(k+1) = \tilde{F}(k)z(k) + \tilde{G}_a(k)u(k) + \tilde{G}_b(k)y(k)$$

This is exactly the same as the reduced-dimension observer in the text.

Solution 29.5 Similar to Solution 15.6.

Solution 29.6 Similar to Solution 15.10.
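A scalar-block sketch of this reduced-dimension observer in Python; all numerical values, including the gain $\tilde{H} = 2$ that makes $\tilde{F} = F_{22} - \tilde{H}F_{12} = 0.4$, are hypothetical choices for illustration.

```python
import numpy as np

# Time-invariant plant blocks (p = 1 measured state, n - p = 1 unmeasured)
F11, F12, F21, F22 = 0.9, 0.2, 0.1, 0.8
G1, G2 = 1.0, 0.5
Ht = 2.0                         # observer gain H-tilde

Ft = F22 - Ht * F12              # F~ = 0.4, the observer error dynamics
Gb = Ft * Ht - Ht * F11 + F21    # G~_b
Ga = -Ht * G1 + G2               # G~_a

xa, xb, z = 1.0, -2.0, 0.0       # plant state and observer state
for k in range(40):
    u = np.sin(0.3 * k)          # arbitrary bounded input
    y = xa                       # y(k) = x_a(k)
    xb_hat = Ht * y + z          # x^_b(k) = H~ x_a(k) + z(k)
    err = xb_hat - xb            # observer error e_b(k) at time k
    z = Ft * z + Ga * u + Gb * y # z(k+1)
    xa, xb = (F11 * xa + F12 * xb + G1 * u,
              F21 * xa + F22 * xb + G2 * u)

print(abs(err))                  # decays like |F~|^k = 0.4^k, per Solution 29.1
```

The printed error is essentially zero, consistent with $e_b(k+1) = \tilde{F}(k)e_b(k)$ from Solution 29.1.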