5/12/2016

Batch Calculation of the Residues and Their Sensitivities
Or: How to compute almost any derivative using >> sum(prod([combnk(factors), dfactors.']))
Bradley T. Burchett

Outline
• Motivation - Brute Force Control System Optimization
• Sylvester's Expansion
• Partial Fraction Expansion - Batch Method
• Numerical Examples
• Derivatives of the Residues
• Numerical Examples
• Conclusions

Motivation: Brute Force Optimization of Control Systems
• Infinite-horizon optimal linear feedback controllers are determined by minimizing a quadratic cost function
• subject to the constraints defined by the system state dynamics.

Motivation continued
• In the case of state feedback, there is a direct solution from the Algebraic Riccati Equation.
• In the case of output feedback, only iterative solutions are available, the most historic involving a pair of Lyapunov equations.
• Linear dynamics always have a closed-form solution.
• Idea: substitute the closed-form solution of the closed-loop system into the cost function, transforming the problem into an unconstrained optimization.
• The reformulated output feedback problem is termed "brute force optimization of control systems."

A Plethora of Methods
• The closed-form solution of the state dynamics at any time is written in terms of the matrix exponential of At.
• Thus for every method of computing the matrix exponential, there exists a corresponding brute force optimization method.
• Investigations to date have used the following methods:
  – Dyadic decomposition (Burchett, Costello 1997)
  – Padé approximation (Burchett, Costello 2001)
  – Sylvester's expansion (current unpublished work)

Sylvester's Expansion
• Sylvester's expansion for systems with roots of multiplicity m_k is written as:
• n_k is the combined numerator polynomial from the partial fraction expansion of:

• Using Sylvester's expansion, the quadratic cost function can be rewritten
• where
• Φ(A) is the minimum polynomial of A.
• The cost function in this form can be integrated in closed form, yielding (this requires integration by parts and invoking mathematical induction):
• We would like to invoke gradient-based methods with analytic derivatives if possible.
• This requires computing the derivatives of all factors in the equation above (including eigenvalues and residues).

Partial Fraction Expansion
• The partial fraction expansion of a system transfer function
• can be written
• Combining the right-hand side of the previous equation over a common denominator, then equating powers of s in the numerators of the two sides, a set of n linear equations emerges. This set of equations can be written in matrix form as Ξa = H.
• H is a vector of coefficients from the convolved form of the original numerator polynomial.
• Ξ has a distinct pattern in terms of row i and column j.
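To make the pattern concrete, here is a small Python sketch of the batch construction for distinct poles. It is my own illustration, not from the slides: Ξ(i, j) is the elementary symmetric polynomial of degree i−1 in the negated poles with pole j excluded, and the residues are recovered by solving Ξa = H. The transfer function H(s) = (s + 3)/((s + 1)(s + 2)(s + 4)) is a made-up example, solved exactly with Fractions.

```python
from fractions import Fraction
from itertools import combinations

def elem_sym(vals, k):
    """Elementary symmetric polynomial e_k of vals (e_0 = 1)."""
    total = Fraction(0)
    for combo in combinations(vals, k):
        term = Fraction(1)
        for v in combo:
            term *= v
        total += term
    return total

def build_xi(poles):
    """Xi[i][j] = e_i of the negated poles with pole j left out
    (0-indexed form of the slide's row-i, column-j pattern)."""
    n = len(poles)
    return [[elem_sym([-p for t, p in enumerate(poles) if t != j], i)
             for j in range(n)] for i in range(n)]

def solve(A, b):
    """Tiny Gauss-Jordan solver over exact Fractions."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        piv = next(r for r in range(c, n) if M[r][c] != 0)
        M[c], M[piv] = M[piv], M[c]
        for r in range(n):
            if r != c and M[r][c] != 0:
                f = M[r][c] / M[c][c]
                M[r] = [x - f * y for x, y in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

# Example (mine, not from the talk): H(s) = (s + 3)/((s + 1)(s + 2)(s + 4))
poles = [Fraction(-1), Fraction(-2), Fraction(-4)]
H = [Fraction(0), Fraction(1), Fraction(3)]  # numerator s + 3, highest power first, padded to n
residues = solve(build_xi(poles), H)
print(residues)  # [Fraction(2, 3), Fraction(-1, 2), Fraction(-1, 6)]
```

The first row of Ξ is all ones (e_0), and the last row holds the products of the negated poles with one pole excluded, exactly the pattern shown on the next slide.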
• In Matlab code, that is
  » Xi(i,j) = sum(prod(combnk(-poles([1:j-1, j+1:n]), i-1), 2));
  (the dimension argument 2 ensures the product is taken across each combination).

• For the distinct poles case:

$$
\begin{bmatrix}
1 & 1 & \cdots & 1 \\
\sum_{j\neq 1} (-p_j) & \sum_{j\neq 2} (-p_j) & \cdots & \sum_{j\neq m} (-p_j) \\
\sum_{\substack{j<k \\ j,k\neq 1}} (-p_j)(-p_k) & \sum_{\substack{j<k \\ j,k\neq 2}} (-p_j)(-p_k) & \cdots & \sum_{\substack{j<k \\ j,k\neq m}} (-p_j)(-p_k) \\
\vdots & \vdots & & \vdots \\
\prod_{j\neq 1} (-p_j) & \prod_{j\neq 2} (-p_j) & \cdots & \prod_{j\neq m} (-p_j)
\end{bmatrix}
\begin{bmatrix} n_1 \\ n_2 \\ \vdots \\ n_m \end{bmatrix}
=
\begin{bmatrix} \eta_1 \\ \eta_2 \\ \vdots \\ \eta_m \end{bmatrix}
$$

• For systems with repeated poles, the PFE is:
• By repeating the algebraic process outlined above, we discover that the resulting system of equations Ξa = H has the same form as Eq. 4, except that for the terms involving repeated poles, the corresponding column of the Ξ matrix can be built from the bottom up using Eq. 4, as if the system were lacking m_k − j + 1 occurrences of the repeated pole.
• Or more specifically:

Numerical Examples
• First case: distinct poles:
• The PFE is:
• The matrix equation is
• where
• The residues are a_{1,2} = 0.107 ± 0.063j, a_{3,4} = 0.274 ± 0.296j.

• Second case: repeated poles
• The PFE is:
• The matrix equation is
• The residues are a_{1,2} = −0.25, a_{3,4} = ±0.25j.

Derivatives of the Residues
• Since we have constructed the residue calculation as a linear system, a = Ξ⁻¹H,
• the derivatives are found by application of the product rule:

$$
\frac{\partial a}{\partial K} = \frac{\partial \Xi^{-1}}{\partial K} H + \Xi^{-1}\frac{\partial H}{\partial K}
$$

• Then, using the matrix inversion fact

$$
\frac{\partial \Xi^{-1}}{\partial K} = -\Xi^{-1}\,\frac{\partial \Xi}{\partial K}\,\Xi^{-1}
$$

• we obtain

$$
\frac{\partial a}{\partial K} = \Xi^{-1}\left(\frac{\partial H}{\partial K} - \frac{\partial \Xi}{\partial K}\,a\right)
$$

• which is good news indeed: we do not require the inverse of the matrix derivative.
• For the distinct poles case, the matrix derivative follows from the original pattern:

$$
\frac{\partial \Xi_{i,j}}{\partial K}
= \sum_{\substack{m=1 \\ m\neq j}}^{n}
-\frac{\partial p_m}{\partial K}
\sum_{\substack{k_1 < \dots < k_{i-2} \\ k_\ell \neq m,\, j}}
\;\prod_{\ell=1}^{i-2} (-p_{k_\ell})
$$

• In Matlab code, one corrected loop form is
  » q  = -poles([1:j-1, j+1:n]);
  » dq = -dpoles([1:j-1, j+1:n]);
  » dXi(i,j) = 0;
  » for m = 1:n-1
  »   dXi(i,j) = dXi(i,j) + dq(m)*sum(prod(combnk(q([1:m-1, m+1:end]), i-2), 2));
  » end
  (for i = 2 the inner sum is an empty product and should be taken as 1).

• More explicitly:

$$
\frac{\partial \Xi}{\partial K} =
\begin{bmatrix}
0 & 0 & \cdots & 0 \\
\sum_{j\neq 1} -\dfrac{\partial p_j}{\partial K} &
\sum_{j\neq 2} -\dfrac{\partial p_j}{\partial K} & \cdots &
\sum_{j\neq n} -\dfrac{\partial p_j}{\partial K} \\
\sum_{j,k\neq 1} -\dfrac{\partial p_j}{\partial K}(-p_k) &
\sum_{j,k\neq 2} -\dfrac{\partial p_j}{\partial K}(-p_k) & \cdots &
\sum_{j,k\neq n} -\dfrac{\partial p_j}{\partial K}(-p_k) \\
\vdots & \vdots & & \vdots \\
\sum_{\substack{m=1 \\ m\neq 1}}^{n} -\dfrac{\partial p_m}{\partial K} \prod_{k\neq m,1} (-p_k) &
\sum_{\substack{m=1 \\ m\neq 2}}^{n} -\dfrac{\partial p_m}{\partial K} \prod_{k\neq m,2} (-p_k) & \cdots &
\sum_{\substack{m=1 \\ m\neq n}}^{n} -\dfrac{\partial p_m}{\partial K} \prod_{k\neq m,n} (-p_k)
\end{bmatrix}
$$

Numerical Example
• Considering the frequency-domain function
• Assuming the pole derivatives are known as
• The derivative matrix is
• where
• The derivative of the H vector is taken to be ∂H/∂K = [0 0 0 0 1]
• The resulting residue derivatives are:

Repeated Poles Example
• Consider again the system
• with pole derivatives
• This leads to an enigma:
  – when constructing the columns of the Ξ matrix corresponding to the repeated pole, we ignored one occurrence of the repeated pole
  – when constructing the derivative of the Ξ matrix, which pole derivative do I ignore?
• I can show what the numbers should be, from finite differencing, but I cannot determine the correct pattern...

• The derivative of the Ξ matrix is
• The residue sensitivities are

Conclusions
• Batch calculation seems to be a viable method for finding derivatives of the residues.
• The important case of systems with repeated poles needs further investigation.
• Help ?!
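For the distinct-pole case, the sensitivity formula ∂a/∂K = Ξ⁻¹(∂H/∂K − (∂Ξ/∂K)a) can be checked against finite differencing, as the slides suggest. The following Python sketch is my own: the pole parameterization p₁(K) = −1 − K (with the other poles fixed) and the numerator s + 3 are invented for illustration, ∂Ξ/∂K is built from the product-rule pattern above, and the analytic sensitivities are compared with a central difference.

```python
import math
from itertools import combinations

def e_sym(vals, k):
    """Elementary symmetric polynomial e_k; e_0 = 1, e_{-1} = 0."""
    if k < 0:
        return 0.0
    return sum(math.prod(c) for c in combinations(vals, k))

def build_xi(poles):
    """Xi[i][j] = e_i of the negated poles with pole j excluded."""
    n = len(poles)
    return [[e_sym([-p for t, p in enumerate(poles) if t != j], i)
             for j in range(n)] for i in range(n)]

def build_dxi(poles, dpoles):
    """Product-rule derivative: each term -dp_m/dK times e_{i-1}
    of the negated poles with poles j and m both excluded."""
    n = len(poles)
    dXi = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            for m in range(n):
                if m != j:
                    rest = [-p for t, p in enumerate(poles) if t not in (j, m)]
                    dXi[i][j] += -dpoles[m] * e_sym(rest, i - 1)
    return dXi

def solve(A, b):
    """Gauss-Jordan elimination with partial pivoting."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        piv = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[piv] = M[piv], M[c]
        for r in range(n):
            if r != c:
                f = M[r][c] / M[c][c]
                M[r] = [x - f * y for x, y in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

def matvec(A, x):
    return [sum(a * v for a, v in zip(row, x)) for row in A]

H = [0.0, 1.0, 3.0]                       # numerator s + 3, padded

def poles_of(K):                          # invented parameterization
    return [-1.0 - K, -2.0, -4.0]

def residues(K):
    return solve(build_xi(poles_of(K)), H)

K, h = 0.5, 1e-6
p, dp = poles_of(K), [-1.0, 0.0, 0.0]     # dp/dK for this parameterization
a = residues(K)
# analytic: da = Xi^{-1} (dH/dK - dXi/dK * a), with dH/dK = 0 here
da = solve(build_xi(p), [-v for v in matvec(build_dxi(p, dp), a)])
# central finite difference for comparison
fd = [(u - v) / (2 * h) for u, v in zip(residues(K + h), residues(K - h))]
```

The analytic and finite-difference sensitivities agree to roughly the truncation error of the central difference, which is the kind of check the repeated-pole case still lacks.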