AC&ST – AUTOMATIC CONTROL AND SYSTEM THEORY
TIME EVOLUTION OF DYNAMIC SYSTEMS
Claudio Melchiorri
Dipartimento di Ingegneria dell'Energia Elettrica e dell'Informazione (DEI)
Università di Bologna
Email: claudio.melchiorri@unibo.it

Summary

Two introductory first-order examples:

   ẋ1 = −(R1/L1) x1 + (1/L1) u,        y = R1 x1
   ẋ2 = −1/(R2 C) x2 + 1/(R2 C) u,     y = x2

PROBLEM: computation of the time evolution of the state x(t) of a dynamic system Σ (u: input, x: state, y: output).

1) Non-linear systems
   ẋ(t) = f(x(t), u(t), t),   x(t0) = x0      →   x(t) = ??

2) Linear time-varying systems
   ẋ(t) = A(t) x(t) + B(t) u(t),   x(t0) = x0   →   x(t) = Φ(t, t0) x0 + ∫_{t0}^{t} Φ(t, τ) B(τ) u(τ) dτ

3) Linear time-invariant systems
   ẋ(t) = A x(t) + B u(t),   x(0) = x0          →   x(t) = e^{At} x0 + ∫_{0}^{t} e^{A(t−τ)} B u(τ) dτ

Systems in vector spaces

We consider systems in input-state-output form, where u, x, y are in general elements of vector spaces with proper dimensions (u: input, x: state, y: output). Then we have:
1. Single-Input Single-Output (SISO) systems if p = q = 1
2. Multi-Input Multi-Output (MIMO) systems if p > 1, q > 1
(p and q being the dimensions of the input and of the output.)

Mathematical models
• Linear models: time-invariant or time-varying, with A(t), B(t), C(t), D(t) piecewise continuous
• Non-linear models: time-invariant or time-varying

A linear time-invariant system is represented:
• in the MIMO case by 4 matrices (A, B, C, D)
• in the SISO case by (A, b, c, d).

[Block diagram: u → B → integrator (initial condition x(0)) → x → C → (+) → y, with the direct term D from u to the output sum; f denotes the forcing action at the integrator input.]

u: input;   y: output;   f: forcing action;   x: state
A: system matrix
B: input distribution matrix
C: output distribution matrix
D: algebraic input/output connection matrix

Problem: computation of the time evolution of the state x(t) given
• the initial conditions x0
• the input u(·)

[Figure: a sample input u(t) over the intervals t1, t2, t3, and the corresponding state trajectory x(t) in the state space, starting from x(0), with the set of admissible velocities.]

Example: two-dof robot

[Figure: planar two-dof robot with joint angles θ1, θ2.]

θi: joint variable            τi: joint torque
mi: link mass                 ai: link length
aci: centre-of-mass position  Ii: link inertia
g: gravity acceleration       Si, Ci: sin(θi), cos(θi)

Given the torques τ1 and τ2, compute the time evolution of θ1 and θ2. Not simple! … It is the same problem as the general non-linear case ẋ(t) = f(x(t), u(t), t), with state x = (θ1, θ2, θ̇1, θ̇2).

Non-linear systems

Theorem. The differential equation ẋ(t) = f(x(t), t), x(t0) = x0, admits a unique solution x(t) if:
1. the function f(x, ·) is piecewise continuous ∀ x ∈ Rn, ∀ t ≥ t0;
2. the following Lipschitz condition holds ∀ t ≥ t0 (not a discontinuity point for f(x, ·)) and for any pair of vectors x1, x2:
   ‖f(x1, t) − f(x2, t)‖ ≤ k ‖x1 − x2‖,   k > 0.

Proof. Based on the Peano-Picard successive approximations method.

Corollary. The solution x(t) is a continuous function.
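The theorem guarantees existence and uniqueness but not a closed-form expression; in practice the state evolution of a non-linear system is computed numerically, as discussed next. A minimal Matlab sketch, using a single pendulum as a stand-in for the two-dof robot of the example (the parameters and the input torque below are assumed, not taken from the slides):

   % Numerical computation of the state evolution of a non-linear system
   % (a minimal sketch; a single pendulum is used as a stand-in for the
   % two-dof robot, whose full dynamic model is not reproduced here).
   m = 1; l = 0.5; g = 9.81; b = 0.1;        % assumed physical parameters
   tau = @(t) 0.5*sin(t);                    % assumed input torque
   % state x = [theta; theta_dot],  x_dot = f(x, u(t), t)
   f = @(t, x) [ x(2);
                 ( tau(t) - b*x(2) - m*g*l*sin(x(1)) ) / (m*l^2) ];
   x0 = [0; 0];                              % initial state
   [t, x] = ode45(f, [0 10], x0);            % Runge-Kutta based solver
   plot(t, x(:,1), t, x(:,2)), grid on
   legend('\theta(t)', 'd\theta/dt'), xlabel('t  [s]')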
Non-linear systems

We still have the problem of computing the solution of the differential equation:
• Numerical methods
• Analytical methods

Numerical integration of a function specified by samples f(t0), f(t1), …, f(tn):
• Rectangle rule (constant value in each interval)
• Trapezoidal rule (linear interpolation in each interval):

   ∫_{t0}^{tn} f(t) dt ≈ ( f(t0) + 2 Σ_{i=1}^{n−1} f(ti) + f(tn) ) · h/2

• Simpson rule (quadratic interpolation in each interval); n even, i.e. an odd number of samples, is required.

[Figure: the function used in the numerical comparison below, plotted over the interval of integration.]

   h        Rectangle rule      Trapezoidal rule    Simpson rule
   0.5      3.00236726015005    2.85542094707693    2.87929762575957
   0.25     2.94669069425499    2.87321753771844    2.87914973459894
   0.1      2.90758196912244    2.87819270650781    2.87914021733018
   0.0001   2.87916935623558    2.87913996697296    2.87913996792015

Runge-Kutta methods

By substituting the derivative with the incremental ratio related to the discrete time step h one obtains

   x(ti+1) ≈ x(ti) + h f(x(ti), ti)

These are the first two elements of the Taylor series expansion of the function x(ti+1). The "Runge-Kutta methods" approximate the Taylor series expansion by substituting the higher-order derivative terms with a proper linear combination of the first-order derivative computed at points internal to the time step. As an example, the third-order Runge-Kutta method is based on a relationship of the form

   x(ti+1) = x(ti) + h (a1 k1 + a2 k2 + a3 k3)

where k1, k2, k3 are values of f(x, t) evaluated inside the interval [ti, ti+1] (see the sketch below).
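A minimal Matlab implementation of a third-order Runge-Kutta integrator; the specific coefficients used in the slides are not reported in the text, so the classical Kutta third-order choice is adopted here as an example:

   % Third-order Runge-Kutta integration of dx/dt = f(x,t) -- a minimal sketch.
   % The coefficients below are the classical Kutta third-order choice.
   f  = @(x, t) -x + sin(t);         % example scalar dynamics (illustrative only)
   h  = 0.01;                        % integration step
   t  = 0:h:10;
   x  = zeros(size(t));
   x(1) = 1;                         % initial condition x(t0) = x0
   for i = 1:length(t)-1
       k1 = f(x(i),                 t(i));
       k2 = f(x(i) + h/2*k1,        t(i) + h/2);
       k3 = f(x(i) - h*k1 + 2*h*k2, t(i) + h);
       x(i+1) = x(i) + h/6*(k1 + 4*k2 + k3);   % weighted combination of slopes
   end
   plot(t, x), grid on, xlabel('t'), ylabel('x(t)')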
Linear time-varying systems

Let us consider a linear time-varying system

   ẋ(t) = A(t) x(t) + B(t) u(t),     y(t) = C(t) x(t) + D(t) u(t)

Since it is linear, the decomposition property holds for both the motion and the response functions: the motion is the sum of the free motion and of the forced motion, and the response is the sum of the free response and of the forced response.

Since the motion function φ(·) is linear in x0, it follows that the free motion can be written as x(t) = Φ(t, t0) x0, where Φ(·,·) is the so-called state transition matrix. Obviously, this matrix must give the solution of the homogeneous differential equation.

Let us consider the homogeneous time-varying linear system

   ẋ(t) = A(t) x(t)        (1)

where x(t) is an n × 1 vector and A(t) is an n × n matrix whose elements are piecewise continuous functions. Let us define the state transition matrix Φ(t, t0) as the (unique) solution of the matrix differential equation

   Ẋ(t) = A(t) X(t),     X(t0) = I

where X(t) is a real n × n matrix and I is the n × n identity matrix. The columns of Φ(t, t0) are the n solutions obtained by assigning as initial conditions x(t0) = ei, ei being the i-th column of the identity matrix.

D.23  Let Φi(t, t0) (i = 1, …, n) be the n solutions of (1) with initial conditions xi(t0) = ei, where ei is the i-th column of the identity matrix. The matrix Φ(t, t0) having the functions Φi(t, t0) as columns is called the state transition matrix of system (1).

The solution of (1) is x(t) = Φ(t, t0) x0.

Properties of the state transition matrix
• Non-singularity: Φ(t, t0) is non-singular for each pair (t, t0) – a consequence of the (existence and) uniqueness of the solution of (1)
• Inversion: Φ(t, t0) = Φ⁻¹(t0, t)
• Composition: Φ(t, t0) = Φ(t, t1) Φ(t1, t0)
• Separation: Φ(t, t0) = Θ(t) Θ⁻¹(t0)
• Time evolution of the determinant: det Φ(t, t0) = exp( ∫_{t0}^{t} tr A(τ) dτ )

Computation of the state transition matrix
The transition matrix can be computed in two ways:
• Peano-Baker succession
• Runge-Kutta methods: based on the numerical integration of matrix differential equations of the type Ẋ(t) = A(t) X(t), X(t0) = I, with a specified integration step δt

Matrix functions
Given an m × n matrix A(t) whose elements are functions of time, the time-derivative and the time-integral of A(t) are the m × n matrices whose elements are, respectively, the derivatives and the integrals of the elements aij(t).

Linear time-varying systems (non-homogeneous)
Let us consider the non-homogeneous, time-varying linear system

   ẋ(t) = A(t) x(t) + B(t) u(t),     x(t0) = x0        (2)

Theorem: the solution of (2), given an input function u(t) piecewise continuous in [t0, t1] and initial conditions x0, t0, is

   x(t) = Φ(t, t0) x0 + ∫_{t0}^{t} Φ(t, τ) B(τ) u(τ) dτ        (3)

where the first term is the free motion and the second one the forced motion.

Proof (1). By differentiating (3) and using the differentiation rule of an integral function (Leibniz rule, with constant lower limit)

   d/dt ∫_{t0}^{θ(t)} f(x, t) dx = f(θ(t), t) θ̇(t) + ∫_{t0}^{θ(t)} ∂f(x, t)/∂t dx

one obtains

   ẋ(t) = dΦ(t, t0)/dt x0 + Φ(t, t) B(t) u(t) + ∫_{t0}^{t} dΦ(t, τ)/dt B(τ) u(τ) dτ
        = A(t) Φ(t, t0) x0 + B(t) u(t) + A(t) ∫_{t0}^{t} Φ(t, τ) B(τ) u(τ) dτ
        = A(t) [ Φ(t, t0) x0 + ∫_{t0}^{t} Φ(t, τ) B(τ) u(τ) dτ ] + B(t) u(t)
        = A(t) x(t) + B(t) u(t)

where Φ(t, t) = I and dΦ(t, τ)/dt = A(t) Φ(t, τ) have been used; the latter follows from the free motion x(t) = Φ(t, t0) x0, for which ẋ(t) = A(t) x(t) = A(t) Φ(t, t0) x0 and, at the same time, ẋ(t) = dΦ(t, t0)/dt x0.

Proof (2). Given a generic non-singular matrix X(t), by differentiating the equality X⁻¹(t) X(t) = I one obtains

   dX⁻¹(t)/dt = −X⁻¹(t) Ẋ(t) X⁻¹(t)

Then, by differentiating Φ⁻¹(t, t0) x(t), using dΦ(t, t0)/dt = A(t) Φ(t, t0) and (2), one obtains

   d/dt [ Φ⁻¹(t, t0) x(t) ] = Φ⁻¹(t, t0) B(t) u(t)

and, by integration,

   Φ⁻¹(t, t0) x(t) = c + ∫_{t0}^{t} Φ⁻¹(τ, t0) B(τ) u(τ) dτ

where c is a constant vector depending on the initial conditions. By exploiting the composition and inversion properties of Φ the proof is concluded.

The previous equations are used for analysing MIMO time-varying linear systems. [Block diagram as before, with time-varying B(t), C(t), D(t).] The response function, defining the output of the system, is

   y(t) = C(t) Φ(t, t0) x0 + ∫_{t0}^{t} C(t) Φ(t, τ) B(τ) u(τ) dτ + D(t) u(t)        (4)

The integrals in (3) and (4) are convolution integrals, with kernels Φ(t, τ) B(τ) and W(t, τ) = C(t) Φ(t, τ) B(τ) + D(t) δ(t − τ), δ(t) being the Dirac impulse at t = 0. These kernel functions are respectively known as
• impulse state response of the system
• impulse response of the system
and define the effect at time t, on the state or on the output, of an impulse applied at time τ.
• The state x(t) is a continuous function.
• If the system is not purely dynamic, i.e. if D(t) ≠ 0, the impulse directly affects the output, and therefore y(t) is piecewise continuous.
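With reference to the numerical computation of the state transition matrix mentioned above (the Runge-Kutta route), a minimal Matlab sketch that integrates the matrix differential equation Ẋ = A(t) X, X(t0) = I, for an arbitrarily chosen A(t), and checks the determinant property:

   % Numerical computation of the state transition matrix Phi(t, t0) by
   % integrating dX/dt = A(t) X, X(t0) = I (a sketch; the time-varying
   % matrix A(t) below is an arbitrary illustrative choice).
   n   = 2;
   A   = @(t) [0, 1; -2-sin(t), -1];           % assumed A(t)
   t0  = 0;  tf = 5;
   odefun = @(t, Xv) reshape(A(t) * reshape(Xv, n, n), [], 1);  % vectorised matrix ODE
   [t, Xv] = ode45(odefun, [t0 tf], reshape(eye(n), [], 1));
   Phi_tf_t0 = reshape(Xv(end, :), n, n);      % Phi(tf, t0)
   % check of the determinant property: det Phi(t,t0) = exp( int tr A(tau) dtau )
   detPhi    = det(Phi_tf_t0)
   detJacobi = exp(integral(@(s) arrayfun(@(ss) trace(A(ss)), s), t0, tf))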
Dirac impulse

Consider the rectangular pulse Δ(τ, t0, t), equal to 1/τ for t ∈ [t0, t0 + τ] and zero elsewhere. This piecewise continuous function, as τ tends to zero, becomes a signal with infinite amplitude and unitary area: the Dirac impulse, indicated as δ(t − t0), or δ(t) if t0 = 0. We also define

   ∫_{t1}^{t2} δ(t − t0) dt := lim_{τ→0} ∫_{t1}^{t2} Δ(τ, t0, t) dt = 1

and, for any continuous function of time f(·),

   ∫_{t1}^{t2} f(t) δ(t − t0) dt := lim_{τ→0} ∫_{t1}^{t2} f(t) Δ(τ, t0, t) dt = f(t0)

Let us now consider a purely dynamic system, i.e. D(t) = 0, with zero initial state. If a Dirac impulse is applied to the i-th input, and all the other inputs are null, then

   y_i(t) = ∫_{t0}^{t} W(t, τ) ei δ(τ − t0) dτ = W(t, t0) ei,      i = 1, …, p

where ei is the i-th column of the identity matrix Ip. Hence each single column of W(t, t0) represents the system response to a Dirac impulse applied at t0 to the corresponding single input.

If a Dirac impulse is applied at the input at time τ, then one obtains x(t) = Φ(t, τ). In fact, since x(t) = Φ(t, t0) x(t0), an impulsive input is equivalent, from the point of view of the state time evolution, to an initial state (for continuous-time systems).

In the case of discrete-time systems, the Dirac impulse δ(k − h) is the signal equal to 1 for k = h and to 0 otherwise:

   δ(k − h) = 1 if k = h,   0 otherwise

For discrete-time systems, an impulsive input is not equivalent to an initial state (its effect is delayed by one step).

Linear time-invariant (LTI) systems

Let us consider a time-invariant system, expressed in the continuous-time domain as

   ẋ(t) = A x(t) + B u(t)
   y(t) = C x(t) + D u(t)

and in the discrete-time one as

   x(k+1) = Ad x(k) + Bd u(k)
   y(k)   = Cd x(k) + Dd u(k)

Linear time-invariant homogeneous systems

The transition matrix in this case is the exponential of a matrix, i.e. Φ(t, t0) = e^{A(t − t0)}. It may be computed through the power series expansion (obtained from the Peano-Baker succession with a constant matrix A)

   e^{At} = Σ_{k=0}^{∞} A^k t^k / k!

which converges in norm for any t. As a matter of fact, given m = ‖A‖ (and then ‖A^k‖ ≤ m^k), one obtains

   ‖e^{At}‖ ≤ Σ_{k=0}^{∞} ‖A‖^k |t|^k / k! = e^{m|t|} < ∞

Notice that the matrix exponential is not, in general, the matrix of the exponentials of the single elements: the two coincide iff matrix A is diagonal, while in general e^{At} ≠ [ e^{aij t} ].
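A minimal Matlab sketch comparing the built-in matrix exponential, a truncated power series, and the element-wise exponential (the matrix A below is an arbitrary illustrative choice):

   % Matrix exponential e^{At}: Matlab's expm versus a truncated power series,
   % and versus the element-wise exponential (a sketch with an arbitrary A).
   A = [0 1; -2 -3];              % illustrative matrix
   t = 0.7;
   N = 30;                        % truncation order of the series
   E = zeros(size(A)); T = eye(size(A,1));
   for k = 0:N
       E = E + T;                 % add the term (A t)^k / k!
       T = T * (A*t) / (k+1);     % next term of the series
   end
   E_series = E
   E_expm   = expm(A*t)           % built-in matrix exponential
   E_elem   = exp(A*t)            % element-wise exponential: different unless A is diagonal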
Why a matrix exponential for the solution?

Let us consider a pair of first-order differential equations. In the scalar case, e.g. u̇(t) = a u(t), the solution is the function u(t) = u0 e^{a t}, and it is well known that its behaviour as t goes to infinity depends on the sign and value of a (or of its real part). We would like to have exponential solutions also in the matrix case, i.e. solutions of the form x(t) = x e^{λ t} with x a constant vector. If this is the case, by substitution in the differential equation ẋ(t) = A x(t) one obtains

   λ x e^{λ t} = A x e^{λ t}    ⟹    (λ I − A) x = 0

Eigenvalues and eigenvectors

We have seen that, starting from differential equations, we have arrived for their solution at a matrix algebraic equation, (λ I − A) x = 0. In order to solve this problem it is therefore necessary that:
1. the vector x belongs to the null space of (λ I − A);
2. the parameter λ is chosen so that (λ I − A) has a non-trivial null space.
Therefore the parameter λ must satisfy det(λ I − A) = 0; λ is an eigenvalue and x an eigenvector of A.

Therefore, the solution of the original problem is given by functions such as

   x(t) = c1 e^{λ1 t} x1 + c2 e^{λ2 t} x2

where c1 and c2 are proper constants to be determined on the basis of the initial conditions; in the considered example (c1, c2) = (3, 1). It is possible to verify that this function satisfies the original differential equations.

Given an n × n (real or complex) matrix A, let us consider the equation

   A x = λ x        (1)

This equation admits nontrivial solutions x ≠ 0 if and only if (λ I − A) is singular, that is

   det(λ I − A) = 0        (2)

• The left-hand side is a polynomial q(λ) of order n, called the characteristic polynomial of A. If A is real, its coefficients are real as well.
• Eq. (2) is called the characteristic equation of A and admits n roots λ1, …, λn, in general complex, called eigenvalues or characteristic values of A.
• If A is real, the complex eigenvalues are conjugate in pairs.
• The set σ(A) = {λ1, …, λn} of all the eigenvalues of A is called the spectrum of A.
• Cayley-Hamilton theorem: every matrix satisfies its characteristic equation, i.e. q(A) = 0.

[Worked example: computation of the characteristic polynomial and of the eigenvalues of a given matrix.]

• For every eigenvalue λi, at least one corresponding vector xi (complex or real) exists such that eq. (1) is satisfied. This vector is called an eigenvector, or characteristic vector, of A.
• If xi is an eigenvector, α xi is an eigenvector as well. Therefore it is possible to use normalized eigenvectors (with unitary norm, i.e. choosing α = 1/‖xi‖).
• If A is real, the eigenvectors corresponding to complex conjugate eigenvalues are complex conjugate as well.
• If the eigenvalues of matrix A are distinct, the corresponding eigenvectors are linearly independent.

[Figure: eigenvectors of an example matrix plotted in the plane.]
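A minimal Matlab check of the characteristic polynomial, the eigenvalues and the Cayley-Hamilton theorem, on an arbitrary example matrix:

   % Characteristic polynomial, eigenvalues and a numerical check of the
   % Cayley-Hamilton theorem (a sketch; A is an arbitrary example matrix).
   A  = [0 1; -2 -3];          % illustrative matrix
   p  = poly(A)                % coefficients of det(lambda*I - A)
   lambda = eig(A)             % eigenvalues = roots of the characteristic polynomial
   roots(p)                    % same values, computed from the polynomial
   q_of_A = polyvalm(p, A)     % Cayley-Hamilton: q(A) is (numerically) zero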
Example: [V, D] = eig(A) in Matlab returns in V the (normalized) eigenvectors of A as columns and in D a diagonal matrix with the corresponding eigenvalues.

Example: [V, D] = eig(A). Being v = [0, 1]ᵀ an eigenvector, this "direction" is not modified by the multiplication by A.
[Figure: the eigenvector direction in the plane, unchanged by the multiplication by A.]

Similar matrices

• Two matrices A and B are similar if there exists a non-singular matrix T such that B = T⁻¹ A T.
• Similar matrices have the same eigenvalues. In fact, from B = T⁻¹ A T with T non-singular, it follows that

   det(λ I − B) = det( T⁻¹ (λ I − A) T ) = det(λ I − A)

  Therefore det(λ I − A) = 0 iff det(λ I − B) = 0.
• The similarity property allows one to solve in a simple manner an LTI homogeneous system ẋ(t) = A x(t). As a matter of fact, by defining x = T z, z = T⁻¹ x, we have ż(t) = T⁻¹ A T z(t) = B z(t).

• An important case is the transformation matrix T that transforms a generic square matrix A into the Jordan form, very convenient for the analysis of dynamic systems.
• In fact, given a generic matrix A, if it is possible to determine a similar matrix B such that the corresponding matrix exponential e^{Bt} is simple to compute, then one obtains e^{At} = T e^{Bt} T⁻¹.
• If A is symmetric, the problem of finding B is quite easy, since A has n linearly independent eigenvectors; from the equations A ti = λi ti (eigenvectors), by defining T = [t1, …, tn] one obtains A T = T Λ, Λ being a diagonal matrix with the eigenvalues on the diagonal. Therefore B = Λ = T⁻¹ A T and e^{At} = T e^{Λ t} T⁻¹: the elements of the exponential of a real symmetric matrix are linear combinations of the exponentials of its eigenvalues (which are real numbers).

Example: given a matrix A, a diagonal similar matrix B and the corresponding transformation T, it is simple to compute the solution z(t) from z0, and then the solution x(t) of the original problem (being z = T⁻¹ x, then z0 = T⁻¹ x0).

In practice, given the system ẋ(t) = A x(t), if it is possible to determine a diagonal matrix B similar to A,

   B = T⁻¹ A T      (and therefore A = T B T⁻¹)

then

   ẋ(t) = A x(t) = T B T⁻¹ x(t)
   T⁻¹ ẋ(t) = B T⁻¹ x(t)    ⟹    ż(t) = B z(t)
   z(t) = e^{Bt} z0
   x(t) = T e^{Bt} T⁻¹ x0

Complex Jordan form

General case: a generic n × n real matrix A has, in general, m ≤ n distinct complex eigenvalues λ1, …, λm. The complex Jordan form J = T⁻¹ A T is a block-diagonal matrix whose blocks Ji,j, the Jordan blocks corresponding to the eigenvalue λi, are square upper-bidiagonal matrices with λi on the main diagonal and 1 on the first superdiagonal.
• Counting multiplicity, the eigenvalues of J, and therefore of A, are its diagonal entries.
• Given an eigenvalue λi, its geometric multiplicity is the dimension of Ker(A − λi I), and it is the number of Jordan blocks corresponding to λi.
• The sum of the sizes of all the Jordan blocks corresponding to an eigenvalue λi is its algebraic multiplicity.
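A minimal Matlab sketch of the similarity route to e^{At}: the matrix is diagonalized with eig and the result T e^{Λt} T⁻¹ is compared with expm (A below is an arbitrary diagonalizable example):

   % e^{At} through a similarity transformation (a sketch with an arbitrary
   % diagonalizable matrix, not the one used in the slides).
   A  = [0 1; -2 -3];                 % illustrative matrix with distinct eigenvalues
   t  = 1.5;
   [T, L] = eig(A);                   % A*T = T*L, L diagonal
   E_modal = T * diag(exp(diag(L)*t)) / T    % T e^{Lt} T^{-1}
   E_expm  = expm(A*t)                        % direct computation, same result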
Real Jordan form

• The complex Jordan form can be expressed by real numbers, considering that complex elements in the blocks and in T are associated with complex conjugate elements.
• Let us define:
  – λ1, λ2, …, λh the real eigenvalues of A and µ1, µ2, …, µh their multiplicities;
  – σ1 ± jω1, σ2 ± jω2, …, σk ± jωk the complex eigenvalues of A and ν1, ν2, …, νk their multiplicities.
  It results n = µ1 + µ2 + … + µh + 2 (ν1 + ν2 + … + νk).
• Real Jordan form: a block-diagonal matrix in which R1, …, Rh are (real) blocks corresponding to the real eigenvalues, and C1, …, Ck are real blocks, each one corresponding to a pair of complex conjugate blocks of the complex Jordan form.

• The blocks corresponding to the real eigenvalues λi are the usual Jordan blocks, with λi on the main diagonal and 1 on the first superdiagonal.
• The blocks corresponding to the complex conjugate eigenvalues σi ± jωi are built from the 2 × 2 sub-blocks
      [  σi   ωi ]
      [ −ωi   σi ]
  on the main diagonal, with 2 × 2 identity sub-blocks in place of the 1's on the superdiagonal.

In conclusion, in the expression of e^{At} one can find terms such as

   e^{λi t},  t e^{λi t},  …,  t^{µi−1} e^{λi t}

for the real eigenvalues, and

   t^j e^{σi t} cos(ωi t),   t^j e^{σi t} sin(ωi t),   j = 0, …, νi − 1

for the complex ones.

Solution of the differential equation

[Figure: state trajectory x(t) in the state space, starting from x(0), with the set of admissible velocities.]

Summarizing, for the computation of the time evolution of the state x(t):
• Non-linear systems: in general, numerical methods
• Linear time-varying systems: Φ(t, t0) (Peano-Baker succession)
• Linear time-invariant systems: e^{At} (Jordan form)

Example 1

Problem: determine the solution x(t) of the dynamic system described by the assigned state-space model.

• Computation of the eigenvalues. The eigenvalues λ1 and λ2 of matrix A are computed by equating to zero the determinant of the matrix (λ I − A); in this case λ1 = −1 and λ2 = −2. This result could also have been obtained by noticing that matrix A is lower triangular, and therefore its eigenvalues are the elements of its diagonal.

• Computation of the eigenvectors. Let t be the generic eigenvector associated with the eigenvalue λ; the equation (λ I − A) t = 0 must hold. An eigenvector is then computed for λ = −1 and one for λ = −2.

• Therefore, collecting the eigenvectors in T, one obtains the diagonal matrix B = T⁻¹ A T (check by direct computation). NB: it is not necessary to compute matrix B in this manner… (the eigenvalues are already known!). From x = T z one obtains z = T⁻¹ x, and therefore the initial state is z0 = T⁻¹ x0.

• The solution of the new system (in z) is computed component-wise from the decoupled (diagonal) dynamics, and then x(t) = T z(t).

• Simulation in Matlab/Simulink. [Simulink scheme: a Step input drives two State-Space blocks, one in the original coordinates (output x) and one in the diagonal coordinates (output z); a Matrix Gain block reconstructs T z; Clock and To Workspace blocks log t, x, z and T z.]

[Figure: plots of x(t), of z(t) and of T z(t): the time evolution of [x1; x2] coincides with that of T·[z1; z2].]
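A Matlab sketch of the Example-1 workflow. Since the numerical data of the example are not reproduced in the text, A and x0 below are hypothetical stand-ins (A lower triangular, with the same eigenvalues −1 and −2):

   % Example-1 style computation (a sketch: the slide's numerical data are not
   % reproduced in the text, so A and x0 below are hypothetical stand-ins).
   A  = [-1 0; 1 -2];                 % hypothetical lower-triangular matrix
   x0 = [1; 1];                       % hypothetical initial state
   [T, B] = eig(A);                   % eigenvectors in T, diagonal B = T^{-1} A T
   z0 = T \ x0;                       % initial state in the new coordinates, z0 = T^{-1} x0
   t  = 0:0.01:10;
   z  = [ z0(1)*exp(B(1,1)*t);        % decoupled (modal) free evolutions
          z0(2)*exp(B(2,2)*t) ];
   x  = T * z;                        % solution in the original coordinates, x = T z
   % cross-check: x(t) = e^{At} x0 computed with the matrix exponential
   xchk = zeros(2, numel(t));
   for k = 1:numel(t), xchk(:,k) = expm(A*t(k)) * x0; end
   plot(t, x, t, xchk, '--'), grid on
   legend('x_1','x_2','e^{At}x_0 (1)','e^{At}x_0 (2)')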
Example 2

Problem: determine the solution x(t) of the dynamic system described by the assigned third-order state-space model.

• Computation of the eigenvalues. The eigenvalues λ1, λ2 and λ3 of matrix A are computed by equating to zero the determinant of the matrix (λ I − A); in this case λ1 = −1, λ2 = −2 and λ3 = −3.

• Computation of the eigenvectors. Let t be the generic eigenvector associated with the eigenvalue λ; then (λ I − A) t = 0, and an eigenvector is computed for each of λ = −1, λ = −2, λ = −3.

• Therefore, collecting the eigenvectors in T, one obtains the diagonal matrix B = T⁻¹ A T (check by direct computation), and from z = T⁻¹ x the initial state results z0 = T⁻¹ x0.

• The solution of the new system (in z) is computed component-wise from the decoupled dynamics, and then x(t) = T z(t).

• Simulation in Matlab/Simulink. [Simulink scheme analogous to Example 1: two State-Space blocks (original and diagonal coordinates), a Matrix Gain reconstructing T z, Clock and To Workspace blocks logging t, x, z and T z.]

[Figure: plots of x(t), of z(t) and of T z(t): the time evolution of [x1; x2; x3] coincides with that of T·[z1; z2; z3].]

Discrete-time systems

• Let us consider the time-varying linear discrete-time homogeneous system

   x(k+1) = A(k) x(k)        (1)

• As in the continuous-time case, let us define the state transition matrix Φ(k, h) as the matrix whose columns are the n solutions corresponding to the initial conditions x(h) = ei, ei being the i-th column of the identity matrix. The transition matrix satisfies

   Φ(k, h) = A(k−1) A(k−2) ⋯ A(h),      Φ(h, h) = I,      k > h

• For a discrete-time system, the transition matrix can be singular (in the continuous-time case it is always non-singular).

For time-invariant systems, eq. (1) becomes x(k+1) = Ad x(k) and the transition matrix is simply Φ(k, h) = Ad^{k−h}.

If time-invariant non-homogeneous linear systems are considered,

   x(k+1) = Ad x(k) + Bd u(k),      y(k) = Cd x(k) + Dd u(k)

one obtains

   x(k) = Ad^k x(0) + Σ_{h=0}^{k−1} Ad^{k−h−1} Bd u(h)

The proof is straightforward (by direct substitution). Then, the output of the system results

   y(k) = Cd Ad^k x(0) + Σ_{h=0}^{k−1} Cd Ad^{k−h−1} Bd u(h) + Dd u(k)

The impulse state response and the impulse response of the system are, respectively, the effect at step k of an impulse applied at step h:

   Ad^{k−h−1} Bd  (k > h)  on the state,      Cd Ad^{k−h−1} Bd + Dd δ(k−h)  on the output.

Evolution of dynamic systems

In conclusion, considering the Jordan (diagonal) form of the state matrix, with distinct eigenvalues λ1, …, λn the system evolution is given by n linearly independent terms:
• continuous-time case: e^{λ1 t}, …, e^{λn t}
• discrete-time case: λ1^k, …, λn^k

Multiple eigenvalues – continuous-time systems. In general,

   e^{At} = blockdiag( e^{J1 t}, e^{J2 t}, …, e^{Jn t} )

where each block e^{Ji t} has an upper-triangular form (λi eigenvalue, µi multiplicity); the evolution depends on functions of the type

   e^{λi t},  t e^{λi t},  …,  t^{µi−1} e^{λi t}

Multiple eigenvalues – discrete-time systems. In general,

   Ad^k = blockdiag( J1^k, J2^k, …, Jn^k )

(λi eigenvalue, µi multiplicity); the evolution depends on functions of the type λi^k, k λi^{k−1}, …, up to terms containing λi^{k−µi+1}.

Stability

• Definition: the continuous-time homogeneous system ẋ(t) = A x(t) (1) is asymptotically stable if its free evolution tends to zero as t → ∞ for every initial state.
• Theorem: system (1) is asymptotically stable if and only if every eigenvalue of A has negative real part.
• Proof: based on the consideration that λ = ρ (real) or λ = σ + jω, and then |e^{λ t}| = e^{Re(λ) t} → 0 as t → ∞ iff Re(λ) < 0.
  NB: also the factors t^n e^{λ t} tend to 0 as t → ∞ if the real part of λ is negative.

• Definition: the discrete-time homogeneous system x(k+1) = Ad x(k) (2) is asymptotically stable if its free evolution tends to zero as k → ∞ for every initial state.
• Theorem: system (2) is asymptotically stable if and only if every eigenvalue of Ad has modulus smaller than 1.
• Proof: based on the consideration that λ = ρ (real) or λ = ρ e^{jϕ}, and then |λ^k| = ρ^k → 0 as k → ∞ iff ρ = |λ| < 1.
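A minimal Matlab sketch of the discrete-time evolution formula and of the stability condition on the eigenvalue moduli (Ad, Bd, x0 and the input are arbitrary illustrative choices):

   % Discrete-time LTI evolution x(k+1) = Ad x(k) + Bd u(k) and stability check
   % (a sketch; Ad, Bd, x0 and the input below are arbitrary illustrative choices).
   Ad = [0.5 0.2; 0 0.8];   Bd = [0; 1];
   x0 = [1; -1];  N = 40;
   u  = ones(1, N);                          % unit-step input
   x  = zeros(2, N+1);  x(:,1) = x0;
   for k = 1:N
       x(:,k+1) = Ad*x(:,k) + Bd*u(k);       % recursive state update
   end
   % closed-form check at step N:  x(N) = Ad^N x0 + sum_h Ad^(N-h-1) Bd u(h)
   xN = Ad^N * x0;
   for h = 0:N-1
       xN = xN + Ad^(N-h-1) * Bd * u(h+1);
   end
   disp([x(:,N+1), xN])                      % the two columns coincide
   abs(eig(Ad))                              % all moduli < 1: asymptotically stable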
Appendix – Vector spaces & Matrices
• Main definitions
• Some geometric properties

Some geometric properties
• {e1, e2, e3}: principal basis (columns of the identity matrix I)
• V: subspace (set of vectors with given properties)
• v1, v2: basis of the subspace V
• V = [v1, v2]: basis matrix of the subspace V
• Given the n × n matrix A, the subspace V is said to be invariant in A if A V ⊆ V.
  The sum and the intersection of two invariants is an invariant.
[Figure: the principal basis {e1, e2, e3} and a subspace V spanned by v1, v2.]

Change of basis
• Let us consider, instead of {e1, e2, e3}, a new basis {h1, h2, h3} with matrix T = [h1, h2, h3] non-singular.
• If the vector x contains the coordinates of a point p in the main basis, and z its coordinates in the new one, then x = T z, z = T⁻¹ x.
• In the new basis, the n × n matrix A corresponds to A1 = T⁻¹ A T.

• Given an m × n matrix A, the following subspaces can be defined:
   im{A} = {y : y = A x, x ∈ Rn}      column space (image) of A
   ker{A} = {x : A x = 0}             kernel (null space) of A

Pseudoinverse of a matrix
• Let us consider a linear system defined by A x = b. A necessary and sufficient condition for the existence of at least one solution is that

   b ∈ im A        (1)

• If (1) is verified, given a particular solution x0, the set of possible solutions is x = x0 + ker A or, in parametric form, x = x0 + K α, where
  – K is a basis matrix of ker{A} and
  – α is an arbitrary vector of Rq, with q = dim(ker{A}).
• If (1) is not verified, it is possible to compute the value of x so that the error is minimized in norm: ‖A x1 − b‖² minimum.
• If A is a square and non-singular matrix, then im{A} = Rn, ker{A} = {0}, and the solution exists, is unique and is given by x = A⁻¹ b.
[Figure: A maps Rn onto Rn = im A; A⁻¹ maps back.]

• If A is not square, or if it is singular, then its inverse does not exist and it is necessary to use the pseudoinverse A⁺. Given the m × n matrix A of full rank, rank A = min(m, n), there are two possibilities: 1) m < n; 2) m > n.

1) m < n:  rank A = m ➝ im(A) = Rm
   ∀ b ∈ im(A) ∃ x s.t. b = A x (more than one!), x = A⁺ b
   ker(A) ≠ {0}: ∀ xn ∈ ker(A) ➝ A xn = 0 ➝ A(A⁺ b + xn) = b
   ➝ x = A⁺ b + (I − A⁺ A) α is the general expression of the solution
   x = A⁺ b is the solution of minimum norm
   [Figure: A maps Rn onto Rm = im{A}, sending ker A to 0; A⁺ maps Rm back into Rn.]

2) m > n:  rank A = n
   ∀ x ∃ ! b s.t. b = A x;   ∀ b ∈ im(A) ∃ ! x s.t. b = A x (x = A⁺ b)
   if b0 ∉ im(A) ➝ ∄ x s.t. b0 = A x, but x0 = A⁺ b0 gives b = A x0 = A A⁺ b0 ≠ b0 with ‖b − b0‖ minimum (A A⁺ ≠ I)
   [Figure: A maps Rn into im{A} ⊂ Rm; A⁺ maps b0 to the least-squares solution.]

[Figure: geometric interpretation of the pseudoinverse: Rn decomposed into ker A and im Aᵀ, Rm decomposed into im A and ker Aᵀ.]
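A minimal Matlab sketch of the two pseudoinverse cases discussed above, with arbitrary illustrative matrices (pinv computes A⁺):

   % Minimum-norm and least-squares solutions of A x = b with the pseudoinverse
   % (a sketch; the matrices below are arbitrary illustrative choices).
   % Case m < n (underdetermined): infinitely many solutions
   A1 = [1 2 0; 0 1 1];        b1 = [1; 2];
   x_mn  = pinv(A1)*b1;                      % minimum-norm solution
   x_gen = pinv(A1)*b1 + (eye(3) - pinv(A1)*A1)*[3; 3; 3];  % general solution, arbitrary alpha
   disp([A1*x_mn, A1*x_gen])                 % both reproduce b1
   % Case m > n (overdetermined): least-squares approximation
   A2 = [1 0; 1 1; 1 2];       b0 = [0; 1; 3];
   x_ls = pinv(A2)*b0;                       % minimizes ||A2*x - b0||
   residual = norm(A2*x_ls - b0)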
Vector and matrix norms

• Norm: generalization of the concept of distance (length).
• Vector norm: the norm ‖·‖ of a vector x ∈ Rn is a function Rn → R such that:
   ‖x‖ ≥ 0, and ‖x‖ = 0 iff x = 0
   ‖α x‖ = |α| ‖x‖                        (positive homogeneity)
   ‖x + y‖ ≤ ‖x‖ + ‖y‖                    (triangle inequality)
• Common norms in Rn:
   ‖x‖1 = Σi |xi|                          (1-norm)
   ‖x‖2 = ( Σi xi² )^{1/2}                 (Euclidean norm, 2-norm)
   ‖x‖p = ( Σi |xi|^p )^{1/p}              (p-norm)
   ‖x‖∞ = maxi |xi|                        (infinity-norm, also called sup-norm: it gives the peak value)

• Example: given x = [1, −2, 2]ᵀ ➝ ‖x‖1 = 5, ‖x‖2 = 3, ‖x‖∞ = 2.
• Lemma: let ‖x‖a and ‖x‖b be two norms of x ∈ Rn. There exist (infinitely many) positive scalars k1, k2 such that

   k1 ‖x‖a ≤ ‖x‖b ≤ k2 ‖x‖a

  The norms ‖·‖a and ‖·‖b are then said to be equivalent. This property holds between any pair of norms in Rn.

• A vector x can be multiplied by a matrix A: y = A x. In order to relate the "dimension" of y to that of x, the matrix norm is defined as follows.
• Matrix norm (induced). Let ‖x‖ be a norm of x ∈ Rn. The matrix A ∈ Rn×n has the norm induced by ‖·‖ defined as

   ‖A‖ = sup_{‖x‖=1} ‖A x‖

  It follows that ‖A x‖ ≤ ‖A‖ ‖x‖ and ‖α A‖ = |α| ‖A‖.

• Some matrix norms:
   ‖A‖∞ = maxi Σj |aij|                    (maximum row sum)
   ‖A‖1 = maxj Σi |aij|                    (maximum column sum)
   ‖A‖F = ( Σi,j |aij|² )^{1/2}            (Frobenius norm)

Eigenvalues, Eigenvectors, Eigenspaces

Let A be an n × n square matrix in R.
Definition: the characteristic polynomial of A is the polynomial of order n, q(λ) = det(λ I − A).
Definition: the characteristic equation of A is the equation det(λ I − A) = 0.
Definition: the n solutions λ1, …, λn of the characteristic equation are the eigenvalues of A.

Given an n × n matrix A and an eigenvalue λi of this matrix, there are two indices measuring, roughly speaking, the number of eigenvectors belonging to λi:
• The algebraic multiplicity of an eigenvalue is defined as the multiplicity of the corresponding root of the characteristic polynomial.
• The geometric multiplicity of an eigenvalue is defined as the dimension of the associated eigenspace, i.e. the number of linearly independent eigenvectors with that eigenvalue.
• Both the algebraic and the geometric multiplicity are integers between 1 and n (inclusive).
• The algebraic multiplicity ni and the geometric multiplicity mi may or may not be equal, but we always have mi ≤ ni. The simplest case is of course mi = ni = 1.
• The total number of linearly independent eigenvectors, Nx, is given by summing the geometric multiplicities:

   Σ_{i=1}^{N} mi = Nx
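A minimal Matlab sketch of the vector and matrix norms listed above, with arbitrary data:

   % Vector and (induced) matrix norms in Matlab (a sketch with arbitrary data).
   x = [1; -2; 2];
   [norm(x,1), norm(x,2), norm(x,inf)]       % 5, 3, 2 as in the example above
   A = [1 2; -3 0];
   norm(A,1)        % maximum column sum
   norm(A,inf)      % maximum row sum
   norm(A,'fro')    % Frobenius norm
   % induced-norm inequality ||A x|| <= ||A|| ||x||, checked on a random vector
   v = randn(2,1);
   norm(A*v,2) <= norm(A,2)*norm(v,2)        % returns logical 1 (true)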
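A minimal Matlab check of algebraic versus geometric multiplicity, on an illustrative matrix containing a Jordan block (not a matrix from the slides):

   % Algebraic vs geometric multiplicity (a sketch with an illustrative matrix).
   A = [2 1 0; 0 2 0; 0 0 3];          % eigenvalue 2 carried by a 2x2 Jordan block
   lambda = 2;
   alg_mult = sum(abs(roots(poly(A)) - lambda) < 1e-8)   % algebraic multiplicity: 2
   geo_mult = size(null(A - lambda*eye(3)), 2)           % dim Ker(A - lambda I): 1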