Control theory

Kim Mathiassen
15.02.2011

Control theory

- Mass spring damper system
- Modeling
- Open loop vs. closed loop
- Second order system
- Stability
- PID control
  - P - Proportional
  - I - Integral
  - D - Derivative
- Optimal control
  - LQR

Mass spring damper system

(Figure of a mass spring damper system, from Wikimedia Commons.)

x = displacement [m]
u = applied force [kg·m/s^2]
m = mass of the block [kg]
f = damping constant [kg/s]
k = spring constant [kg/s^2]

Mass spring damper system

Using Newton's second law, Σ fi = ma. We have three forces:

- Spring force: f1 = −kx
- Damping force: f2 = −f·(δx/δt) = −f·ẋ
- External force: f3 = u

This gives the equation

mẍ = −kx − f·ẋ + u

Differential equation for the mass spring damper system:

ẍ + (f/m)·ẋ + (k/m)·x = (1/m)·u

Modeling domains

Frequency domain (transfer functions):

x(s) = h(s)·u(s),    h(s) = (1/m) / (s^2 + (f/m)·s + k/m)

State space domain, ẋ = Ax + Bu:

ẋ1 = x2
ẋ2 = −(k/m)·x1 − (f/m)·x2 + (1/m)·u

Block diagrams

(Block diagram of the state-space model: u enters a summing junction and is scaled by 1/m to give ẋ2, which is integrated to x2 = ẋ1 and integrated again to x1; x2 and x1 are fed back to the summing junction through the gains f and k.)

SISO and MIMO

Single-Input Single-Output (SISO): the system has one input u and one output x.
Multiple-Input Multiple-Output (MIMO): the system has multiple inputs u and multiple outputs x.
Single-Input Multiple-Output (SIMO): can be regarded as several SISO systems.
Multiple-Input Single-Output (MISO): can be regarded as several SISO systems.

Open loop vs. closed loop

Open-loop: r → Controller → u → Process → x.
Closed-loop: the error e between the reference r and the measurement y drives the Controller, which produces the input u to the Process; the output x is measured and fed back as y.

Second order systems

H(s) = (1/m) / (s^2 + (f/m)·s + k/m) = (1/m) / ((s − λ1)(s − λ2))

Solution: the general solution gives three cases, depending on the pole placement. The three cases are called under-damped, over-damped and critically damped.

λ1,2 = −(f/2m)·(1 ± √(1 − 4km/f^2))    (1)

Second order systems

Damping ratio:

ζ = −(λ1 + λ2) / (2·√(λ1·λ2))

- Over-damped, ζ > 1 (λ1 and λ2 real and distinct): slow system response.
- Critically damped, ζ = 1 (λ1 = λ2): fastest system response without oscillations.
- Under-damped, ζ < 1 (λ1 and λ2 complex conjugates): fast system response, but with oscillations.

(Numerical sketches of the poles, the damping ratio and these three response types are given after the Stability section below.)

Second order system response

(Figure of second order step responses for different damping ratios, from Wikimedia Commons.)

Stability

Consider the system y(s) = h(s)·y0(s), where the input y0(s) has finite duration and amplitude.

Asymptotically stable: the system is asymptotically stable if y → 0 when t → ∞.
Marginally stable: the system is marginally stable if |y| < ∞ for all t ≥ 0.
Unstable: if the system is not stable, it is unstable.
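To make the state-space model and the pole formula above concrete, here is a minimal Python sketch that is not part of the original slides. It builds A and B for the mass spring damper system, computes the poles as the eigenvalues of A, compares them with equation (1), and evaluates the damping ratio ζ. The numerical values of m, f and k are illustrative assumptions.

```python
# Minimal sketch: state-space matrices, poles and damping ratio of the
# mass spring damper system.  Parameter values are illustrative assumptions.
import numpy as np

m = 1.0   # mass [kg]
f = 0.5   # damping constant [kg/s]
k = 4.0   # spring constant [kg/s^2]

# State vector (x1, x2) = (position, velocity):  xdot = A x + B u
A = np.array([[0.0,    1.0],
              [-k / m, -f / m]])
B = np.array([[0.0],
              [1.0 / m]])

# Poles of the system = eigenvalues of A
lam1, lam2 = np.linalg.eigvals(A)

# The closed-form expression (1): lam_{1,2} = -(f/2m)(1 +/- sqrt(1 - 4km/f^2))
lam_formula = -(f / (2 * m)) * (1 + np.array([1, -1]) * np.sqrt(1 - 4 * k * m / f**2 + 0j))

# Damping ratio as defined on the damping-ratio slide
zeta = -(lam1 + lam2) / (2 * np.sqrt(lam1 * lam2))

print("poles (eigenvalues of A):", lam1, lam2)
print("poles (formula (1)):     ", lam_formula)
print("damping ratio zeta:      ", zeta.real)
```

With these particular values ζ < 1, so the system is under-damped and the poles form a complex-conjugate pair.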
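The step-response figure referenced above is not reproduced here. As a rough substitute, the sketch below (again an illustration, not material from the slides) integrates the mass spring damper equation with a simple forward-Euler scheme for three damping constants chosen to give under-damped, critically damped and over-damped behaviour, and prints the peak and final displacement of each response.

```python
# Minimal sketch: unit-step responses of the mass spring damper for the three
# damping cases.  Parameter values and the Euler integration are illustrative.
import numpy as np

def step_response(m, f, k, u=1.0, dt=1e-3, t_end=20.0):
    """Integrate m*xddot + f*xdot + k*x = u from rest with forward Euler."""
    n = int(t_end / dt)
    x, v = 0.0, 0.0
    xs = np.empty(n)
    for i in range(n):
        a = (u - f * v - k * x) / m      # acceleration from the model equation
        x += dt * v
        v += dt * a
        xs[i] = x
    return xs

m, k = 1.0, 4.0
f_crit = 2.0 * np.sqrt(k * m)            # damping constant giving zeta = 1

for label, f in [("under-damped", 0.25 * f_crit),
                 ("critically damped", f_crit),
                 ("over-damped", 4.0 * f_crit)]:
    xs = step_response(m, f, k)
    # Steady-state displacement is u/k; only the under-damped case overshoots it.
    print(f"{label:17s}  f = {f:5.2f}  peak x = {xs.max():.3f}  final x = {xs[-1]:.3f}")
```

The printed peaks show the overshoot of the under-damped case, matching the qualitative description on the damping-ratio slide.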
PID control

We want to make the system stable and controllable with a controller. The PID controller is a simple controller that may achieve this goal. The PID controller is often analysed in the frequency domain.

PID controller:

u = Kp·e + Ki·∫ e(τ) dτ + Kd·ė

(A discrete-time implementation sketch is given after the references.)

Proportional

- A pure proportional controller will have a steady-state error.
- Adding an integral term will remove this bias.
- A high gain (Kp) will produce a fast system.
- A high gain may cause oscillations and may make the system unstable.
- A high gain reduces the steady-state error.

(Figure illustrating the effect of Kp, from Wikimedia Commons.)

Integral

- Removes the steady-state error.
- Increasing Ki accelerates the controller.
- A high Ki may give oscillations.
- Increasing Ki will increase the settling time.

(Figure illustrating the effect of Ki, from Wikimedia Commons.)

Derivative

- A larger Kd decreases oscillations.
- Improves stability for low values of Kd.
- May be highly sensitive to noise if one takes the derivative of a noisy error.
- High noise levels can lead to instability.

(Figure illustrating the effect of Kd, from Wikimedia Commons.)

PIDstop

PID games: http://www.pidstop.com/demo (K1 = -110, K2 = 0.728).
(Screenshots from http://www.pidstop.com/demo.)

Optimal control

- Optimal control is a different control approach from PID.
- The idea is to specify a cost function and then find the optimal input.
- The dynamics of the system are used to design the controller.
- For non-linear systems it is not always possible to find the optimal solution.
- A special case is linear systems with a quadratic cost function.
- The optimal controller must have all states as input.
- It is most often used with an observer to estimate the states that are not measured.

(Block diagram: the Controller computes the input u from the reference r and the Observer's estimates; the Process output x is measured as y, which the Observer uses to form its estimates ŷ and ê.)

Linear-quadratic regulator (LQR)

- The feedback is given as u = G1·x + G2·r.
- r is the reference function.
- The matrices G1 and G2 are found from the system dynamics and the cost function using Pontryagin's maximum principle.
- When following a trajectory, the function r(t) must be known for all future timesteps in order to find the optimal solution.

Cost function:

J = (1/2) ∫_t^∞ (e^T·Q·e + u^T·P·u) dt

(A numerical sketch of the gain computation is given after the references.)

References

J. B. Balchen, T. Andresen, and B. A. Foss. Reguleringsteknikk. Institutt for teknisk kybernetikk, 2004.
PID controller. http://en.wikipedia.org/wiki/pid_controller, February 2011.
Damping. http://en.wikipedia.org/wiki/damping, February 2011.
O. A. Solheim and Norges tekniske høgskole, Institutt for teknisk kybernetikk. Optimalregulering. Tapir, 1976.
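As a closing illustration of the PID law u = Kp·e + Ki·∫e(τ)dτ + Kd·ė, here is a minimal discrete-time sketch regulating the mass spring damper model from earlier to a constant reference. The gains, the plant parameters and the backward-difference approximation of ė are illustrative assumptions, not values from the slides.

```python
# Minimal sketch: discrete-time PID control of the mass spring damper.
# Plant parameters and PID gains are illustrative assumptions.
m, f, k = 1.0, 0.5, 4.0          # plant: m*xddot + f*xdot + k*x = u
Kp, Ki, Kd = 20.0, 15.0, 5.0     # PID gains, hand-picked for this example
r = 1.0                          # constant position reference
dt, t_end = 1e-3, 5.0

x, v = 0.0, 0.0                  # plant state: position and velocity
integral, e_prev = 0.0, r - x    # controller state

for _ in range(int(t_end / dt)):
    e = r - x                            # control error
    integral += e * dt                   # integral of the error
    derivative = (e - e_prev) / dt       # backward-difference estimate of edot
    u = Kp * e + Ki * integral + Kd * derivative
    e_prev = e

    a = (u - f * v - k * x) / m          # plant dynamics, forward Euler step
    x += dt * v
    v += dt * a

print(f"position after {t_end} s: {x:.3f}  (reference {r})")
```

The integral term drives the steady-state error to zero, as claimed on the Proportional and Integral slides.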
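The LQR slide states the feedback u = G1·x + G2·r without showing how the gains are obtained. The sketch below illustrates the regulation special case (e = x and r = 0, so only G1 is needed) for the mass spring damper, solving the continuous-time algebraic Riccati equation with SciPy. The weights Q and P are illustrative assumptions, and the expression G1 = −P^-1·B^T·S is the standard LQR solution rather than something stated on the slides.

```python
# Minimal sketch: LQR state-feedback gain for the mass spring damper
# (regulation to the origin, r = 0).  The weights Q and P are illustrative.
import numpy as np
from scipy.linalg import solve_continuous_are

m, f, k = 1.0, 0.5, 4.0
A = np.array([[0.0,    1.0],
              [-k / m, -f / m]])
B = np.array([[0.0],
              [1.0 / m]])

Q = np.diag([10.0, 1.0])   # state weights: penalise position error the most
P = np.array([[0.1]])      # input weight

# Solve A^T S + S A - S B P^-1 B^T S + Q = 0 for S
S = solve_continuous_are(A, B, Q, P)

# Standard LQR state feedback u = G1 x with G1 = -P^-1 B^T S
G1 = -np.linalg.solve(P, B.T @ S)

print("G1 =", G1)
print("closed-loop poles:", np.linalg.eigvals(A + B @ G1))
```

All closed-loop poles have negative real parts, so the regulated system is asymptotically stable in the sense of the Stability slide.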