
Compendium 2013

EL2620 Nonlinear Control
Lecture notes
Karl Henrik Johansson, Bo Wahlberg and Elling W. Jacobsen
This revision December 2012
Automatic Control
KTH, Stockholm, Sweden
Preface
Many people have contributed to these lecture notes in nonlinear control.
Originally it was developed by Bo Bernhardsson and Karl Henrik Johansson,
and later revised by Bo Wahlberg and myself. Contributions and comments
by Mikael Johansson, Ola Markusson, Ragnar Wallin, Henning Schmidt,
Krister Jacobsson, Björn Johansson and Torbjörn Nordling are gratefully
acknowledged.
Elling W. Jacobsen
Stockholm, December 2012
Lecture 1
EL2620
• Practical information
• Course outline
• Linear vs Nonlinear Systems
• Nonlinear differential equations
Lecture 1
EL2620 Nonlinear Control
Automatic Control Lab, KTH

• Disposition
7.5 credits, lp 2
28h lectures, 28h exercises, 3 home-works

• Instructors
Elling W. Jacobsen, lectures and course responsible
[email protected]
Farhad Farokhi, Martin Andreasson, teaching assistants
[email protected], [email protected]
Hanna Holmqvist, course administration
[email protected]
STEX (entrance floor, Osquldasv. 10), course material
[email protected]
Course Goal
To provide participants with a solid theoretical foundation of nonlinear
control systems combined with a good engineering understanding.
You should after the course be able to
• understand common nonlinear control phenomena
• apply the most powerful nonlinear analysis methods
• use some practical nonlinear control design methods

Today’s Goal
You should be able to
• Define a nonlinear dynamic system
• Describe distinctive phenomena in nonlinear dynamic systems
• Transform differential equations to first-order form
• Derive equilibrium points
Course Outline
• Introduction: nonlinear models and phenomena, computer
simulation (L1-L2)
• Feedback analysis: linearization, stability theory, describing
functions (L3-L6)
• Control design: compensation, high-gain design, Lyapunov
methods (L7-L10)
• Alternatives: gain scheduling, optimal control, neural networks,
fuzzy control (L11-L13)
• Summary (L14)

Course Information
• All info and handouts are available at the course homepage (KTH
Social)
• Homeworks are compulsory and have to be handed in on time
(we are strict on this!)
• Written 5h exam on January 18, 2014

Material
Lecture notes: Copies of transparencies (from previous year)
Exercises: Class room and home exercises
Homeworks: 3 computer exercises to hand in
Software: Matlab / Simulink
Linear Systems
Definition: Let M be a signal space. The system S : M → M is
linear if for all u, v ∈ M and α ∈ R
S(u + v) = S(u) + S(v)   (superposition)
S(αu) = αS(u)            (scaling)

Example: Linear time-invariant systems
ẋ(t) = Ax(t) + Bu(t), y(t) = Cx(t), x(0) = 0
y(t) = g(t) ⋆ u(t) = ∫₀ᵗ g(τ )u(t − τ )dτ   so that   Y (s) = G(s)U (s)
Notice the importance of having zero initial conditions.
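As a quick numerical illustration (not part of the original notes), a truncated discrete convolution is a linear operator, and both defining properties can be checked directly; the impulse response g below is an arbitrary choice:

```python
import numpy as np

# Discrete-time stand-in for y(t) = ∫ g(τ)u(t−τ)dτ: a truncated
# convolution with an arbitrary impulse response g (zero initial state).
rng = np.random.default_rng(0)
g = np.array([1.0, 0.5, 0.25, 0.125])

def S(u):
    return np.convolve(g, u)[:len(u)]   # linear operator S(u) = g ⋆ u

u = rng.standard_normal(50)
v = rng.standard_normal(50)
alpha = 3.7

# superposition and scaling hold to machine precision
assert np.allclose(S(u + v), S(u) + S(v))
assert np.allclose(S(alpha * u), alpha * S(u))
print("superposition and scaling verified")
```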
• Textbook: Khalil, Nonlinear Systems, Prentice Hall, 3rd ed.,
2002. Optional but highly recommended.
Only references to Khalil will be given.
• Alternative textbooks (decreasing mathematical rigour):
Sastry, Nonlinear Systems: Analysis, Stability and Control; Vidyasagar,
Nonlinear Systems Analysis; Slotine & Li, Applied Nonlinear Control; Glad &
Ljung, Reglerteori, flervariabla och olinjära metoder.
• Two course compendia sold by STEX (also available at the course homepage)
Linear Systems Have Nice Properties
Local stability = global stability: stability if all eigenvalues of A (or
poles of G(s)) are in the left half-plane
Superposition: enough to know a step (or impulse) response
Frequency analysis possible: sinusoidal inputs give sinusoidal
outputs: Y (iω) = G(iω)U (iω)

Linear Models may be too Crude Approximations
Example: Positioning of a ball on a beam
Nonlinear model: mẍ(t) = mg sin φ(t), Linear model: ẍ(t) = gφ(t)
Can the ball move 0.1 meter in 0.1 seconds from steady state?
The linear model (step response with φ = φ0 ) gives
x(t) ≈ 10 (t²/2) φ0 , so that x(0.1) ≈ 0.05φ0 and hence
φ0 ≈ 0.1/0.05 = 2 rad = 114◦
Unrealistic answer. Clearly outside the linear region! The linear model
is valid only if sin φ ≈ φ.
Must consider the nonlinear model. Possibly also include other
nonlinearities such as centripetal force, saturation, friction etc.

Linear Models are not Rich Enough
Linear models cannot describe many phenomena seen in
nonlinear systems
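The slide’s back-of-the-envelope calculation can be repeated numerically (using g = 9.8 m/s² rather than the rounded g ≈ 10 of the slides):

```python
# Ball-and-beam sanity check: with the linear model x(t) = g*phi0*t**2/2,
# which constant beam angle phi0 moves the ball 0.1 m in 0.1 s?
import math

g = 9.8            # m/s^2; the slides round g*t**2/2 at t = 0.1 to 0.05
t = 0.1
x_target = 0.1
phi0 = x_target / (g * t**2 / 2)
print(phi0, math.degrees(phi0))   # about 2 rad, far beyond sin(phi) ≈ phi
```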
[Figures: Simulink block-diagram fragments and step-response plots
(output y and input u versus time)]
Stability Can Depend on Reference Signal
Example: Control system with valve characteristic f (u) = u²
[Simulink block diagram: r → Σ → Controller 1/((s+1)(s+1)) → Valve
f (u) = u² → Process 1/(s+1)² → y, with −1 feedback]
Responses y for reference steps r = 0.2, r = 1.68, r = 1.72:
stability depends on the amplitude of the reference signal!
(The linearized gain of the valve increases with increasing amplitude)

Multiple Equilibria
Example: chemical reactor with temperature x2 and coolant
temperature xc (parameter values 0.7 and ǫ = 0.4)
[Reactor balance equations: ẋ1 and ẋ2 contain an Arrhenius term
x1 exp(·) and the terms f (1 − x1 ) and −ǫf (x2 − xc ); output y]
Existence of multiple stable equilibria for the same input gives a
hysteresis effect

Stable Periodic Solutions
Example: Position control of motor with back-lash
Motor: G(s) = 1/(s(1 + 5s))
[Simulink block diagram: Constant → Sum → P-controller K = 5 →
Backlash → Motor 1/(5s² + s) → y, with −1 feedback]
Back-lash induces an oscillation
[Plots: motor output y versus time for several initial conditions]
Period and amplitude are independent of initial conditions.

Harmonic Distortion
How to predict and avoid oscillations?

Example: Sinusoidal response of saturation
For the input a sin t through a saturation, the output is
y(t) = Σ_{k=1}^∞ Ak sin(kt)
[Plots: saturation characteristic, time responses, and frequency
spectra (Hz) for a = 1 and a = 2]
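A minimal numerical check of the harmonic-distortion claim: passing a sin t through a unit saturation and taking an FFT shows energy only at odd multiples of the input frequency (the amplitude a = 2 and saturation level 1 are illustrative choices):

```python
import numpy as np

# One period of a*sin(t) through a unit saturation, then the Fourier
# amplitudes A_k of y(t) = sum_k A_k sin(k*t).
t = np.linspace(0, 2*np.pi, 4096, endpoint=False)
a = 2.0
y = np.clip(a*np.sin(t), -1.0, 1.0)
A = np.abs(np.fft.rfft(y)) * 2/len(t)

print(A[1], A[2], A[3])     # fundamental, 2nd and 3rd harmonic
assert A[2] < 1e-10         # odd nonlinearity: no even harmonics
assert A[3] > 0.01          # but a clear 3rd harmonic appears
```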
Automatic Tuning of PID Controllers
[Block diagram: Σ → Relay (output ±1) or PID → Process → y, with
−1 feedback]
[Plot: relay output u and process output y versus time; oscillation
amplitude A and period T]
The relay induces a desired oscillation whose frequency and amplitude
are used to choose the PID parameters.

Total Harmonic Distortion
THD = Σ_{k=2}^∞ (Energy in tone k) / (Energy in tone 1)
Example: Electrical power distribution
Nonlinearities such as rectifiers, switched electronics, and
transformers give rise to harmonic distortion.
Example: Electrical amplifiers
Effective amplifiers work in the nonlinear region. This introduces
spectrum leakage, which is a problem in cellular systems.
Trade-off between effectivity and linearity.
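The relay experiment can be mimicked in simulation. The process G(s) = 1/(s+1)³ below is an assumed test process (not from the slides); describing-function analysis predicts an oscillation with period 2π/√3 ≈ 3.6 s and amplitude 4/(8π) ≈ 0.16 for a unit relay:

```python
import numpy as np

# Relay feedback on G(s) = 1/(s+1)^3 in controllable companion form.
A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [-1.0, -3.0, -3.0]])
B = np.array([0.0, 0.0, 1.0])
C = np.array([1.0, 0.0, 0.0])

x = np.array([0.1, 0.0, 0.0])     # small initial kick
dt = 1e-3
ys, switches, u_prev = [], [], -1.0
for k in range(40000):            # 40 s, enough to reach the limit cycle
    y = C @ x
    u = -1.0 if y > 0 else 1.0    # ideal relay u = -sign(y)
    if u != u_prev:
        switches.append(k*dt)
        u_prev = u
    x = x + dt*(A @ x + B*u)      # forward Euler step
    ys.append(y)

T = 2*(switches[-1] - switches[-2])   # period = two relay switch intervals
A_osc = max(ys[-5000:])               # steady oscillation amplitude
print(T, A_osc)
assert 3.0 < T < 4.5 and 0.1 < A_osc < 0.25
```

The measured (T, A_osc) land close to the describing-function estimates, which is exactly the information the autotuner uses.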
Nonlinear Differential Equations
Definition: A solution to
ẋ(t) = f (x(t)), x(0) = x0    (1)
over an interval [0, T ] is a C1 function x : [0, T ] → Rn such that (1)
is fulfilled.
• When does there exist a solution?
• When is the solution unique?
Example: ẋ = Ax, x(0) = x0 , gives x(t) = exp(At)x0

Existence Problems
Example: The differential equation ẋ = x², x(0) = x0 , has solution
x(t) = x0 /(1 − x0 t),  0 ≤ t < 1/x0
The solution is not defined for t ≥ tf = 1/x0 .
Recall the trick: ẋ = x² ⇒ dx/x² = dt. Integrate:
−1/x(t) + 1/x(0) = t ⇒ x(t) = x0 /(1 − x0 t)
The solution interval depends on the initial condition!

Finite Escape Time
Simulation for various initial conditions x0
[Plot: finite escape time of dx/dt = x²]

Subharmonics
Example: Duffing’s equation ÿ + ẏ + y − y³ = a sin(ωt)
[Plot: y and a sin ωt versus time]
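The finite escape time can be seen numerically; the fixed-step Euler loop below is a minimal sketch (step size and stopping time are arbitrary choices):

```python
# dx/dt = x**2, x(0) = x0 escapes at t = 1/x0; compare a crude Euler
# integration with the analytic solution x(t) = x0/(1 - x0*t).
x0 = 1.0
dt = 1e-5
t, x = 0.0, x0
while t < 0.9/x0 - dt/2:      # stop just short of the escape time 1/x0
    x += dt * x**2
    t += dt

analytic = x0 / (1 - x0*t)
print(x, analytic)            # both large and growing as t -> 1/x0
assert abs(x - analytic)/analytic < 0.05
```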
Uniqueness Problems
Example: ẋ = √x, x(0) = 0, has many solutions:
x(t) = (t − C)²/4 for t > C,  x(t) = 0 for t ≤ C
[Plot: x(t) for several values of C]

Physical Interpretation
Consider the reverse example, i.e., the water tank lab process with
ẋ = −√x,  x(0) = x0 ,  x(T ) = 0
where x is the water level. It is then impossible to know at what time
t < T the level was x(t) = x0 > 0.
Hint: Reverse time s = T − t ⇒ ds = −dt and thus dx/ds = −dx/dt = √x

Lipschitz Continuity
Definition: f : Rn → Rn is Lipschitz continuous if there exist
L, r > 0 such that for all x, y ∈ Br (x0 ) = {z ∈ Rn : ‖z − x0 ‖ < r},
‖f (x) − f (y)‖ ≤ L‖x − y‖
The Euclidean norm is given by ‖x‖² = x1² + · · · + xn²
[Figure: slope L]

Local Existence and Uniqueness
Theorem: If f is Lipschitz continuous, then there exists δ > 0 such that
ẋ(t) = f (x(t)), x(0) = x0
has a unique solution in Br (x0 ) over [0, δ]. Proof: See Khalil,
Appendix C.1. Based on the contraction mapping theorem.
Remarks
• δ = δ(r, L)
• f being C1 implies Lipschitz continuity (L = max_{x∈Br (x0 )} ‖f ′ (x)‖)
• f being C0 is not sufficient (cf., the tank example)
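The non-uniqueness can be verified numerically: each member of the family above satisfies the differential equation (the values of C below are arbitrary):

```python
import numpy as np

# For every C >= 0, x(t) = 0 (t <= C) and (t-C)**2/4 (t > C) solves
# dx/dt = sqrt(x), x(0) = 0: infinitely many distinct solutions.
def x_sol(t, C):
    return np.where(t > C, (t - C)**2/4, 0.0)

t = np.linspace(0, 10, 2001)
for C in (0.0, 3.0, 7.0):
    x = x_sol(t, C)
    dxdt = np.gradient(x, t)              # numerical derivative
    assert np.max(np.abs(dxdt - np.sqrt(x))) < 0.01
print("three distinct solutions, all satisfying dx/dt = sqrt(x)")
```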
State-Space Models
State x, input u, output y
General:     f (x, u, y, ẋ, u̇, ẏ, . . .) = 0
Explicit:    ẋ = f (x, u),        y = h(x)
Affine in u: ẋ = f (x) + g(x)u,   y = h(x)
Linear:      ẋ = Ax + Bu,         y = Cx

Transformation to First-Order System
Given a differential equation in y with highest derivative dⁿy/dtⁿ,
express the equation in x = (y, dy/dt, . . . , dⁿ⁻¹y/dtⁿ⁻¹)T
Example: Pendulum
M R²θ̈ + k θ̇ + M gR sin θ = 0
x = (θ, θ̇)T gives
ẋ1 = x2
ẋ2 = −(k/(M R²))x2 − (g/R) sin x1

Transformation to Autonomous System
A nonautonomous system ẋ = f (x, t) is always possible to transform
to an autonomous system by introducing xn+1 = t:
ẋ = f (x, xn+1 )
ẋn+1 = 1

Equilibria
Definition: A point (x∗ , u∗ , y ∗ ) is an equilibrium if a solution starting
in (x∗ , u∗ , y ∗ ) stays there forever.
Corresponds to putting all derivatives to zero:
General:     f (x∗ , u∗ , y ∗ , 0, 0, . . .) = 0
Explicit:    0 = f (x∗ , u∗ ),           y ∗ = h(x∗ )
Affine in u: 0 = f (x∗ ) + g(x∗ )u∗ ,    y ∗ = h(x∗ )
Linear:      0 = Ax∗ + Bu∗ ,             y ∗ = Cx∗
Often the equilibrium is defined only through the state x∗
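The pendulum in first-order form is easy to simulate; the parameter values below are illustrative, not from the notes:

```python
import numpy as np

# Pendulum M R^2 θ'' + k θ' + M g R sin θ = 0 in first-order form,
# integrated with forward Euler (illustrative parameters).
M, R, k, g = 1.0, 1.0, 0.5, 9.8

def f(x):
    return np.array([x[1],
                     -(k/(M*R**2))*x[1] - (g/R)*np.sin(x[0])])

x = np.array([1.0, 0.0])     # released from 1 rad at rest
dt = 1e-3
for _ in range(40000):       # 40 s
    x = x + dt*f(x)

print(x)                     # with damping k > 0, the state settles at (0, 0)
assert np.linalg.norm(x) < 1e-2
```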
Multiple Equilibria
Example: Pendulum
M R²θ̈ + k θ̇ + M gR sin θ = 0
θ̈ = θ̇ = 0 gives sin θ = 0 and thus θ∗ = kπ
Alternatively, in first-order form:
ẋ1 = x2
ẋ2 = −(k/(M R²))x2 − (g/R) sin x1
ẋ1 = ẋ2 = 0 gives x∗2 = 0 and sin(x∗1 ) = 0

When do we need Nonlinear Analysis & Design?
• When the system is strongly nonlinear, or not linearizable
• When the range of operation is large
• When distinctive nonlinear phenomena are relevant
• When we want to push performance to the limit

Some Common Nonlinearities in Control Systems
Saturation, Dead Zone, Backlash, Relay, Sign, Abs |u|,
Coulomb & Viscous Friction, Math Function e^u, Look-Up Table

Next Lecture
• Linearization
• Phase plane analysis
• Simulation in Matlab
Lecture 2
EL2620 Nonlinear Control
• Wrap-up of Lecture 1: Nonlinear systems and phenomena
• Modeling and simulation in Simulink
• Phase-plane analysis

Today’s Goal
You should be able to
• Model and simulate in Simulink
• Linearize using Simulink
• Do phase-plane analysis using pplane (or other tool)

Analysis Through Simulation
Simulation tools:
ODE’s ẋ = f (t, x, u)
• ACSL, Simnon, Simulink
DAE’s F (t, ẋ, x, u) = 0
• Omsim, Dymola, Modelica
http://www.modelica.org
Special purpose simulation tools
• Spice, EMTP, ADAMS, gPROMS

Simulink
> matlab
>> simulink
An Example in Simulink
File -> New -> Model
Double click on Continuous
Transfer Fcn 1/(s+1)
Step (in Sources)
Scope (in Sinks)
Connect (mouse-left)
Simulation -> Parameters

Save Results to Workspace
stepmodel.mdl
[Diagram: Step → Transfer Fcn 1/(s+1) → Scope; Clock → To
Workspace t; u and y → To Workspace blocks]
Check “Save format” of the output blocks (“Array” instead of “Structure”)
>> plot(t,y)
Choose Simulation Parameters
Don’t forget “Apply”

How To Get Better Accuracy
Modify Refine, Absolute and Relative Tolerances, Integration method
Refine adds interpolation points:
[Plots: simulated output with Refine = 1 and Refine = 10]
Example: Two-Tank System
Each tank: ḣ = (u − q)/A,  q = a√(2g h)
The system consists of two identical tank models:
[Simulink diagram: In → Sum → Gain 1/A → Integrator 1/s → level h;
Fcn f(u) computes the outflow q; two cascaded tank Subsystems]
Nonlinear Control System
Example: Control system with valve characteristic f (u) = u²
[Simulink block diagram: r → Σ → Controller 1/((s+1)(s+1)) → Valve
f (u) = u² → Process 1/(s+1)² → y, with −1 feedback]

Linearization in Simulink
>> A=2.7e-3;a=7e-6;g=9.8;
>> [x0,u0,y0]=trim('twotank',[0.1 0.1]',[],0.1)
x0 =
    0.1000
    0.1000
u0 =
  9.7995e-006
y0 =
    0.1000
Linearize about the equilibrium (x0 , u0 , y0 ):
>> [aa,bb,cc,dd]=linmod('twotank',x0,u0);
>> sys=ss(aa,bb,cc,dd);
>> bode(sys)
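A rough Python analogue of the trim/linmod workflow for the two-tank model (pure numerics, no Simulink; the central-difference Jacobian stands in for linmod):

```python
import numpy as np

# Two identical tanks: h1' = (u - q1)/A, h2' = (q1 - q2)/A with
# outflow q_i = a*sqrt(2*g*h_i); parameter values as on the slides.
A_area, a_hole, g = 2.7e-3, 7e-6, 9.8

def f(x, u):
    q1 = a_hole*np.sqrt(2*g*x[0])
    q2 = a_hole*np.sqrt(2*g*x[1])
    return np.array([(u - q1)/A_area, (q1 - q2)/A_area])

# "trim": at h1 = h2 = 0.1 m, the inflow balancing the outflow is
u0 = a_hole*np.sqrt(2*g*0.1)           # about 9.8e-6, as in the slides
x0 = np.array([0.1, 0.1])
assert np.allclose(f(x0, u0), 0.0)

# "linmod": Jacobian A = df/dx by central differences
eps = 1e-7
Ajac = np.column_stack([(f(x0 + eps*e, u0) - f(x0 - eps*e, u0))/(2*eps)
                        for e in np.eye(2)])
print(Ajac)
assert np.all(np.linalg.eigvals(Ajac).real < 0)   # stable equilibrium
```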
Use Scripts to Document Simulations
If the block-diagram is saved to stepmodel.mdl,
the following Script-file simstepmodel.m simulates the system:
open_system('stepmodel')
set_param('stepmodel','RelTol','1e-3')
set_param('stepmodel','AbsTol','1e-6')
set_param('stepmodel','Refine','1')
tic
sim('stepmodel',6)
toc
subplot(2,1,1),plot(t,y),title('y')
subplot(2,1,2),plot(t,u),title('u')
Differential Equation Editor
dee is a Simulink-based differential equation editor
>> dee
Run the demonstrations deedemo1, deedemo2, deedemo3, deedemo4

Homework 1
• Use your favorite phase-plane analysis tool
• Follow the instructions in the Exercise Compendium on how to write
the report. Write in English.
• The report should be short and include only necessary plots.
• See the course homepage for a report example

Phase-Plane Analysis
• Download ICTools from
http://www.control.lth.se/˜ictools
• Download DFIELD and PPLANE from
http://math.rice.edu/˜dfield
This was the preferred tool last year!
Lecture 3
EL2620 Nonlinear Control
• Stability definitions
• Linearization
• Phase-plane analysis
• Periodic solutions

Today’s Goal
You should be able to
• Explain local and global stability
• Linearize around equilibria and trajectories
• Sketch phase portraits for two-dimensional systems
• Classify equilibria into nodes, focuses, saddle points, and
center points
• Analyze stability of periodic solutions through Poincaré maps

Local Stability
Consider ẋ = f (x) with f (0) = 0
Definition: The equilibrium x∗ = 0 is stable if for all ǫ > 0 there
exists δ = δ(ǫ) > 0 such that
‖x(0)‖ < δ ⇒ ‖x(t)‖ < ǫ, ∀t ≥ 0
If x∗ = 0 is not stable it is called unstable.

Asymptotic Stability
Definition: The equilibrium x = 0 is asymptotically stable if it is
stable and δ can be chosen such that
‖x(0)‖ < δ ⇒ lim_{t→∞} x(t) = 0
The equilibrium is globally asymptotically stable if it is stable and
lim_{t→∞} x(t) = 0 for all x(0).
Linearization Around a Trajectory
Let (x0 (t), u0 (t)) denote a solution to ẋ = f (x, u) and consider
another solution (x(t), u(t)) = (x0 (t) + x̃(t), u0 (t) + ũ(t)):
ẋ(t) = f (x0 (t) + x̃(t), u0 (t) + ũ(t))
     = f (x0 (t), u0 (t)) + (∂f /∂x)(x0 (t), u0 (t)) x̃(t)
       + (∂f /∂u)(x0 (t), u0 (t)) ũ(t) + O(‖(x̃, ũ)‖²)
where
A(x0 (t), u0 (t)) = (∂f /∂x)(x0 (t), u0 (t))
B(x0 (t), u0 (t)) = (∂f /∂u)(x0 (t), u0 (t))

Example
ḣ(t) = v(t)
v̇(t) = −g + ve u(t)/m(t)
ṁ(t) = −u(t)
Let x0 (t) = (h0 (t), v0 (t), m0 (t))T , u0 (t) ≡ u0 > 0, be a solution.
Then,
x̃̇(t) = [ 0  1  0 ;  0  0  −ve u0 /(m0 − u0 t)² ;  0  0  0 ] x̃(t)
        + [ 0 ;  ve /(m0 − u0 t) ;  −1 ] ũ(t)
Pointwise Left Half-Plane Eigenvalues of A(t) Do Not Imply Stability
A(t) = [ −1 + α cos² t   1 − α sin t cos t ;  −1 − α sin t cos t   −1 + α sin² t ],  α > 0
Pointwise eigenvalues are given by
λ(t) = λ = (α − 2 ± √(α² − 4))/2
which are stable for 0 < α < 2. However,
x(t) = [ e^{(α−1)t} cos t   e^{−t} sin t ;  −e^{(α−1)t} sin t   e^{−t} cos t ] x(0)
is an unbounded solution for α > 1.
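The example can be checked numerically: for α between 1 and 2 every frozen-time A(t) is Hurwitz, yet the solution grows (α = 1.5 below is an arbitrary choice in that range):

```python
import numpy as np

alpha = 1.5

def A(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[-1 + alpha*c*c, 1 - alpha*s*c],
                     [-1 - alpha*s*c, -1 + alpha*s*s]])

# pointwise eigenvalues have negative real part for 0 < alpha < 2
for t in (0.0, 0.7, 2.0):
    assert np.all(np.linalg.eigvals(A(t)).real < 0)

# yet x' = A(t)x grows like e^{(alpha-1)t}: integrate with small steps
x = np.array([1.0, 0.0])
dt = 1e-4
for k in range(int(10/dt)):
    x = x + dt * A(k*dt) @ x

print(np.linalg.norm(x))       # roughly e^{(alpha-1)*10}, i.e. large
assert np.linalg.norm(x) > 50
```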
Hence, for small (x̃, ũ), approximately
x̃̇(t) = A(x0 (t), u0 (t))x̃(t) + B(x0 (t), u0 (t))ũ(t)
Note that A and B are time dependent. However, if
(x0 (t), u0 (t)) ≡ (x0 , u0 ) then A and B are constant.
Lyapunov’s Linearization Method
Theorem: Let x0 be an equilibrium of ẋ = f (x) with f ∈ C1.
Denote A = (∂f /∂x)(x0 ) and α(A) = max Re(λ(A)).
• If α(A) < 0, then x0 is asymptotically stable
• If α(A) > 0, then x0 is unstable
The case α(A) = 0 needs further investigation.
The theorem is also called Lyapunov’s Indirect Method.
A proof is given next lecture.

Linear Systems Revival
The fundamental result for linear systems theory!
d/dt (x1 , x2 )T = A (x1 , x2 )T
Analytic solution: x(t) = e^{At} x(0), t ≥ 0.
If A is diagonalizable, then
e^{At} = V e^{Λt} V⁻¹ = [v1  v2 ] [ e^{λ1 t}  0 ;  0  e^{λ2 t} ] [v1  v2 ]⁻¹
where v1 , v2 are the eigenvectors of A (Av1 = λ1 v1 etc).
This implies that
x(t) = c1 e^{λ1 t} v1 + c2 e^{λ2 t} v2 ,
where the constants c1 and c2 are given by the initial conditions.
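The eigendecomposition solution is easy to verify numerically; the Hurwitz matrix below is used purely as an illustration:

```python
import numpy as np

# x(t) = e^{At}x(0) via eigendecomposition: x(t) = V e^{Λt} V^{-1} x(0).
A = np.array([[-1.0, 1.0],
              [-3.0, -5.0]])     # diagonalizable, λ(A) = {-2, -4}
lam, V = np.linalg.eig(A)
Vinv = np.linalg.inv(V)
x0 = np.array([1.0, 0.0])

def x(t):
    return (V @ (np.exp(lam*t) * (Vinv @ x0))).real

# verify x' = A x at a few times with central differences
h = 1e-6
for t in (0.0, 0.5, 2.0):
    dxdt = (x(t + h) - x(t - h))/(2*h)
    assert np.allclose(dxdt, A @ x(t), atol=1e-5)

print(np.linalg.norm(x(3.0)))    # decays, dominated by the slow mode e^{-2t}
```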
Example: Two real negative eigenvalues
Given the eigenvalues λ1 < λ2 < 0, with corresponding
eigenvectors v1 and v2 , respectively.
Solution: x(t) = c1 e^{λ1 t} v1 + c2 e^{λ2 t} v2
Fast eigenvalue/vector: x(t) ≈ c1 e^{λ1 t} v1 + c2 v2 for small t.
Moves along the fast eigenvector for small t.
Slow eigenvalue/vector: x(t) ≈ c2 e^{λ2 t} v2 for large t.
Moves along the slow eigenvector towards x = 0 for large t.

Example
The linearization of
ẋ1 = −x1² + x1 + sin x2
ẋ2 = cos x2 − x1³ − 5x2
at the equilibrium x0 = (1, 0)T is given by
A = [ −1  1 ;  −3  −5 ],  λ(A) = {−2, −4}
x0 is thus an asymptotically stable equilibrium for the nonlinear
system.
Phase-Plane Analysis for Linear Systems
The location of the eigenvalues λ(A) determines the characteristics
of the trajectories. Six cases:
Im λi = 0 :  λ1 , λ2 < 0   stable node
             λ1 , λ2 > 0   unstable node
             λ1 < 0 < λ2   saddle point
Im λi ≠ 0 :  Re λi < 0    stable focus
             Re λi > 0    unstable focus
             Re λi = 0    center point

Example—Unstable Focus
ẋ = [ σ  −ω ;  ω  σ ] x,   σ, ω > 0,   λ1,2 = σ ± iω
x(t) = e^{At} x(0) = [ 1  1 ;  −i  i ] [ e^{σt} e^{iωt}  0 ;  0  e^{σt} e^{−iωt} ] [ 1  1 ;  −i  i ]⁻¹ x(0)
In polar coordinates r = √(x1² + x2²), θ = arctan(x2 /x1 )
(x1 = r cos θ , x2 = r sin θ ):
ṙ = σr
θ̇ = ω
Equilibrium Points for Linear Systems
[Phase portraits in the (x1 , x2 ) plane together with the eigenvalue
locations in the (Re λ, Im λ) plane, e.g. for λ1,2 = 0.3 ± i and
λ1,2 = 1 ± i]
Example—Stable Node
ẋ = [ −1  1 ;  0  −2 ] x
(λ1 , λ2 ) = (−1, −2) and [v1  v2 ] = [ 1  −1 ;  0  1 ]
v1 is the slow direction and v2 is the fast.
Slow: x2 = 0.  Fast: x2 = −x1 .

5 minute exercise: What is the phase portrait if λ1 = λ2 ?
Hint: Two cases; only one linearly independent eigenvector or all
vectors are eigenvectors.

Phase-Plane Analysis for Nonlinear Systems
Close to equilibrium points, “nonlinear system” ≈ “linear system”.
Theorem: Assume
ẋ = f (x) = Ax + g(x),
with lim_{‖x‖→0} ‖g(x)‖/‖x‖ = 0. If ż = Az has a focus, node, or
saddle point, then ẋ = f (x) has the same type of equilibrium at the
origin.
Remark: If the linearized system has a center, then the nonlinear
system has either a center or a focus.
How to Draw Phase Portraits
By hand:
1. Find equilibria
2. Sketch local behavior around equilibria
3. Sketch (ẋ1 , ẋ2 ) for some other points. Notice that dx2 /dx1 = ẋ2 /ẋ1
4. Try to find possible periodic orbits
5. Guess solutions
By computer:
1. Matlab: dee or pplane

Phase-Locked Loop
A PLL tracks the phase θin (t) of a signal sin (t) = A sin[ωt + θin (t)].
[Block diagram: Phase Detector sin(·) → Filter K/(1 + sT ) →
VCO 1/s → θout , with feedback e = θin − “θout ”]

Phase-Plane Analysis of PLL
Let (x1 , x2 ) = (θout , θ̇out ), K, T > 0, and θin (t) ≡ θin .
ẋ1 (t) = x2 (t)
ẋ2 (t) = −T⁻¹ x2 (t) + KT⁻¹ sin(θin − x1 (t))
Equilibria are (θin + nπ, 0) since
ẋ1 = 0 ⇒ x2 = 0
ẋ2 = 0 ⇒ sin(θin − x1 ) = 0 ⇒ x1 = θin + nπ

Classification of Equilibria
Linearization gives the following characteristic equations:
n even: λ² + T⁻¹λ + KT⁻¹ = 0
  0 < K < (4T )⁻¹ gives stable node
  K > (4T )⁻¹ gives stable focus
n odd: λ² + T⁻¹λ − KT⁻¹ = 0
  Saddle points for all K, T > 0
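The node/focus/saddle classification can be confirmed by computing the roots numerically (T = 1 and the two K values are arbitrary choices on either side of 1/(4T)):

```python
import numpy as np

# PLL equilibria: characteristic polynomials from the linearization,
# λ² + λ/T + K/T (n even) and λ² + λ/T - K/T (n odd); T = 1 here.
T = 1.0
for K in (0.1, 0.5):                  # below and above 1/(4T)
    lam_even = np.roots([1, 1/T, K/T])
    lam_odd = np.roots([1, 1/T, -K/T])
    assert np.all(lam_even.real < 0)  # stable node (K < 1/4) or focus
    assert lam_odd.real.max() > 0     # one unstable direction: saddle
    print(K, lam_even, lam_odd)
```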
Phase-Plane for PLL
(K, T ) = (1/2, 1): focuses at (2kπ, 0), saddle points at ((2k + 1)π, 0)
[Phase portrait]

Periodic Solutions
A system has a periodic solution if for some T > 0
x(t + T ) = x(t),  ∀t ≥ 0
A periodic orbit is the image of x in the phase portrait.
Note that x(t) ≡ const is by convention not regarded as periodic.
• When does there exist a periodic solution?
• When is it stable?
Example of an asymptotically stable periodic solution:
ẋ1 = x1 − x2 − x1 (x1² + x2²)
ẋ2 = x1 + x2 − x2 (x1² + x2²)    (1)
Flow
The solution of ẋ = f (x) is sometimes denoted φt (x0 )
to emphasize the dependence on the initial point x0 ∈ Rn .
φt (·) is called the flow.

Poincaré Map
Assume φt (x0 ) is a periodic solution with period T .
Let Σ ⊂ Rn be an (n − 1)-dim hyperplane transverse to f at x0 .
Definition: The Poincaré map P : Σ → Σ is
P (x) = φτ (x) (x)
where τ (x) is the time of first return.

Existence of Periodic Orbits
A point x∗ such that P (x∗ ) = x∗ is called a fixed point of P .
P (x∗ ) = x∗ corresponds to a periodic orbit.
1 minute exercise: What does a fixed point of P^k correspond to?
The linearization of P around x∗ gives a matrix W such that
P (x) ≈ P (x∗ ) + W (x − x∗ )
if x is close to x∗ .
• λj (W ) = 1 for some j (the direction along the periodic orbit)
• If |λi (W )| < 1 for all i ≠ j , then the corresponding periodic
orbit is asymptotically stable
• If |λi (W )| > 1 for some i, then the periodic orbit is unstable.

Example—Stable Unit Circle
Rewrite (1) in polar coordinates:
ṙ = r(1 − r²)
θ̇ = 1
The solution is
φt (r0 , θ0 ) = ( [1 + (r0⁻² − 1)e^{−2t} ]^{−1/2} ,  t + θ0 )
Choose Σ = {(r, θ) : r > 0, θ = 2πk}.
The first return time from any point (r0 , θ0 ) ∈ Σ is τ (r0 , θ0 ) = 2π.
The Poincaré map is
P (r0 , θ0 ) = ( [1 + (r0⁻² − 1)e^{−2·2π} ]^{−1/2} ,  θ0 + 2π )
(r0 , θ0 ) = (1, 2πk) is a fixed point.

Stable Periodic Orbit
The periodic solution that corresponds to (r(t), θ(t)) = (1, t) is
asymptotically stable because
W = dP/d(r0 , θ0 ) (1, 2πk) = [ e^{−4π}  0 ;  0  1 ]
⇒ Stable periodic orbit (as we already knew for this example)!
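The Poincaré map of this example is explicit, so its fixed point and contraction rate can be checked directly:

```python
import numpy as np

# Poincaré map of r' = r(1-r**2), θ' = 1 on the section θ = 2πk:
# P(r0) = [1 + (r0**-2 - 1)e^{-4π}]^{-1/2}; r = 1 is an attracting
# fixed point since dP/dr(1) = e^{-4π} < 1.
a = np.exp(-4*np.pi)

def P(r0):
    return (1 + (r0**-2 - 1)*a)**-0.5

assert abs(P(1.0) - 1.0) < 1e-12     # fixed point

r = 0.2                              # iterate the map from r0 = 0.2
for _ in range(5):
    r = P(r)
print(r)                             # converges to the fixed point r = 1
assert abs(r - 1.0) < 1e-6

h = 1e-6                             # eigenvalue of the linearized map
W = (P(1 + h) - P(1 - h))/(2*h)
assert abs(W - a) < 1e-7
```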
Lecture 4
EL2620 Nonlinear Control
• Lyapunov methods for stability analysis

Today’s Goal
You should be able to
• Prove local and global stability of equilibria using
Lyapunov’s method
• Prove stability of a set (e.g., a periodic orbit) using
LaSalle’s invariant set theorem

Alexandr Mihailovich Lyapunov (1857–1918)
Master thesis “On the stability of ellipsoidal forms of equilibrium of
rotating fluids,” St. Petersburg University, 1884.
Doctoral thesis “The general problem of the stability of motion,” 1892.
Formalized the idea:
If the total energy is dissipated, the system must be stable.

A Motivating Example
[Figure: mass m with position x, nonlinear spring and damper]
• Balance of forces yields
mẍ = −bẋ|ẋ| − k0 x − k1 x³,   b, k0 , k1 > 0
(damping and spring terms)
• Total energy = kinetic + potential energy:
V = mv²/2 + ∫₀ˣ Fspring ds = mẋ²/2 + k0 x²/2 + k1 x⁴/4 > 0,  V (0, 0) = 0
• Change in energy along any solution x(t):
(d/dt)V (x, ẋ) = mẋẍ + k0 xẋ + k1 x³ẋ = −b|ẋ|³ < 0,  ẋ ≠ 0
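A quick simulation (a sketch with arbitrary parameter values) confirms that the energy decreases along solutions:

```python
import numpy as np

# Simulate m x'' = -b x'|x'| - k0 x - k1 x**3 and watch the energy
# V = m x'**2/2 + k0 x**2/2 + k1 x**4/4 decay (illustrative parameters).
m, b, k0, k1 = 1.0, 0.4, 1.0, 0.3

def V(x, xd):
    return m*xd**2/2 + k0*x**2/2 + k1*x**4/4

x, xd = 1.5, 0.0
V0 = V(x, xd)
dt = 1e-4
for _ in range(300000):               # 30 s of forward Euler
    xdd = (-b*xd*abs(xd) - k0*x - k1*x**3)/m
    x, xd = x + dt*xd, xd + dt*xdd

print(V(x, xd)/V0)    # a small fraction of the initial energy remains
assert V(x, xd) < 0.1*V0
```

Note that the decay is slow, since the quadratic damping vanishes at the turning points; this is consistent with the stability (not asymptotic stability) that the energy argument alone proves.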
Stability Definitions
Recall from Lecture 3 that an equilibrium x = 0 of ẋ = f (x) is
Locally stable, if for every ǫ > 0 there exists δ > 0 such that
‖x(0)‖ < δ ⇒ ‖x(t)‖ < ǫ, ∀t ≥ 0
Locally asymptotically stable, if locally stable and
‖x(0)‖ < δ ⇒ lim_{t→∞} x(t) = 0
Globally asymptotically stable, if asymptotically stable for all x(0) ∈ Rn .

Lyapunov Function
Condition (3) of the Lyapunov Stability Theorem means that V is
non-increasing along all trajectories in Ω:
V̇ (x) = (d/dt)V (x) = (∂V /∂x) ẋ = (∂V /∂x) f (x) ≤ 0
A function V that fulfills (1)–(3) is called a Lyapunov function.
[Surface plot of V over (x1 , x2 )]
Conservation and Dissipation
Conservation of energy: V̇ (x) = (∂V /∂x)f (x) = 0, i.e. the vector
field f (x) is everywhere orthogonal to the normal ∂V /∂x to the level
surface V (x) = c.
Example: Total energy of a lossless mechanical system or total fluid in
a closed system.
Dissipation of energy: V̇ (x) = (∂V /∂x)f (x) ≤ 0, i.e. the vector
field f (x) and the normal ∂V /∂x to the level surface V (x) = c make
an obtuse angle.
Example: Total energy of a mechanical system with damping or total
fluid in a system that leaks.

Lyapunov Stability Theorem
Theorem: Let ẋ = f (x), f (0) = 0, and 0 ∈ Ω ⊂ Rn . If there
exists a C1 function V : Ω → R such that
(1) V (0) = 0
(2) V (x) > 0 for all x ∈ Ω, x ≠ 0
(3) V̇ (x) ≤ 0 for all x ∈ Ω
then x = 0 is locally stable. Furthermore, if also
(4) V̇ (x) < 0 for all x ∈ Ω, x ≠ 0
then x = 0 is locally asymptotically stable.
The result is called Lyapunov’s Direct Method.
Geometric interpretation
The vector field points into the sublevel sets:
trajectories can only go to lower values of V (x).
[Figure: level curve V (x) = constant with the vector field f (x) and
the gradient dV /dx]

Example—Pendulum
Is the origin stable for a mathematical pendulum?
ẋ1 = x2 ,  ẋ2 = −(g/ℓ) sin x1
Lyapunov function candidate:
V (x) = (1 − cos x1 )g/ℓ + x2²/2
[Surface plot of V ]
(1) V (0) = 0
(2) V (x) > 0 for −2π < x1 < 2π and (x1 , x2 ) ≠ 0
(3) V̇ (x) = (g/ℓ)ẋ1 sin x1 + x2 ẋ2 = 0, ∀x
Hence, x = 0 is locally stable. Conservation of energy!
Note that x = 0 is not asymptotically stable, so, of course, (4) is not
fulfilled: V̇ (x) ≮ 0, ∀x ≠ 0.

Boundedness
For a trajectory x(t),
V (x(t)) = V (x(0)) + ∫₀ᵗ V̇ (x(τ ))dτ ≤ V (x(0)),
which means that the whole trajectory lies in the set
{z | V (z) ≤ V (x(0))}.
For stability it is thus important that the sublevel sets
{z | V (z) ≤ c} are locally bounded.
Radial Unboundedness is Necessary
If (4) is not fulfilled, then global stability cannot be guaranteed.
Example: Assume V (x) = x1²/(1 + x1²) + x2² is a Lyapunov
function for some system. Then x(t) → ∞ is possible even if
V̇ (x) < 0, as shown by the contour plot of V (x):
[Contour plot of V over −10 ≤ x1 ≤ 10, −2 ≤ x2 ≤ 2]
Lyapunov Theorem for Global Asymptotic Stability
Theorem: Let ẋ = f (x) and f (0) = 0. If there exists a C1 function
V : Rn → R such that
(1) V (0) = 0
(2) V (x) > 0 for all x ≠ 0
(3) V̇ (x) < 0 for all x ≠ 0
(4) V (x) → ∞ as ‖x‖ → ∞
then x = 0 is globally asymptotically stable.

Somewhat Stronger Assumptions
Theorem: Let ẋ = f (x) and f (0) = 0. If there exists a C1 function
V : Rn → R such that
(1) V (0) = 0
(2) V (x) > 0 for all x ≠ 0
(3) V̇ (x) ≤ −αV (x) for all x, for some α > 0
(4) The sublevel sets {x | V (x) ≤ c} are bounded for all c ≥ 0
then x = 0 is globally asymptotically stable.

5 minute exercise: Consider Example 2 from Lecture 3:
ẋ1 = x2 (t)
ẋ2 = −x1 (t) − ǫx1²(t)x2 (t)
For what values of ǫ is the steady-state (0, 0) locally stable? Hint: try
the ”standard” Lyapunov function V (x) = xT x.
Can you say something about global stability of the equilibrium?
Proof Idea
Assume x(t) ≠ 0 (otherwise we have x(τ ) = 0 for all τ > t). Then
V̇ (x)/V (x) ≤ −α
Integrating from 0 to t gives
log V (x(t)) − log V (x(0)) ≤ −αt ⇒ V (x(t)) ≤ e^{−αt} V (x(0))
Hence, V (x(t)) → 0 as t → ∞. Using the properties of V it follows
that x(t) → 0 as t → ∞.

Positive Definite Matrices
Definition: A matrix M is positive definite if xT M x > 0 for all
x ≠ 0. It is positive semidefinite if xT M x ≥ 0 for all x.
Lemma:
• M = M T is positive definite ⇐⇒ λi (M ) > 0, ∀i
• M = M T is positive semidefinite ⇐⇒ λi (M ) ≥ 0, ∀i
Note that if M = M T is positive definite, then the Lyapunov function
candidate V (x) = xT M x fulfills V (0) = 0 and V (x) > 0, ∀x ≠ 0.

Symmetric Matrices
Assume that M = M T . Then
λmin (M )‖x‖² ≤ xT M x ≤ λmax (M )‖x‖²
Hint: Use the factorization M = U ΛU T , where U is an orthogonal
matrix (U U T = I ) and Λ = diag(λ1 , . . . , λn ).
Converse Lyapunov Theorems
Example: If the system is globally exponentially stable,
‖x(t)‖ ≤ M e^{−βt} ‖x(0)‖,  M > 0, β > 0,
then there is a Lyapunov function that proves that it is globally
asymptotically stable.
There exist also Lyapunov instability theorems!
Lyapunov’s Linearization Method
Recall from Lecture 3:
Theorem: Let x0 be an equilibrium of ẋ = f (x) with f ∈ C1.
Denote A = (∂f /∂x)(x0 ) and α(A) = max Re(λ(A)).
(1) If α(A) < 0 then x0 is asymptotically stable
(2) If α(A) > 0 then x0 is unstable

Lyapunov Stability for Linear Systems
Linear system: ẋ = Ax
Lyapunov equation: Consider the quadratic function
V (x) = xT P x,  P = P T > 0
V̇ (x) = xT P ẋ + ẋT P x = xT (P A + AT P )x = −xT Qx
Thus, V̇ < 0 ∀t if there exists a Q = QT > 0 such that
P A + AT P = −Q
Global Asymptotic Stability: If Q is positive definite, then the
Lyapunov Stability Theorem implies global asymptotic stability, and
hence the eigenvalues of A must satisfy Re λi (A) < 0 for all i.

Converse Theorem for Linear Systems
If Re λi (A) < 0, then for every symmetric positive definite Q there
exists a symmetric positive definite matrix P such that
P A + AT P = −Q
Proof: Choose P = ∫₀∞ e^{AT t} Q e^{At} dt. Then
AT P + P A = ∫₀∞ ( AT e^{AT t} Q e^{At} + e^{AT t} Q e^{At} A ) dt
           = ∫₀∞ (d/dt)( e^{AT t} Q e^{At} ) dt = [ e^{AT t} Q e^{At} ]₀∞ = −Q

Interpretation
Assume ẋ = Ax, x(0) = z . Then
∫₀∞ xT (t)Qx(t) dt = zT ( ∫₀∞ e^{AT t} Q e^{At} dt ) z = zT P z
Thus v(z) = zT P z is the cost-to-go from the point z (no input) with
integral quadratic cost function using weighting matrix Q.
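The converse theorem can be exercised numerically; solving the Lyapunov equation by Kronecker vectorization is a small-scale sketch (production code would use Matlab's lyap or an equivalent):

```python
import numpy as np

# Solve P A + A^T P = -Q by vectorization:
# (I ⊗ A^T + A^T ⊗ I) vec(P) = -vec(Q), column-major vec.
A = np.array([[-1.0, 1.0],
              [-3.0, -5.0]])         # Hurwitz: λ(A) = {-2, -4}
Q = np.eye(2)
n = A.shape[0]
K = np.kron(np.eye(n), A.T) + np.kron(A.T, np.eye(n))
P = np.linalg.solve(K, -Q.flatten(order="F")).reshape(n, n, order="F")

assert np.allclose(P @ A + A.T @ P, -Q)             # equation satisfied
assert np.all(np.linalg.eigvalsh((P + P.T)/2) > 0)  # P symmetric pos. def.
print(P)
```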
Proof of (1) in Lyapunov’s Linearization Method
Let f (x) = Ax + g(x) where lim_{‖x‖→0} ‖g(x)‖/‖x‖ = 0. The
Lyapunov function candidate V (x) = xT P x satisfies V (0) = 0,
V (x) > 0 for x ≠ 0, and
V̇ (x) = xT P f (x) + f T (x)P x
      = xT P [Ax + g(x)] + [xT AT + g T (x)]P x
      = xT (P A + AT P )x + 2xT P g(x)
      = −xT Qx + 2xT P g(x)
Since xT Qx ≥ λmin (Q)‖x‖²,
• we need to show that ‖2xT P g(x)‖ < λmin (Q)‖x‖²
For all γ > 0 there exists r > 0 such that
‖g(x)‖ < γ‖x‖,  ∀‖x‖ < r
Thus,
V̇ < −λmin (Q)‖x‖² + 2γλmax (P )‖x‖²,  ∀‖x‖ < r
which becomes strictly negative if we choose
γ < (1/2) λmin (Q)/λmax (P )

LaSalle’s Theorem for Global Asymptotic Stability
Theorem: Let ẋ = f (x) and f (0) = 0. If there exists a C1 function
V : Rn → R such that
(1) V (0) = 0
(2) V (x) > 0 for all x ≠ 0
(3) V̇ (x) ≤ 0 for all x
(4) V (x) → ∞ as ‖x‖ → ∞
(5) The only solution of ẋ = f (x) such that V̇ (x(t)) = 0 for all t
is x(t) = 0
then x = 0 is globally asymptotically stable.

A Motivating Example (cont’d)
mẍ = −bẋ|ẋ| − k0 x − k1 x³
V (x) = (2mẋ² + 2k0 x² + k1 x⁴)/4 > 0,  V (0, 0) = 0
V̇ (x) = −b|ẋ|³
Assume that there is a trajectory with ẋ(t) = 0, x(t) ≠ 0. Then
(d/dt)ẋ(t) = −(k0 /m)x(t) − (k1 /m)x³(t) ≠ 0,
which means that ẋ(t) can not stay constant.
Hence, x(t) = 0 is the only possible trajectory for which V̇ (x) = 0,
and the LaSalle theorem gives global asymptotic stability.
Invariant Sets

Definition: A set M is invariant with respect to ẋ = f(x) if
x(0) ∈ M implies that x(t) ∈ M for all t ≥ 0.

Definition: x(t) approaches a set M as t → ∞, if for each ε > 0
there is a T > 0 such that dist(x(t), M) < ε for all t > T. Here
dist(p, M) = inf_{x∈M} ‖p − x‖.

LaSalle's Invariant Set Theorem

Theorem: Let Ω ⊂ Rⁿ be a compact set invariant with respect to
ẋ = f(x). Let V : Rⁿ → R be a C¹ function such that V̇(x) ≤ 0
for x ∈ Ω. Let E be the set of points in Ω where V̇(x) = 0. If M is
the largest invariant set in E, then every solution with x(0) ∈ Ω
approaches M as t → ∞.

Note that V does not have to be a positive definite function.

Example—Periodic Orbit

  ẋ₁ = x₁ − x₂ − x₁(x₁² + x₂²)
  ẋ₂ = x₁ + x₂ − x₂(x₁² + x₂²)

Show that x(t) approaches {x : ‖x‖ = 1} ∪ {0}.
It is possible to show that Ω = {‖x‖ ≤ R} is invariant for sufficiently
large R > 0. Let V(x) = (x₁² + x₂² − 1)². Then
  V̇(x) = (∂V/∂x) f(x) = 2(x₁² + x₂² − 1) (d/dt)(x₁² + x₂² − 1)
       = −4(x₁² + x₂² − 1)²(x₁² + x₂²) ≤ 0,  ∀x ∈ Ω

  E = {x ∈ Ω : V̇(x) = 0} = {x : ‖x‖ = 1} ∪ {0}

The largest invariant set of E is M = E, because
  (d/dt)(x₁² + x₂² − 1) = −2(x₁² + x₂² − 1)(x₁² + x₂²) = 0 for x ∈ M
EL2620 Nonlinear Control — 2012

Lecture 5

• Input–output stability

History

[Block diagram: r → Σ → G(s) → y, with the static nonlinearity f(·) in the negative feedback path]
• Luré and Postnikov's problem (1944)
• Aizerman's conjecture (1949) (False!)
• Kalman's conjecture (1957) (False!)
• Solution by Popov (1960) (Led to the Circle Criterion)
For what G(s) and f(·) is the closed-loop system stable?

Today's Goal

You should be able to
• derive the gain of a system
• analyze stability using
  – Small Gain Theorem
  – Circle Criterion
  – Passivity

Gain

Idea: Generalize the concept of gain to nonlinear dynamical systems.
  u → S → y
The gain γ of S is the largest amplification from u to y.
Here S can be a constant, a matrix, a linear time-invariant system, etc.
Question: How should we measure the size of u and y?
Norms

A norm ‖·‖ measures size.

Definition: A norm is a function ‖·‖ : Ω → R₊ such that for all x, y ∈ Ω
• ‖x‖ ≥ 0, and ‖x‖ = 0 ⇔ x = 0
• ‖αx‖ = |α| · ‖x‖, for all α ∈ R
• ‖x + y‖ ≤ ‖x‖ + ‖y‖

Examples:
Euclidean norm: ‖x‖ = sqrt(x₁² + ··· + xₙ²)
Max norm: ‖x‖ = max{|x₁|, ..., |xₙ|}

Eigenvalues are not Gains

The spectral radius of a matrix M,
  ρ(M) = maxᵢ |λᵢ(M)|,
is not a gain (nor a norm).
Why? What amplification is described by the eigenvalues?

Gain of a Matrix

Every matrix M ∈ C^{n×n} has a singular value decomposition
  M = U Σ V*,  Σ = diag{σ₁, ..., σₙ},  U*U = I,  V*V = I
where σᵢ are the singular values. The "gain" of M is the largest
singular value of M:
  σmax(M) = σ₁ = sup_{x∈Rⁿ} ‖Mx‖/‖x‖
where ‖·‖ is the Euclidean norm.

Signal Norms

A signal x is a function x : R₊ → R.
A signal norm ‖·‖ is a norm on the space of signals x.

Examples:
2-norm (energy norm): ‖x‖₂ = sqrt(∫₀^∞ |x(t)|² dt)
sup-norm: ‖x‖∞ = sup_{t∈R₊} |x(t)|
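A numerical sketch of "eigenvalues are not gains", using a hypothetical matrix (not from the notes): its spectral radius is 5, yet inputs along the first right singular vector are amplified by σ₁ = √45 ≈ 6.7.

```python
import numpy as np

M = np.array([[3.0, 0.0], [4.0, 5.0]])       # example matrix

U, S, Vh = np.linalg.svd(M)
gain = S[0]                                   # largest singular value
rho = max(abs(np.linalg.eigvals(M)))          # spectral radius = 5

assert rho < gain                             # eigenvalues underestimate the gain
assert np.isclose(np.linalg.norm(M @ Vh[0]), gain)   # worst-case input direction
# no direction is amplified by more than sigma_1
xs = np.random.default_rng(0).standard_normal((500, 2))
assert all(np.linalg.norm(M @ v) <= gain * np.linalg.norm(v) + 1e-9 for v in xs)
```

This is exactly the slide's σmax(M) = sup ‖Mx‖/‖x‖, with the supremum attained along the first right singular vector.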
Parseval’s Theorem
Z
0
∞
=
Z
0
∞
Z
−∞
∞
1
|x(t)| dt =
2π
2
1
y(t)x(t)dt =
2π
Z
−∞
∞
|X(iω)|2 dω.
Y ∗ (iω)X(iω)dω.
0
Lecture 5
u
S2
y
≤ γ(S1 )γ(S2 ).
S1
2 minute exercise: Show that γ(S1 S2 )
EL2620
Lecture 5
11
2012
The power calculated in the time domain equals the power calculated
in the frequency domain
kxk22
In particular,
then
0
∈ L2 have the Fourier transforms
Z ∞
Z ∞
−iωt
e−iωt y(t)dt,
e
x(t)dt,
Y (iω) =
X(iω) =
Theorem: If x, y
9
2012
L2 denotes the space of signals with bounded energy: kxk2 < ∞
EL2620
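A numerical Parseval check for the hypothetical test signal x(t) = e^{−t}, t ≥ 0 (so X(iω) = 1/(1+iω)); both sides should equal 1/2.

```python
import math

# energy in the time domain: integral of exp(-2t) over [0, 20] (midpoint rule)
dt, T = 1e-4, 20.0
time_energy = sum(math.exp(-2.0 * (k + 0.5) * dt) * dt for k in range(int(T / dt)))

# energy in the frequency domain: |X(iw)|^2 = 1/(1+w^2) is even in w,
# so (1/2pi) * integral over R equals (1/pi) * integral over (0, inf)
dw, W = 1e-3, 1000.0
freq_energy = (1.0 / math.pi) * sum(
    dw / (1.0 + ((k + 0.5) * dw) ** 2) for k in range(int(W / dw)))

assert abs(time_energy - 0.5) < 1e-4
assert abs(freq_energy - 0.5) < 1e-3
```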
System Gain

A system S is a map from L₂ to L₂: y = S(u). The gain of S is defined as
  γ(S) = sup_{u∈L₂} ‖S(u)‖₂/‖u‖₂ = sup_{u∈L₂} ‖y‖₂/‖u‖₂

Example: The gain of the scalar static system y(t) = αu(t) is
  γ(α) = sup_{u∈L₂} ‖αu‖₂/‖u‖₂ = sup_{u∈L₂} |α|‖u‖₂/‖u‖₂ = |α|

Gain of a Static Nonlinearity

Lemma: A static nonlinearity f such that |f(x)| ≤ K|x| and
f(x*) = Kx* for some x* has gain γ(f) = K.

Proof:
  ‖y‖₂² = ∫₀^∞ f²(u(t)) dt ≤ ∫₀^∞ K²u²(t) dt = K²‖u‖₂²,
where u(t) = x*, t ∈ (0, 1), gives equality, so
  γ(f) = sup_{u∈L₂} ‖y‖₂/‖u‖₂ = K
Gain of a Stable Linear System

Lemma:
  γ(G) = sup_{u∈L₂} ‖Gu‖₂/‖u‖₂ = sup_{ω∈(0,∞)} |G(iω)|

Proof: Assume |G(iω)| ≤ K for ω ∈ (0, ∞) and |G(iω*)| = K
for some ω*. Parseval's theorem gives
  ‖y‖₂² = (1/2π) ∫_{−∞}^{∞} |Y(iω)|² dω
        = (1/2π) ∫_{−∞}^{∞} |G(iω)|² |U(iω)|² dω ≤ K²‖u‖₂²
Arbitrarily close to equality by choosing u(t) close to sin ω*t.
[Figure: Bode magnitude plot of |G(iω)| with peak value K]

The Small Gain Theorem

[Block diagram: e₁ = r₁ − y₂, y₁ = S₁(e₁); e₂ = r₂ + y₁, y₂ = S₂(e₂)]

Theorem: Assume S₁ and S₂ are BIBO stable. If γ(S₁)γ(S₂) < 1,
then the closed-loop system is BIBO stable from (r₁, r₂) to (e₁, e₂).
BIBO Stability

Definition: S is bounded-input bounded-output (BIBO) stable if
  γ(S) = sup_{u∈L₂} ‖S(u)‖₂/‖u‖₂ < ∞.
Example: If ẋ = Ax is asymptotically stable, then
G(s) = C(sI − A)^{−1}B + D is BIBO stable.

Example—Static Nonlinear Feedback

  G(s) = 2/(s + 1)²,  0 ≤ f(y)/y ≤ K, ∀y ≠ 0,  f(0) = 0
γ(G) = 2 and γ(f) ≤ K.
The Small Gain Theorem gives BIBO stability for K ∈ (0, 1/2).
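A quick grid-search sketch of γ(G) = sup_ω |G(iω)| for the example G(s) = 2/(s+1)²; the peak |G(0)| = 2 gives the Small Gain bound K < 1/2.

```python
def G(w):
    # frequency response of G(s) = 2/(s+1)^2 evaluated at s = i*w
    return 2.0 / (1j * w + 1.0) ** 2

gain = max(abs(G(0.01 * k)) for k in range(100000))   # w in [0, 1000)
assert abs(gain - 2.0) < 1e-9       # peak at w = 0
assert 1.0 / gain == 0.5            # SGT: BIBO stable for K < 1/2
```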
"Proof" of the Small Gain Theorem

  ‖e₁‖₂ ≤ ‖r₁‖₂ + γ(S₂)[‖r₂‖₂ + γ(S₁)‖e₁‖₂]
gives
  ‖e₁‖₂ ≤ (‖r₁‖₂ + γ(S₂)‖r₂‖₂) / (1 − γ(S₂)γ(S₁))
γ(S₂)γ(S₁) < 1, ‖r₁‖₂ < ∞, ‖r₂‖₂ < ∞ give ‖e₁‖₂ < ∞.
Similarly we get
  ‖e₂‖₂ ≤ (‖r₂‖₂ + γ(S₁)‖r₁‖₂) / (1 − γ(S₁)γ(S₂))
so also e₂ is bounded.
Note: A formal proof requires the extended norm ‖·‖₂ₑ, see Khalil.

Small Gain Theorem can be Conservative

Let f(y) = Ky in the previous example. Then the Nyquist Theorem
proves stability for all K ∈ [0, ∞), while the Small Gain Theorem
only proves stability for K ∈ (0, 1/2).
The Nyquist Theorem

Theorem: If G has no poles in the right half-plane and the Nyquist
curve G(iω), ω ∈ [0, ∞), does not encircle −1, then the
closed-loop system is stable.
[Figure: Nyquist curve G(iω) relative to the point −1]

The Circle Criterion

Theorem: Assume that G(s) has no poles in the right half-plane, and
  0 ≤ k₁ ≤ f(y)/y ≤ k₂, ∀y ≠ 0,  f(0) = 0.
If the Nyquist curve of G(s) does not encircle or intersect the circle
defined by the points −1/k₁ and −1/k₂, then the closed-loop
system is BIBO stable.
[Figure: Nyquist curve G(iω) and the circle through −1/k₁ and −1/k₂]
Example—Static Nonlinear Feedback (cont'd)

The "circle" is defined by −1/k₁ = −∞ and −1/k₂ = −1/K. Since
  min_ω Re G(iω) = −1/4,
the Circle Criterion gives that the system is BIBO stable if K ∈ (0, 4).

Proof of the Circle Criterion

Let k = (k₁ + k₂)/2, f̃(y) = f(y) − ky, and r̃₁ = r₁ − kr₂. Then
  |f̃(y)/y| ≤ (k₂ − k₁)/2 =: R, ∀y ≠ 0,  f̃(0) = 0
The Small Gain Theorem gives stability if |G̃(iω)| R < 1, where
  G̃ = G/(1 + kG)
is stable (this has to be checked later). Hence, we need
  1/|G̃(iω)| = |1/G(iω) + k| > R
The curve G^{−1}(iω) and the circle {z ∈ C : |z + k| > R} mapped
through z ↦ 1/z give the result.
Note that G/(1 + kG) is stable since −1/k is inside the circle.
Note that G(s) may have poles on the imaginary axis, e.g.,
integrators are allowed.
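A numerical sketch of the circle-criterion bound in the example: for G(s) = 2/(s+1)², Re G(iω) = 2(1−ω²)/(1+ω²)² has minimum −1/4 (attained at ω = √3), so the Nyquist curve stays to the right of −1/K for K < 4.

```python
def reG(w):
    # real part of G(iw) for G(s) = 2/(s+1)^2
    return (2.0 / (1j * w + 1.0) ** 2).real

m = min(reG(0.001 * k) for k in range(100000))   # w in [0, 100)
assert abs(m - (-0.25)) < 1e-6                   # min Re G(iw) = -1/4
assert abs(-1.0 / m - 4.0) < 1e-4                # circle criterion: K < 4
```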
Passivity and BIBO Stability

The main result: Feedback interconnections of passive systems are
passive, and BIBO stable (under some additional mild criteria).

Scalar Product

Scalar product for signals y and u:
  ⟨y, u⟩_T = ∫₀^T y^T(t) u(t) dt
If u and y are interpreted as vectors, then ⟨y, u⟩_T = |y|_T |u|_T cos φ,
where |y|_T = sqrt(⟨y, y⟩_T) is the length of y and φ the angle between u and y.

Cauchy–Schwarz inequality: ⟨y, u⟩_T ≤ |y|_T |u|_T

Example: u = sin t and y = cos t are orthogonal if T = kπ, because
  cos φ = ⟨y, u⟩_T / (|y|_T |u|_T) = 0

Passive System

Definition: Consider signals u, y : [0, T] → R^m. The system S is
passive if
  ⟨y, u⟩_T ≥ 0, for all T > 0 and all u,
and strictly passive if there exists ε > 0 such that
  ⟨y, u⟩_T ≥ ε(|y|_T² + |u|_T²), for all T > 0 and all u.
Warning: There exist many other definitions of strictly passive.

2 minute exercise: Is the pure delay system y(t) = u(t − θ)
passive? Consider for instance the input u(t) = sin((π/θ)t).
Example—Passive Electrical Components

Resistor u(t) = R i(t):
  ⟨u, i⟩_T = ∫₀^T R i²(t) dt = R⟨i, i⟩_T ≥ 0
Inductor u = L di/dt (zero initial current):
  ⟨u, i⟩_T = ∫₀^T L (di/dt) i(t) dt = L i²(T)/2 ≥ 0
Capacitor i = C du/dt (zero initial voltage):
  ⟨u, i⟩_T = ∫₀^T u(t) C (du/dt) dt = C u²(T)/2 ≥ 0

Passivity of Linear Systems

Theorem: An asymptotically stable linear system G(s) is passive if
and only if
  Re G(iω) ≥ 0,  ∀ω > 0
It is strictly passive if and only if there exists ε > 0 such that
  Re G(iω − ε) ≥ 0,  ∀ω > 0

Example: G(s) = 1/(s + 1) is strictly passive;
G(s) = 1/s is passive but not strictly passive.
[Figure: Nyquist curve of G(iω) = 1/(iω + 1) lying in the right half-plane]
Feedback of Passive Systems is Passive

Lemma: If S₁ and S₂ are passive, then the closed-loop system from
(r₁, r₂) to (y₁, y₂) is also passive.

Proof:
  ⟨y, e⟩_T = ⟨y₁, e₁⟩_T + ⟨y₂, e₂⟩_T = ⟨y₁, r₁ − y₂⟩_T + ⟨y₂, r₂ + y₁⟩_T
           = ⟨y₁, r₁⟩_T + ⟨y₂, r₂⟩_T = ⟨y, r⟩_T
Hence, ⟨y, r⟩_T ≥ 0 if ⟨y₁, e₁⟩_T ≥ 0 and ⟨y₂, e₂⟩_T ≥ 0.

A Strictly Passive System Has Finite Gain

Lemma: If S is strictly passive then γ(S) < ∞.

Proof:
  ε(|y|∞² + |u|∞²) ≤ ⟨y, u⟩∞ ≤ |y|∞ · |u|∞ = ‖y‖₂ · ‖u‖₂
Hence, ε‖y‖₂² ≤ ‖y‖₂ · ‖u‖₂, so
  ‖y‖₂ ≤ (1/ε)‖u‖₂
The Passivity Theorem

Theorem: If S₁ is strictly passive and S₂ is passive, then the
closed-loop system is BIBO stable from r to y.

The Passivity Theorem is a "Small Phase Theorem":
  S₁ strictly passive ⇒ cos φ₁ > 0 ⇒ |φ₁| < π/2
  S₂ passive ⇒ cos φ₂ ≥ 0 ⇒ |φ₂| ≤ π/2

Proof

S₁ strictly passive and S₂ passive give
  ε(|y₁|_T² + |e₁|_T²) ≤ ⟨y₁, e₁⟩_T + ⟨y₂, e₂⟩_T = ⟨y, r⟩_T
Therefore
  ε(|y₁|_T² + ⟨r₁ − y₂, r₁ − y₂⟩_T) ≤ ⟨y, r⟩_T
or
  ε(|y₁|_T² + |y₂|_T² − 2⟨y₂, r₁⟩_T + |r₁|_T²) ≤ ⟨y, r⟩_T
Hence
  |y|_T² ≤ 2⟨y₂, r₁⟩_T + (1/ε)⟨y, r⟩_T ≤ (2 + 1/ε)|y|_T |r|_T
Let T → ∞ and the result follows.

2 minute exercise: Apply the Passivity Theorem and compare it with
the Nyquist Theorem. What about conservativeness? [Compare the
discussion on the Small Gain Theorem.]
Example—Gain Adaptation

Applications in telecommunication channel estimation and in noise
cancellation, etc.
  Process: y = θ* G(s) u
  Model: ym = θ(t) G(s) u
Adaptation law:
  dθ/dt = −c u(t)[ym(t) − y(t)],  c > 0

Gain Adaptation—Closed-Loop System

[Block diagram: the closed loop rewritten as G(s) in feedback with the
block S from ym − y to (θ − θ*)u, the adaptation law −(c/s) inside S]

Gain Adaptation is BIBO Stable

S is passive (see exercises).
If G(s) is strictly passive, the closed-loop system is BIBO stable.

Simulation of Gain Adaptation

Let G(s) = 1/(s + 1), c = 1, u = sin t, and θ(0) = 0.
[Figure: y and ym versus time; θ(t) converging to θ*]
Storage Function

Consider the nonlinear control system
  ẋ = f(x, u),  y = h(x)
A storage function is a C¹ function V : Rⁿ → R such that
• V(0) = 0 and V(x) ≥ 0, ∀x ≠ 0
• V̇(x) ≤ u^T y, ∀x, u

Remark:
• V(x(T)) represents the stored energy in the system:
  V(x(T)) ≤ ∫₀^T y(t)u(t) dt + V(x(0)),  ∀T > 0
  (stored energy at t = T ≤ absorbed energy + stored energy at t = 0)

Lyapunov vs. Passivity

Lyapunov idea: "Energy is decreasing": V̇ ≤ 0
Passivity idea: "Increase in stored energy ≤ added energy": V̇ ≤ u^T y
A storage function is a generalization of a Lyapunov function.
Storage Function and Passivity

Lemma: If there exists a storage function V for a system
  ẋ = f(x, u),  y = h(x)
with x(0) = 0, then the system is passive.

Proof: For all T > 0,
  ⟨y, u⟩_T = ∫₀^T y(t)u(t) dt ≥ V(x(T)) − V(x(0)) = V(x(T)) ≥ 0

Example—KYP Lemma

Consider an asymptotically stable linear system
  ẋ = Ax + Bu,  y = Cx
Assume there exist positive definite matrices P, Q such that
  A^T P + P A = −Q,  B^T P = C
Consider V = 0.5 x^T P x. Then
  V̇ = 0.5(ẋ^T P x + x^T P ẋ) = 0.5 x^T (A^T P + P A) x + u B^T P x
    = −0.5 x^T Q x + u y < u y,  x ≠ 0
and hence the system is strictly passive. This fact is part of the
Kalman–Yakubovich–Popov lemma.
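A scalar KYP sketch for the example G(s) = 1/(s+1), i.e. A = −1, B = 1, C = 1: trying P = 1 satisfies both conditions, so V = 0.5·P·x² is a storage function and the system is passive (consistent with Re G(iω) = 1/(1+ω²) > 0).

```python
A, B, C = -1.0, 1.0, 1.0
P = 1.0                       # candidate, P > 0
Q = -(A * P + P * A)          # then A^T P + P A = -Q with Q = 2 > 0

assert P > 0 and Q > 0        # Lyapunov-equation condition holds
assert B * P == C             # B^T P = C condition holds
# frequency-domain cross-check: Re G(iw) > 0 for all sampled w
assert all((1.0 / (1j * 0.1 * k + 1.0)).real > 0 for k in range(1000))
```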
Today’s Goal
Lecture 5
– Passivity
– Circle Criterion
– Small Gain Theorem
• analyze stability using
• derive the gain of a system
You should be able to
EL2620
45
2012
EL2620 Nonlinear Control — 2012

Lecture 6

• Describing function analysis

Motivating Example

  G(s) = 4/(s(s + 1)²) and u = sat e give a stable oscillation.
[Figure: time responses of u and y showing a sustained oscillation]
• How can the oscillation be predicted?

Today's Goal

You should be able to
• Derive describing functions for static nonlinearities
• Analyze existence and stability of periodic solutions by describing
  function analysis

A Frequency Response Approach

Nyquist / Bode: A (linear) feedback system will have sustained oscillations
(center) if the loop gain is 1 at the frequency where the phase lag
is −180°.
But, can we talk about the frequency response, in terms of gain and
phase lag, of a static nonlinearity?
Fourier Series

A periodic function u(t) = u(t + T) has a Fourier series expansion
  u(t) = a₀/2 + Σ_{n=1}^{∞} (aₙ cos nωt + bₙ sin nωt)
       = a₀/2 + Σ_{n=1}^{∞} sqrt(aₙ² + bₙ²) sin(nωt + arctan(aₙ/bₙ))
where ω = 2π/T and
  aₙ(ω) = (2/T) ∫₀^T u(t) cos nωt dt,  bₙ(ω) = (2/T) ∫₀^T u(t) sin nωt dt
Note: Sometimes we make the change of variable t → φ/ω.

Key Idea

e(t) = A sin ωt into the nonlinearity gives
  u(t) = Σ_{n=1}^{∞} sqrt(aₙ² + bₙ²) sin(nωt + arctan(aₙ/bₙ))
If |G(inω)| ≪ |G(iω)| for n ≥ 2, then n = 1 suffices, so that
  y(t) ≈ |G(iω)| sqrt(a₁² + b₁²) sin(ωt + arctan(a₁/b₁) + arg G(iω))
That is, we assume all higher harmonics are filtered out by G.

The Fourier Coefficients are Optimal

The finite expansion
  û_k(t) = a₀/2 + Σ_{n=1}^{k} (aₙ cos nωt + bₙ sin nωt)
solves
  min_{û} (2/T) ∫₀^T (u(t) − û_k(t))² dt

Definition of Describing Function

If G is low pass and a₀ = 0, then
  N(A, ω) = (b₁(ω) + i a₁(ω))/A
and the first harmonic
  û₁(t) = |N(A, ω)| A sin(ωt + arg N(A, ω))
can be used instead of u(t) to analyze the system.
Amplitude dependent gain and phase shift!
Describing Function for a Relay

Let e = A sin φ and u = H sgn e (relay with amplitude H). Then
  a₁ = (1/π) ∫₀^{2π} u(φ) cos φ dφ = 0
  b₁ = (1/π) ∫₀^{2π} u(φ) sin φ dφ = (2/π) ∫₀^{π} H sin φ dφ = 4H/π
The describing function for a relay is thus
  N(A) = (b₁ + i a₁)/A = 4H/(πA)
[Figure: relay input e and square-wave output u over one period]

Existence of Periodic Solutions

[Block diagram: e → f(·) → u → G(s) → y, with −y fed back]
Proposal: sustained oscillations if the loop gain is 1 and the phase
lag is −180°:
  G(iω)N(A) = −1  ⇔  G(iω) = −1/N(A)
The intersections of the curves G(iω) and −1/N(A)
give ω and A for a possible periodic solution.
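The Fourier-coefficient computation above can be checked numerically: integrate b₁ for the relay with a midpoint rule and compare with 4H/(πA) (a₁ = 0 by symmetry).

```python
import math

def relay_df(A, H=1.0, n=20000):
    # first Fourier sine coefficient of u = H*sgn(A sin(phi)), divided by A
    b1 = 0.0
    for k in range(n):
        phi = 2.0 * math.pi * (k + 0.5) / n
        u = H if math.sin(phi) >= 0.0 else -H
        b1 += u * math.sin(phi) * (2.0 * math.pi / n) / math.pi
    return b1 / A

for A in (0.5, 1.0, 2.0):
    assert abs(relay_df(A) - 4.0 / (math.pi * A)) < 1e-3
```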
Periodic Solutions in Relay System

  G(s) = 3/(s + 1)³ with feedback u = −sgn y
No phase lag in f(·); arg G(iω) = −π for ω = √3 ≈ 1.7.
  G(i√3) = −3/8 = −1/N(A) = −πA/4  ⇒  A = 12/(8π) ≈ 0.48
[Figure: Nyquist plot of G(iω) and the curve −1/N(A) on the negative real axis]
The prediction via the describing function agrees very well with the
true oscillations. Note that G filters out almost all higher-order harmonics.

Odd Static Nonlinearities

Assume f(·) and g(·) are odd (i.e. f(−e) = −f(e)) static
nonlinearities with describing functions N_f and N_g. Then,
• Im N_f(A, ω) = 0
• N_f(A, ω) = N_f(A)
• N_{αf}(A) = αN_f(A)
• N_{f+g}(A) = N_f(A) + N_g(A)
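The relay example can also be checked by simulation. Below is a simple Euler sketch of G(s) = 3/(s+1)³ with u = −sgn y (three cascaded first-order lags); the measured amplitude should land near the describing-function prediction 0.48.

```python
dt = 1e-3
x1 = x2 = x3 = 0.0
ys = []
for k in range(60000):                  # simulate 60 s
    y = 3.0 * x3
    u = -1.0 if y > 0.0 else 1.0        # relay feedback u = -sgn(y)
    x1 += dt * (-x1 + u)                # chain of three lags 1/(s+1)
    x2 += dt * (-x2 + x1)
    x3 += dt * (-x3 + x2)
    ys.append(y)

amp = max(abs(v) for v in ys[40000:])   # amplitude after transients
assert 0.38 < amp < 0.58                # close to the DF prediction 0.48
```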
Describing Function for a Saturation

Let e(t) = A sin ωt = A sin φ. First set the saturation level H equal
to the break point D. Then for φ ∈ (0, π)
  u(φ) = A sin φ, φ ∈ (0, φ₀) ∪ (π − φ₀, π)
  u(φ) = D,       φ ∈ (φ₀, π − φ₀)
where φ₀ = arcsin(D/A). Hence
  a₁ = (1/π) ∫₀^{2π} u(φ) cos φ dφ = 0
  b₁ = (1/π) ∫₀^{2π} u(φ) sin φ dφ = (4/π) ∫₀^{π/2} u(φ) sin φ dφ
     = (4A/π) ∫₀^{φ₀} sin² φ dφ + (4D/π) ∫_{φ₀}^{π/2} sin φ dφ
     = (A/π)(2φ₀ + sin 2φ₀)
Hence, if H = D, then
  N(A) = (1/π)(2φ₀ + sin 2φ₀)
If H ≠ D, then the rule N_{αf}(A) = αN_f(A) gives
  N(A) = (H/(Dπ))(2φ₀ + sin 2φ₀)
[Figure: N(A) for H = D = 1; N(A) = 1 for A ≤ D and decreasing for A > D]
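The closed-form saturation describing function above can be verified against a direct numerical Fourier integral of the saturated sine:

```python
import math

def sat_df_formula(A, D=1.0):
    # slide formula (H = D case); N(A) = 1 when the saturation is inactive
    if A <= D:
        return 1.0
    phi0 = math.asin(D / A)
    return (2.0 * phi0 + math.sin(2.0 * phi0)) / math.pi

def sat_df_numeric(A, D=1.0, n=20000):
    # midpoint-rule Fourier coefficient b1 of sat(A sin(phi)), divided by A
    b1 = 0.0
    for k in range(n):
        phi = 2.0 * math.pi * (k + 0.5) / n
        u = max(-D, min(D, A * math.sin(phi)))
        b1 += u * math.sin(phi) * (2.0 * math.pi / n) / math.pi
    return b1 / A

for A in (0.5, 1.5, 3.0, 10.0):
    assert abs(sat_df_formula(A) - sat_df_numeric(A)) < 1e-3
```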
Stability of Periodic Solutions

Assume that G(s) is stable.
• If G(Ω) encircles the point −1/N(A), then the oscillation
  amplitude is increasing.
• If G(Ω) does not encircle the point −1/N(A), then the
  oscillation amplitude is decreasing.

Compare with the Nyquist Theorem: assume that G is stable, and K is
a positive gain.
• If G(iω) encircles the point −1/K, then the closed-loop system
  is unstable (growing amplitude oscillations).
• If G(iω) does not encircle the point −1/K, then the closed-loop
  system is stable (damped oscillations).
• If G(iω) goes through the point −1/K, the closed-loop system
  displays sustained oscillations.

An Unstable Periodic Solution

An intersection with amplitude A₀ is unstable if A < A₀ leads to
decreasing amplitude and A > A₀ leads to increasing amplitude.

5 minute exercise: What oscillation amplitude and frequency does the
describing function analysis predict for the "Motivating Example"?
Stable Periodic Solution in Relay System

  G(s) = (s + 10)²/(s + 1)³ with feedback u = −sgn y
gives one stable and one unstable limit cycle. The leftmost
intersection of G(iω) and −1/N(A) corresponds to the stable one.
[Figure: Nyquist plot of G(iω) crossing the curve −1/N(A) twice]

Automatic Tuning of PID Controller

Period T and amplitude A of the relay feedback limit cycle can be used for
autotuning: during the experiment a relay replaces the PID controller,
and the measured oscillation determines the controller parameters.
[Figure: relay experiment; process output y and relay output u oscillating]

Describing Function for a Quantizer

Let e(t) = A sin ωt = A sin φ. Then for φ ∈ (0, π)
  u(φ) = 0, φ ∈ (0, φ₀)
  u(φ) = 1, φ ∈ (φ₀, π − φ₀)
where φ₀ = arcsin(D/A). Hence
  a₁ = (1/π) ∫₀^{2π} u(φ) cos φ dφ = 0
  b₁ = (1/π) ∫₀^{2π} u(φ) sin φ dφ = (4/π) ∫_{φ₀}^{π/2} sin φ dφ
     = (4/π) cos φ₀ = (4/π) sqrt(1 − D²/A²)
so
  N(A) = 0,                        A < D
  N(A) = (4/(πA)) sqrt(1 − D²/A²), A ≥ D
Plot of Describing Function for the Quantizer

[Figure: N(A) for D = 1]
Notice that N(A) ≈ 1.3/A for large amplitudes.

Accuracy of Describing Function Analysis

Control loop with friction F = sgn ẏ:
[Block diagram: y_ref → Σ → C → Σ → G → y, with the friction force F in an inner feedback path]
Corresponds to
  G/(1 + GC) = 3 s(s − z)/(s³ + 2s² + 2s + 1)
with feedback u = −sgn y. The oscillation depends on the zero at s = z.
DF analysis gives period times and amplitudes (T, A) = (11.4, 1.00) for
z = 1/3 and (17.3, 0.23) for z = 4/3, respectively.
[Figure: simulated outputs y for z = 1/3 and z = 4/3]
Accurate results are obtained only if y is close to sinusoidal!

Describing Function Pitfalls

Describing function analysis can give erroneous results:
• A DF may predict a limit cycle even if one does not exist.
• A limit cycle may exist even if the DF does not predict it.
• The predicted amplitude and frequency are only approximations
  and can be far from the true values.
Harmonic Balance

  e(t) = A sin ωt → f(·) → u(t)
A few more Fourier coefficients in the truncation
  û_k(t) = a₀/2 + Σ_{n=1}^{k} (aₙ cos nωt + bₙ sin nωt)
may give a much better result. The describing function corresponds to
k = 1 and a₀ = 0.
Example: f(x) = x² gives u(t) = A²(1 − cos 2ωt)/2. Hence, by also
considering a₀ and a₂ we get the exact result.

2 minute exercise: What is N(A) for f(x) = x²?

Analysis of Oscillations—A Summary

Time-domain:
• Poincaré maps and Lyapunov functions
• Rigorous results but only for simple examples
• Hard to use for large problems
Frequency-domain:
• Describing function analysis
• Approximate results
• Powerful graphical methods

Today's Goal

You should be able to
• Derive describing functions for static nonlinearities
• Analyze existence and stability of periodic solutions by describing
  function analysis
EL2620 Nonlinear Control — 2012

Lecture 7

• Compensation for saturation (anti-windup)
• Friction models
• Compensation for friction

The Problem with a Saturating Actuator

[Block diagram: r → Σ → e → PID → v → saturation → u → G → y]
• The feedback path is broken when u saturates ⇒ open-loop behavior!
• Leads to problems when the system and/or the controller are unstable
  – Example: I-part in PID
Recall: C_PID(s) = K(1 + 1/(Tᵢs) + T_d s)

Today's Goal

You should be able to analyze and design
• Anti-windup for PID and state-space controllers
• Compensation for friction

Example—Windup in PID Controller

[Figure: step responses y and control signals u]
PID controller without (dashed) and with (solid) anti-windup.
Anti-Windup for PID Controller

[Block diagrams: PID with tracking anti-windup; the saturation error
e_s = u − v is fed back to the integrator 1/s through the gain 1/T_t]
Anti-windup (a) with the actuator output available and (b) without,
using an actuator model.

Anti-Windup for Observer-Based State Feedback Controller

Observer:
  x̂˙ = Ax̂ + B sat v + K(y − Cx̂)
  v = L(x_m − x̂)
x̂ is an estimate of the process state, x_m the desired (model) state.
Need an actuator model if sat v is not measurable.
Anti-Windup is Based on Tracking

When the control signal saturates, the integration state in the
controller tracks the proper state.
The tracking time T_t is the design parameter of the anti-windup.
Common choices of T_t:
• T_t = Tᵢ
• T_t = sqrt(Tᵢ T_d)
Remark: If 0 < T_t ≪ Tᵢ, then the integrator state becomes sensitive
to the instances when e_s ≠ 0:
  I(t) = ∫₀^t (K e(τ)/Tᵢ + e_s(τ)/T_t) dτ ≈ (1/T_t) ∫₀^t e_s(τ) dτ

Anti-Windup for General State-Space Controller

State-space controller:
  ẋ_c = F x_c + G y
  u = C x_c + D y
Windup possible if F is unstable and u saturates.
Idea: Rewrite the representation of the control law from (a) to (b) with the
same input–output relation, but where the unstable S_A is replaced by
a stable S_B. If u saturates, then (b) behaves better than (a).
Mimic the observer-based controller:
  ẋ_c = F x_c + G y + K(u − C x_c − D y)
      = (F − KC) x_c + (G − KD) y + K u
Choose K such that F₀ = F − KC has desired (stable) eigenvalues.
Then use the controller
  ẋ_c = F₀ x_c + G₀ y + K u
  u = sat(C x_c + D y)
where G₀ = G − KD.
[Figure: state-space controller without and with anti-windup]

Controllers with "Stable" Zeros

Most controllers are minimum phase, i.e., have zeros strictly in the LHP.
Thus, choose the "observer" gain
  K = G/D  ⇒  F − KC = F − GC/D
and the eigenvalues of the "observer"-based controller become equal
to the zeros of F(s) when u saturates:
  u = 0  ⇒  ẋ_c = (F − GC/D) x_c   (zero dynamics)
           y = −(C/D) x_c
Note that this implies G − KD = 0 in the figure on the previous
slide, and we thus obtain P-feedback with gain D under saturation.

Controller F(s) with "Stable" Zeros

Let D = lim_{s→∞} F(s) and consider the feedback implementation
  u = sat(D[y − (1/F(s) − 1/D) u])
It is easy to show that the transfer function from y to u with no
saturation equals F(s)!
If the transfer function (1/F(s) − 1/D) in the feedback loop is
stable (stable zeros) ⇒ no stability problems in case of saturation.
Internal Model Control (IMC)

[Block diagram: r → Σ → Q(s) → u → G(s) → y, with the model Ĝ(s)
driven by u and the model error y − ŷ fed back]
IMC: apply feedback only when the system G and the model Ĝ differ!
Assume G stable. Note: feedback from the model error y − ŷ.
Design: assume Ĝ ≈ G and choose Q stable with Q ≈ G^{−1}.

Example

Choose
  Ĝ(s) = 1/(T₁s + 1),  Q = (T₁s + 1)/(τs + 1),  τ < T₁
Gives the controller
  F = Q/(1 − QĜ) = (T₁s + 1)/(τs) = (T₁/τ)(1 + 1/(T₁s))
⇒ PI controller!

IMC with Static Nonlinearity

Include the nonlinearity (e.g. the saturation v = sat u) in the model:
  u = −Q(y − Ĝv)
and choose Q stable with Q ≈ G^{−1}.

Example (cont'd)

Assume r = 0 and abuse of Laplace transform notation.
If |u| < u_max (v = u): PI controller
  u = −((T₁s + 1)/(τs)) y
If |u| > u_max (v = ±u_max):
  u = −((T₁s + 1)/(τs + 1)) y ± (u_max/(τs + 1))
No integration. An alternative way to implement anti-windup!
Other Anti-Windup Solutions

The solutions above are all based on tracking. Other solutions include:
• Tune the controller to avoid saturation
• Don't update controller states at saturation
• Conditionally reset the integration state to zero
• Apply optimal control theory (Lecture 12)

Bumpless Transfer

Another application of the tracking idea is in the switching between
automatic and manual control modes.
[Block diagram: PID with anti-windup and bumpless transfer; manual
mode integrator with gain 1/T_m and tracking gains 1/T_r]
Note the incremental form of the manual control mode (u̇ ≈ u_c/T_m).

Friction

Friction is present almost everywhere:
• Often bad:
  – Friction in valves and other actuators
• Sometimes good:
  – Friction in brakes
• Sometimes too small:
  – Earthquakes
Problems:
• How to model friction?
• How to detect friction in control loops?
• How to compensate for friction?

Stick-Slip Motion

[Figure: mass pulled via a spring; position x, pulling force F_p and
friction force F_f versus time show repeated stick-slip cycles]
Position Control of Servo with Friction

[Block diagram: x_r → Σ → PID → u → Σ → 1/(ms) → v → 1/s → x,
with the friction force F depending on the velocity v]
[Figure: x_r, x, v and u versus time; a limit cycle appears]

5 minute exercise: Which are the signals in the previous plots?

Stribeck Effect

Friction increases with decreasing velocity (for low velocities).
Stribeck (1902).
[Figure: steady-state friction (Nm) versus velocity (rad/sec), Joint 1]

Classical Friction Models

[Figure: Coulomb friction; Coulomb + viscous friction; stiction +
Coulomb + viscous friction; Stribeck curve]
Advanced models capture various friction phenomena better.

Integral Action

[Block diagram: x_r → PID → Σ → 1/(ms) → v → 1/s → x, friction F]
• Integral action compensates for any external disturbance
• Works if the friction force changes slowly (v(t) ≈ const)
• If the friction force changes quickly, then large integral action
  (small Tᵢ) is necessary. May lead to stability problems.
Modified Integral Action

Modify the integral part to
  I = (K/Tᵢ) ∫₀^t ê(τ) dτ
where ê(t) is e(t) passed through a dead zone of width η
(ê = 0 for |e| < η).
Advantage: Avoids that a small static error introduces an oscillation.
Disadvantage: The error won't go to zero.

Friction Compensation

• Lubrication
• Integral action
• Dither signal
• Model-based friction compensation
• Adaptive friction compensation
• The Knocker

Dither Signal

Avoid sticking at v = 0 (where there is high friction)
by adding a high-frequency mechanical vibration (dither).
Cf. the mechanical maze puzzle (labyrintspel).

Adaptive Friction Compensation

Coulomb friction model: F = a sgn v
Friction estimator:
  F̂ = â sgn v
  â = z − km|v|
  ż = k u_PID sgn v
[Block diagram: u = u_PID + F̂ applied to 1/(ms) with friction F;
the friction estimator runs in parallel]
Model-Based Friction Compensation

For a process with friction F:
  mẍ = u − F
use the control signal
  u = u_PID + F̂
where u_PID is the regular control signal and F̂ an estimate of F.
Possible if:
• An estimate F̂ ≈ F is available
• u and F apply at the same point

Adaptation converges: e = a − â → 0 as t → ∞.
Proof:
  de/dt = −dâ/dt = −dz/dt + km d|v|/dt
        = −k u_PID sgn v + k m v̇ sgn v
        = −k sgn v (u_PID − m v̇)
        = −k sgn v (F − F̂) = −k(a − â) = −k e
Remark: Careful with d|v|/dt at v = 0.
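An Euler simulation sketch of the adaptive Coulomb friction compensator above (m v̇ = u − a sgn v, u = u_PID + â sgn v). The velocity-loop values are hypothetical and chosen so that v stays positive, i.e. sgn v = 1 throughout, which keeps the estimator well defined.

```python
m, a, k, dt = 1.0, 1.0, 2.0, 1e-3   # mass, true friction level, adaptation gain
v, z = 0.5, 0.0                     # initial velocity and estimator state
vmin = v
for _ in range(10000):              # simulate 10 s
    upid = 2.0 * (1.0 - v)          # P-control toward vref = 1
    ahat = z - k * m * v            # ahat = z - k*m*|v| (v > 0 here)
    u = upid + ahat                 # friction compensation (sgn v = 1)
    vdot = (u - a) / m              # m*vdot = u - a*sgn(v)
    z += dt * (k * upid)            # zdot = k * uPID * sgn(v)
    v += dt * vdot
    vmin = min(vmin, v)

ahat = z - k * m * v
assert vmin > 0.0                   # the sgn v = 1 assumption held
assert abs(ahat - a) < 1e-2         # estimate converged: ahat -> a
```

This matches the proof on the slide: with v ≠ 0 the estimation error obeys de/dt = −ke, so it decays exponentially regardless of what u_PID does.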
Detection of Friction in Control Loops

• Friction is due to wear and increases with time
• Q: When should valves be maintained?
• Idea: Monitor loops automatically and estimate friction
[Figures: velocity v and reference v_ref under P-control, P-control
with adaptive friction compensation, and PI control; process output y
and control signal u for a loop with valve friction]
Horch: PhD thesis (2000) and patent.

The Knocker

Coulomb friction compensation and square wave dither.
[Figure: typical control signal u with knocker pulses]
Hägglund: Patent and Innovation Cup winner.

Today's Goal

You should be able to analyze and design
• Anti-windup for PID, state-space, and polynomial controllers
• Compensation for friction

Next Lecture

• Backlash
• Quantization
EL2620 Nonlinear Control — 2012

Lecture 8

• Backlash
• Quantization

Linear and Angular Backlash

[Figure: linear backlash with gap 2D between x_in and x_out (forces
F_in, F_out); angular backlash between θ_in (torque T_in) and θ_out
(torque T_out)]

Backlash

Backlash (glapp) is
• present in most mechanical and hydraulic systems
• increasing with wear
• necessary for a gearbox to work in high temperature
• bad for control performance
• sometimes inducing oscillations

Today's Goal

You should be able to analyze and design for
• Backlash
• Quantization

Backlash Model

  ẋ_out = ẋ_in if in contact, 0 otherwise
where "in contact" means |x_out − x_in| = D and ẋ_in(x_in − x_out) > 0.
• Multivalued output; the current output depends on the history.
  Thus, backlash is a dynamic phenomenon.
Alternative Model

[Figure: dead zone of width D in the transmitted force (torque)]
Not equivalent to the "Backlash Model" above.

Effects of Backlash

P-control of motor angle with a gearbox having backlash with D = 0.2:
  θ_ref → Σ → K → u → 1/(1 + sT) → θ̇_in → 1/s → θ_in → backlash → θ_out
[Figures: responses without backlash and with backlash for
K = 0.25, 1, 4; the error θ_in − θ_out]
• Oscillations for K = 4 but not for K = 0.25 or K = 1. Why?
• Note that the amplitude will decrease with decreasing D,
  but never vanish.
Describing Function for Backlash
If A < D then N(A) = 0, else

Re N(A) = (1/π) [ π/2 + arcsin(1 − 2D/A) + 2(1 − 2D/A) √( (D/A)(1 − D/A) ) ]
Im N(A) = −(4D/(πA)) (1 − D/A)

Describing Function Analysis
K = 4, D = 0.2:
[Nyquist diagram of the linear part for K = 0.25, 1, 4, together with −1/N(A)]
DF analysis: intersection at A = 0.33, ω = 1.24
Simulation: A = 0.33, ω = 2π/5.0 ≈ 1.26
[Plot: input and output of the backlash]
The describing function predicts the oscillation well.
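The describing function above is easy to evaluate numerically; this is a minimal sketch (the function name is my own, not from the course code).

```python
import math

def backlash_df(A, D):
    """Describing function N(A) of unit-slope backlash with play D."""
    if A < D:
        return 0.0
    re = (1 / math.pi) * (math.pi / 2 + math.asin(1 - 2 * D / A)
                          + 2 * (1 - 2 * D / A) * math.sqrt((D / A) * (1 - D / A)))
    im = -(4 * D / (math.pi * A)) * (1 - D / A)
    return complex(re, im)

D = 0.2
N = backlash_df(0.33, D)
print(N)                          # DF at the predicted oscillation amplitude
print(abs(backlash_df(1e6, D)))  # tends to 1: the play is negligible at large amplitude
```

Note that |N(A)| → 1 and Im N(A) → 0 as A → ∞, consistent with −1/N(A) → −1.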
Stability Proof for Backlash System
[Block diagram: θref → (−) → e → K → u → 1/(1 + sT) → θ̇in → 1/s → θin → backlash → θout]
Do there exist conditions that guarantee stability (of the steady-state)?
Note that θin and θout will not converge to zero.
Q: What about θ̇in and θ̇out?
−1/N(A) for D = 0.2:
[Plot of −1/N(A) in the complex plane]
Note that −1/N(A) → −1 as A → ∞ (physical interpretation?)
The describing function method is only approximate.
Backlash Compensation
• Mechanical solutions
• Deadzone
• Linear controller design
• Backlash inverse

Rewrite the system:
[Block diagram: G(s) in feedback with the block BL]
The block “BL” satisfies
θ̇out = θ̇in in contact, and θ̇out = 0 otherwise.
Homework 2
[Block diagram: G(s) in feedback with BL]

Linear Controller Design
Introduce phase lead compensation:
[Block diagram: θref → (−) → e → K(1 + sT2)/(1 + sT1) → u → 1/(1 + sT) → θ̇in → 1/s → θin → BL → θout]
Analyze this backlash system with input–output stability results:
Small Gain Theorem: BL has gain γ(BL) = 1
Passivity Theorem: BL is passive
Circle Criterion: BL contained in the sector [0, 1]
F(s) = K(1 + sT2)/(1 + sT1) with T1 = 0.5, T2 = 2.0:
[Nyquist diagrams with and without the lead filter; simulations of u and y with and without the filter]
Oscillation removed!

The backlash inverse is
xin(t) = u + D̂ if u(t) > u(t−)
xin(t) = u − D̂ if u(t) < u(t−)
xin(t) = xin(t−) otherwise
• If D̂ = D then perfect compensation (xout = u)
• If D̂ < D then under-compensation (decreased backlash)
• If D̂ > D then over-compensation (may give oscillation)
Backlash Inverse
Idea: Let xin jump ±2D when ẋout should change sign.
[Figure: backlash inverse characteristic from u to xin, followed by the backlash from xin to xout]

Example—Perfect Compensation
Motor with backlash on the input in feedback with a PD-controller:
[Simulation: u, xin and xout]
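The perfect-compensation case can be checked in a few lines: applying the inverse with D̂ = D in series with the play operator reproduces the command exactly. This sketch (and its function names) is mine, not course code.

```python
def backlash(xin, xout, D):
    """One step of the backlash (play) model."""
    return min(max(xout, xin - D), xin + D)

def backlash_inverse(u, u_prev, xin_prev, Dhat):
    """Jump to u + Dhat / u - Dhat when u changes direction, else hold."""
    if u > u_prev:
        return u + Dhat
    if u < u_prev:
        return u - Dhat
    return xin_prev

D = 0.2
u_seq = [0.0, 0.1, 0.3, 0.2, -0.1, -0.1, 0.4]
u_prev, xin, xout, track = u_seq[0], 0.0, 0.0, []
for u in u_seq[1:]:
    xin = backlash_inverse(u, u_prev, xin, Dhat=D)  # Dhat = D: perfect case
    xout = backlash(xin, xout, D)
    u_prev = u
    track.append((u, xout))
print(track)  # xout equals u at every step
```

Replacing `Dhat=D` with a smaller or larger value reproduces the under- and over-compensation cases.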
Example—Under-Compensation
[Simulation with D̂ < D]

Example—Over-Compensation
[Simulation with D̂ > D]

Quantization
[Diagram: u → D/A → G → A/D → y; quantizer characteristic with step ∆]
• Quantization in A/D and D/A converters
• Quantization of parameters
• Roundoff, overflow, underflow in computations
• What precision is needed in A/D and D/A converters? (8–14 bits?)
• What precision is needed in computations? (8–64 bits?)

Linear Model of Quantization
Model the quantization error as a uniformly distributed stochastic signal e, independent of u, with
Var(e) = ∫_{−∞}^{∞} e² fe(e) de = ∫_{−∆/2}^{∆/2} (e²/∆) de = ∆²/12
• May be reasonable if ∆ is small compared to the variations in u
• But added noise can never affect stability, while quantization can!
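The ∆²/12 noise model is easy to verify empirically for a round-to-nearest quantizer; a minimal sketch, assuming the input varies over many quantization steps.

```python
import random

def quantize(u, delta):
    return delta * round(u / delta)

random.seed(0)
delta = 0.1
errs_q = [quantize(u, delta) - u for u in (random.uniform(-5, 5) for _ in range(200000))]
var = sum(e * e for e in errs_q) / len(errs_q)
print(var, delta ** 2 / 12)  # empirical variance vs the delta^2/12 model
```

The match is good here exactly because ∆ is small relative to the spread of u; the model says nothing about stability, which is the point of the last bullet above.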
Describing Function for Deadzone Relay
Lecture 6 ⇒
N(A) = 0 for A < D
N(A) = (4/(πA)) √(1 − D²/A²) for A > D

Describing Function for Quantizer
[Quantizer Q with step ∆; plot of N(A) versus A/∆]
N(A) = 0 for A < ∆/2
N(A) = (4∆/(πA)) Σ_{i=1}^{n} √( 1 − ((2i − 1)∆/(2A))² ) for (2n − 1)∆/2 < A < (2n + 1)∆/2
• The maximum value is 4/π ≈ 1.27, attained at A ≈ 0.71∆.
• Predicts oscillation if the Nyquist curve intersects the negative real axis to the left of −π/4 ≈ −0.79
• Controller with gain margin > 1/0.79 = 1.27 avoids oscillation
• Reducing ∆ reduces only the oscillation amplitude

Example—Motor with P-controller
Quantization of the process output with ∆ = 0.2:
[Diagram: r → (−) → K → ZOH (1 − e^{−s})/s → G(s) → y, with quantizer Q in the feedback path]
The Nyquist curve of the linear part (K & ZOH & G(s)) intersects the negative real axis at −0.5K:
Stability for K < 2 without Q.
Stable oscillation predicted for K > 2/1.27 = 1.57 with Q.
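The quantizer describing function and its stated maximum can be checked numerically; a minimal sketch with my own function name.

```python
import math

def quantizer_df(A, delta):
    """Describing function of a quantizer with step delta."""
    if A < delta / 2:
        return 0.0
    n = int(math.floor(A / delta + 0.5))  # (2n-1)delta/2 < A < (2n+1)delta/2
    s = sum(math.sqrt(max(0.0, 1 - ((2 * i - 1) * delta / (2 * A)) ** 2))
            for i in range(1, n + 1))
    return 4 * delta / (math.pi * A) * s

delta = 1.0
grid = [0.01 * k for k in range(1, 500)]
Nmax, Amax = max((quantizer_df(A, delta), A) for A in grid)
print(Nmax, Amax)  # maximum about 4/pi = 1.27 near A = 0.71*delta
```

The `max(0.0, ...)` guard only protects against floating-point round-off at segment boundaries.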
Example—1/s² & 2nd-Order Controller
[Block diagram: controller F and plant G with A/D and D/A converters; Nyquist diagram of the loop]
[Simulations of the output for K = 0.8, 1.2, 1.6, quantized and unquantized]

Quantization ∆ = 0.02 in the A/D converter:
Describing function: Ay = 0.01 and T = 39
Simulation: Ay = 0.01 and T = 28

Quantization ∆ = 0.01 in the D/A converter:
Describing function: Au = 0.005 and T = 39
Simulation: Au = 0.005 and T = 39
Quantization Compensation
• Improve accuracy (larger word length)
• Avoid unstable controllers and gain margins < 1.3
• Use the tracking idea from anti-windup to improve the D/A converter
• Use analog dither, oversampling, and a digital lowpass filter to improve the accuracy of the A/D converter
[Diagram: digital controller with D/A and A/D converters, analog filter, and decimation]
Today’s Goal
You should now be able to analyze and design for
• Backlash
• Quantization
EL2620 Nonlinear Control
Lecture 9
• Nonlinear control design based on high-gain control

History of the Feedback Amplifier
Black, Bode, and Nyquist at Bell Labs 1920–1950.
New York–San Francisco communication link 1914.
High signal amplification with low distortion was needed.
Feedback amplifiers were the solution!
[Diagram: r → (−) → k → f(·) → y]
Today’s Goal
You should be able to analyze and design
• High-gain control systems
• Sliding mode controllers

Linearization Through High Gain Feedback
[Diagram: r → (−) → e → K → f(·) → y, with f in the sector α1 e ≤ f(e) ≤ α2 e]
If α1 ≤ f(e)/e ≤ α2, then
(α1 K)/(1 + α1 K) · r ≤ y ≤ (α2 K)/(1 + α2 K) · r
Choosing K ≫ 1/α1 yields y ≈ r.
Remark: How to Obtain f⁻¹ from f using Feedback
[Diagram: v → (−) → k/s → u → f(·), with f(u) fed back]
u̇ = k(v − f(u))
If k > 0 is large and df/du > 0, then u̇ → 0 and
0 = k(v − f(u)) ⇔ f(u) = v ⇔ u = f⁻¹(v)

A Word of Caution
Nyquist: high loop gain may induce oscillations (due to dynamics)!
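The integrator trick above is easy to try numerically: for a monotone nonlinearity, the high-gain integrator drives f(u) to v, so u converges to f⁻¹(v). A sketch under my own naming, with an example nonlinearity chosen here for illustration.

```python
def invert_by_feedback(f, v, k=50.0, u0=0.0, dt=1e-4, T=2.0):
    """Integrator du = k*(v - f(u)); for large k and df/du > 0, u -> f^{-1}(v)."""
    u = u0
    for _ in range(int(T / dt)):
        u += dt * k * (v - f(u))
    return u

f = lambda u: u ** 3 + u   # monotone nonlinearity, so the inverse exists
u = invert_by_feedback(f, v=2.0)
print(u, f(u))  # u approaches 1, since 1^3 + 1 = 2
```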
Inverting Nonlinearities
Compensation of a static nonlinearity through inversion:
[Diagram: controller F(s) → f̂⁻¹(·) → f(·) → G(s), with feedback]
Should be combined with feedback as in the figure!

Example—Linearization of Static Nonlinearity
Linearization of f(u) = u² through feedback:
[Diagram: r → (−) → e → K → u → f(·) → y]
The case K = 100 is shown in the plot: y(r) ≈ r.
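The closed-loop map of the example satisfies y = f(K(r − y)); solving this fixed-point equation (here by bisection) shows how close y stays to r for K = 100. A sketch of mine, not course code.

```python
def closed_loop_output(r, K, f):
    """Solve y = f(K*(r - y)) by bisection on [0, r] (f increasing, f(0) = 0)."""
    lo, hi = 0.0, r
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if mid - f(K * (r - mid)) < 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

f = lambda u: u * u
results = {r: closed_loop_output(r, 100.0, f) for r in [0.2, 0.5, 0.9]}
print(results)  # y stays close to r despite the square-law nonlinearity
```

Bisection is valid here because y − f(K(r − y)) is strictly increasing in y on [0, r].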
The Sensitivity Function S
[Diagram: r → (−) → F → G → y]
The closed-loop system is
Gcl = G/(1 + GF)
Small perturbations dG in G give
dGcl/dG = 1/(1 + GF)²  ⇒  dGcl/Gcl = (1 + GF)⁻¹ dG/G = S dG/G
with S = (1 + GF)⁻¹.
S is the closed-loop sensitivity to open-loop perturbations.

Example—Distortion Reduction
Let G = 1000, with distortion dG/G = 0.1. Choose K = 0.1 ⇒ S = (1 + GK)⁻¹ ≈ 0.01. Then
dGcl/Gcl = S dG/G ≈ 0.001
100 feedback amplifiers in series give total amplification
Gtot = (Gcl)¹⁰⁰ ≈ 10¹⁰⁰
and total distortion
dGtot/Gtot = (1 + 10⁻³)¹⁰⁰ − 1 ≈ 0.1
Transcontinental Communication Revolution

Year   Channels   Loss (dB)   No amp's
1914   1          60          3–6
1923   1–4        150–400     6–20
1938   16         1000        40
1941   480        30000       600

The feedback amplifier was patented by Black 1937.

Distortion Reduction via Feedback
[Diagram: several links in series, each an amplifier K with nonlinearity f(·) and local feedback]
The feedback reduces distortion in each link.
Several links give distortion-free high gain.
Sensitivity and the Circle Criterion
[Diagram: r → (−) → GF → y, with nonlinearity f(·)]
Consider a circle C := {z ∈ C : |z + 1| = r}, r ∈ (0, 1).
GF(iω) stays outside C if
|1 + GF(iω)| > r ⇔ |S(iω)| ≤ r⁻¹
Then, the Circle Criterion gives stability if
1/(1 + r) ≤ f(y)/y ≤ 1/(1 − r)

Small Sensitivity Allows Large Uncertainty
k1 = 1/(1 + r), k2 = 1/(1 − r)
[Figure: sector k1 y ≤ f(y) ≤ k2 y]
If |S(iω)| is small, we can choose r large (close to one).
This corresponds to a large sector for f(·).
Hence, |S(iω)| small implies low sensitivity to nonlinearities.

On–Off Control
On–off control is the simplest control strategy.
[Diagram: r → (−) → e → relay → u → G(s) → y]
The relay corresponds to infinitely high gain at the switching point.
Common in temperature control, level control etc.

A Control Design Idea
ẋ = Ax + Bu, u ∈ [−1, 1]
Assume V(x) = xᵀPx, P = Pᵀ > 0, represents the energy of the system. Choose u such that V decays as fast as possible:
V̇ = xᵀ(AᵀP + PA)x + 2BᵀPx u
is minimized by u = −sgn(BᵀPx). (Notice that V̇ = a + bu, i.e. just a segment of a line in u, −1 < u < 1. Hence the lowest value is at an endpoint, depending on the sign of the slope b.)
The closed loop then switches between ẋ = Ax + B and ẋ = Ax − B on the surface BᵀPx = 0.
Sliding Modes
ẋ = f⁺(x) for σ(x) > 0, and ẋ = f⁻(x) for σ(x) < 0.
[Figure: vector fields f⁺ and f⁻ on either side of σ(x) = 0]
The sliding surface is S = {x : σ(x) = 0}.
The sliding mode is ẋ = αf⁺ + (1 − α)f⁻, where α satisfies αfn⁺ + (1 − α)fn⁻ = 0 for the normal projections of f⁺, f⁻.

Example
ẋ = [0 −1; 1 −1] x + [1; 1] u = Ax + Bu with
u = −sgn σ(x) = −sgn x2 = −sgn(Cx)
is equivalent to
ẋ = Ax − B for x2 > 0, and ẋ = Ax + B for x2 < 0.
For small x2 we have
ẋ2(t) ≈ x1 − 1, dx2/dx1 ≈ 1 − x1, for x2 > 0
ẋ2(t) ≈ x1 + 1, dx2/dx1 ≈ 1 + x1, for x2 < 0
This implies the following behavior:
[Phase-plane sketch: trajectories reach x2 = 0 for −1 < x1 < 1 and slide along it]

Sliding Mode Dynamics
The dynamics along the sliding surface S are obtained by setting u = ueq ∈ [−1, 1] such that x(t) stays on S.
ueq is called the equivalent control.
Example (cont’d)
Finding u = ueq such that σ̇(x) = ẋ2 = 0 on σ(x) = x2 = 0 gives
0 = ẋ2 = x1 − x2 + ueq = x1 + ueq  ⇒  ueq = −x1
(using x2 = 0). Insert this in the equation for ẋ1:
ẋ1 = −x2 + ueq = −x1  (x2 = 0)
which gives the dynamics on the sliding surface S = {x : x2 = 0}.

Deriving the Equivalent Control
Assume
ẋ = f(x) + g(x)u, u = −sgn σ(x)
has a stable sliding surface S = {x : σ(x) = 0}. Then, for x ∈ S,
0 = σ̇(x) = (dσ/dx)(dx/dt) = (dσ/dx)(f(x) + g(x)u)
The equivalent control is thus given by
ueq = −[(dσ/dx) g(x)]⁻¹ (dσ/dx) f(x)
if the inverse exists.

Equivalent Control for Linear System
ẋ = Ax + Bu
u = −sgn σ(x) = −sgn(Cx)
Assume CB > 0. The sliding surface is S = {x : Cx = 0}, so
0 = σ̇(x) = C(Ax + Bueq)
gives ueq = −CAx/CB.
Example (cont’d): For the example,
ueq = −CAx/CB = −[1 −1] x = −x1
because σ(x) = x2 = 0. (Same result as before.)

Sliding Dynamics
The dynamics on S = {x : Cx = 0} are given by
ẋ = Ax + Bueq = (I − BC/(CB)) Ax
under the constraint Cx = 0, where the eigenvalues of (I − BC/CB)A are equal to the zeros of sG(s) = sC(sI − A)⁻¹B.
Remark: The condition Cx = 0 corresponds to the zero at s = 0, and thus this dynamics disappears on S = {x : Cx = 0}.
Design of Sliding Mode Controller
Idea: Design a control law that forces the state to σ(x) = 0. Choose σ(x) such that the sliding mode tends to the origin. Assume
d/dt (x1, x2, . . . , xn)ᵀ = (f1(x) + g1(x)u, x1, . . . , xn−1)ᵀ = f(x) + g(x)u
Choose the control law
u = −pᵀf(x)/(pᵀg(x)) − µ/(pᵀg(x)) · sgn σ(x)
where µ > 0 is a design parameter, σ(x) = pᵀx, and pᵀ = (p1 . . . pn) are the coefficients of a stable polynomial.

Closed-Loop Stability
Proof: Consider V(x) = σ²(x)/2 with σ(x) = pᵀx. Then,
V̇ = σ(x)σ̇(x) = xᵀp (pᵀf(x) + pᵀg(x)u)
With the chosen control law, we get
V̇ = −µσ(x) sgn σ(x) < 0
so x tends to σ(x) = 0. On the sliding surface,
0 = σ(x) = p1 x1 + · · · + pn−1 xn−1 + pn xn = p1 xn^(n−1) + · · · + pn−1 xn^(1) + pn xn^(0)
where xn^(k) denotes the kth time derivative. Now p corresponds to a stable differential equation, and xn → 0 exponentially as t → ∞. The state relations xk−1 = ẋk then give x → 0 exponentially as t → ∞.

Time to Switch
Consider an initial point x0 such that σ0 = σ(x0) > 0. Since
σ(x)σ̇(x) = −µσ(x) sgn σ(x)
it follows that as long as σ(x) > 0:
σ̇(x) = −µ
Hence, the time to the first switch (σ(x) = 0) is
ts = σ0/µ < ∞
Note that ts → 0 as µ → ∞.
From
ẋ = Ax + Bu
y = Cx ⇒ ẏ = CAx + CBu ⇒ u = (1/CB) ẏ − (1/CB) CAx ⇒
ẋ = (I − (1/CB)BC) Ax + (1/CB) B ẏ
Hence, the transfer function from ẏ to u equals
1/CB − (1/CB) CA (sI − (I − (1/CB)BC)A)⁻¹ B (1/CB)
but this transfer function is also 1/(sG(s)). Hence, the eigenvalues of (I − BC/CB)A are equal to the zeros of sG(s).
Example—Sliding Mode Controller
Design a state-feedback controller for
ẋ = [1 0; 1 0] x + [1; 0] u
y = [0 1] x
Choose p1 s + p2 = s + 1, so that σ(x) = x1 + x2. The controller is
u = −pᵀAx/(pᵀB) − µ/(pᵀB) · sgn σ(x) = −2x1 − µ sgn(x1 + x2)
and the sliding dynamics are given by
ẏ = −y

Time Plots
Initial condition x(0) = (1.5, 0)ᵀ
[Plots of x1, x2 and u]
The simulation agrees well with the time to the first switch, ts = σ0/µ = 3.

Phase Portrait
Simulation with µ = 0.5. Note the sliding surface σ(x) = x1 + x2 = 0.
[Phase portrait]

The Sliding Mode Controller is Robust
Assume that only a model ẋ = f̂(x) + ĝ(x)u of the true system ẋ = f(x) + g(x)u is known. Still, however,
V̇ = σ(x) · pᵀ(f ĝᵀ − f̂ gᵀ)p/(pᵀĝ) − µ (pᵀg)/(pᵀĝ) σ(x) sgn σ(x) < 0
if sgn(pᵀg) = sgn(pᵀĝ) and µ > 0 is sufficiently large.
The closed-loop system is thus robust against model errors!
(High gain control with stable open loop zeros)
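The time-to-switch prediction ts = σ0/µ = 3 for this example can be reproduced with a short simulation; note that the equivalent-control part −2x1 makes σ̇ = −µ sgn σ exactly. A sketch of mine, not course code.

```python
def sgn(v):
    return (v > 0) - (v < 0)

mu, dt = 0.5, 1e-4
x1, x2, t, t_switch = 1.5, 0.0, 0.0, None
s0 = x1 + x2                       # sigma(x0) = 1.5
while t < 5.0:
    s = x1 + x2
    if t_switch is None and s * s0 <= 0:
        t_switch = t               # first sign change of sigma
    u = -2.0 * x1 - mu * sgn(s)    # equivalent-control part + switching part
    x1, x2 = x1 + dt * (x1 + u), x2 + dt * x1
    t += dt
print(t_switch)  # close to sigma0 / mu = 3
```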
Comments on Sliding Mode Control
• Efficient handling of model uncertainties
• Often impossible to implement infinitely fast switching
• Smooth version through low-pass filter or boundary layer
• Applications in robotics and vehicle control
• Compare pulse-width modulated control signals

Today’s Goal
You should be able to analyze and design
• High-gain control systems
• Sliding mode controllers

Next Lecture
• Lyapunov design methods
• Exact feedback linearization
EL2620 Nonlinear Control
Lecture 10
• Exact feedback linearization
• Input-output linearization
• Lyapunov-based control design methods

Nonlinear Output Feedback Controllers
• Linear controller: ż = Az + By, u = Cz
• Linear dynamics, static nonlinearity: ż = Az + By, u = c(z)
• Nonlinear dynamical controller: ż = a(z, y), u = c(z)
Lecture 10
EL2620
Nonlinear Observers
x
ḃ = f (b
x, u)
Lecture 10
Second case is called Extended Kalman Filter
• Linearize f at x
b(t), find K = K(b
x) for the linearization
• Linearize f at x0 , find K for the linearization
Choices of K
x
ḃ = f (b
x, u) + K(y − h(b
x))
Feedback correction, as in linear case,
Simplest observer
ẋ = f (x, u), y = h(x)
What if x is not measurable?
EL2620
Lecture 10
k and ℓ may include dynamics.
has nice properties.
ẋ = f (x, ℓ(x))
• State feedback: Find u = ℓ(x) such that
• Output feedback: Find u = k(y) such that the closed-loop
system
ẋ = f x, k h(x)
ẋ = f (x, u)
y = h(x)
Output Feedback and State Feedback
has nice properties.
EL2620
4
2012
2
2012
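The corrected observer is easy to demonstrate on a scalar example; the plant dx = −x³ + u and the constant gain K here are my own illustrative choices, not from the course.

```python
import math

def simulate_observer(K, dt=1e-3, T=10.0):
    """Plant dx = -x^3 + u, y = x; observer dxh = -xh^3 + u + K*(y - xh)."""
    x, xh, t = 2.0, -1.0, 0.0     # observer starts with the wrong estimate
    while t < T:
        u = math.sin(t)
        y = x
        x += dt * (-x ** 3 + u)
        xh += dt * (-xh ** 3 + u + K * (y - xh))
        t += dt
    return x, xh

x, xh = simulate_observer(K=5.0)
print(x, xh, abs(x - xh))  # the estimate converges to the true state
```

For this plant the estimation error satisfies e·ė ≤ −K e², so any K > 0 works; the EKF idea replaces the constant K by a gain recomputed along x̂(t).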
Some state feedback control approaches
• Exact feedback linearization
• Input-output linearization
• Lyapunov-based design: backstepping control

Another Example
ẋ1 = a sin x2
ẋ2 = −x1² + u
How do we cancel the term sin x2?
Perform a transformation of states into linearizable form:
z1 = x1, z2 = ẋ1 = a sin x2
yields
ż1 = z2, ż2 = a cos x2 (−x1² + u)
and the linearizing control becomes
u(x) = x1² + v/(a cos x2), x2 ∈ [−π/2, π/2]
Exact Feedback Linearization
Consider the nonlinear control-affine system
ẋ = f(x) + g(x)u
Idea: use a state-feedback controller u(x) to make the system linear.
Example 1:
ẋ = cos x − x³ + u
The state-feedback controller
u(x) = −cos x + x³ − kx + v
yields the linear system
ẋ = −kx + v

Diffeomorphisms
A nonlinear state transformation z = T(x) with
• T invertible for x in the domain of interest
• T and T⁻¹ continuously differentiable
is called a diffeomorphism.

Definition: A nonlinear system
ẋ = f(x) + g(x)u
is feedback linearizable if there exists a diffeomorphism T whose domain contains the origin and transforms the system into the form
ẋ = Ax + Bγ(x)(u − α(x))
with (A, B) controllable and γ(x) nonsingular for all x in the domain of interest.
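Example 1 can be verified numerically: with the cancelling feedback (and v = 0) the closed loop is exactly ẋ = −kx, so the simulated state should match the linear prediction e^{−kT} x(0). A sketch of mine with k = 2 chosen for illustration.

```python
import math

k, x, dt, T = 2.0, 1.0, 1e-4, 2.0
for _ in range(int(T / dt)):
    u = -math.cos(x) + x ** 3 - k * x     # linearizing feedback, v = 0
    x += dt * (math.cos(x) - x ** 3 + u)  # closed loop is exactly dx = -k x
print(x, math.exp(-k * T))  # x(T) matches the linear prediction
```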
Exact vs. Input-Output Linearization
Example: controlled van der Pol equation
ẋ1 = x2
ẋ2 = −x1 + ǫ(1 − x1²)x2 + u, y = x1
Differentiate the output:
ẏ = ẋ1 = x2
ÿ = ẋ2 = −x1 + ǫ(1 − x1²)x2 + u
The state feedback controller
u = x1 − ǫ(1 − x1²)x2 + v  ⇒  ÿ = v

The example again, but now with an output:
ẋ1 = a sin x2, ẋ2 = −x1² + u, y = x2
• The control law u = x1² + v/(a cos x2) yields
ż1 = z2; ż2 = v; y = sin⁻¹(z2/a)
which is nonlinear in the output.
• If we want a linear input-output relationship we could instead use
u = x1² + v
to obtain
ẋ1 = a sin x2, ẋ2 = v, y = x2
which is linear from v to y.
Caution: the control has rendered the state x1 unobservable.
Input-Output Linearization
Use state feedback u(x) to make the control-affine system
ẋ = f(x) + g(x)u
y = h(x)
linear from the input v to the output y.
The general idea: differentiate the output y = h(x) p times until the control u appears explicitly in y^(p), and then determine u so that
y^(p) = v
i.e., G(s) = 1/s^p.

Lie Derivatives
Consider the nonlinear SISO system
ẋ = f(x) + g(x)u; x ∈ Rⁿ, u ∈ R
y = h(x), y ∈ R
The derivative of the output is
ẏ = (dh/dx) ẋ = (dh/dx)(f(x) + g(x)u) ≜ Lf h(x) + Lg h(x) u
where Lf h(x) and Lg h(x) are Lie derivatives (Lf h is the derivative of h along the vector field of ẋ = f(x)).
Repeated derivatives:
Lf^k h(x) = (d(Lf^{k−1} h)/dx) f(x),   Lg Lf h(x) = (d(Lf h)/dx) g(x)
Lie Derivatives and Relative Degree
• The relative degree p of a system is defined as the number of integrators between the input and the output (the number of times y must be differentiated for the input u to appear)
• A linear system
Y(s)/U(s) = (b0 s^m + . . . + bm)/(s^n + a1 s^{n−1} + . . . + an)
has relative degree p = n − m
• A nonlinear system has relative degree p if
Lg Lf^{i−1} h(x) = 0, i = 1, . . . , p−1;  Lg Lf^{p−1} h(x) ≠ 0 ∀x ∈ D

The Input-Output Linearizing Control
Consider an nth-order SISO system with relative degree p:
ẋ = f(x) + g(x)u
y = h(x)
Differentiating the output repeatedly:
ẏ = (dh/dx) ẋ = Lf h(x) + Lg h(x) u, with Lg h(x) = 0
...
y^(p) = Lf^p h(x) + Lg Lf^{p−1} h(x) u
and hence the state-feedback controller
u = (1/(Lg Lf^{p−1} h(x))) (−Lf^p h(x) + v)
results in the linear input-output system
y^(p) = v

Example
The controlled van der Pol equation
ẋ1 = x2
ẋ2 = −x1 + ǫ(1 − x1²)x2 + u, y = x1
Differentiating the output:
ẏ = ẋ1 = x2
ÿ = ẋ2 = −x1 + ǫ(1 − x1²)x2 + u
Thus, the system has relative degree p = 2.
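The relative-degree test can be checked numerically with finite-difference Lie derivatives; for the van der Pol example one should find Lg h = 0 and Lg Lf h = 1 (hence p = 2). The helper names are mine.

```python
def grad(h, x, eps=1e-6):
    """Central-difference gradient of the scalar function h at x."""
    g = []
    for i in range(len(x)):
        xp, xm = list(x), list(x)
        xp[i] += eps
        xm[i] -= eps
        g.append((h(xp) - h(xm)) / (2 * eps))
    return g

def lie(h, F, x):
    """Lie derivative L_F h(x) = dh/dx . F(x)."""
    return sum(gi * Fi for gi, Fi in zip(grad(h, x), F(x)))

eps_vdp = 1.0   # the van der Pol parameter (epsilon in the slides)
f = lambda x: [x[1], -x[0] + eps_vdp * (1 - x[0] ** 2) * x[1]]
g = lambda x: [0.0, 1.0]
h = lambda x: x[0]

x0 = [0.3, -0.7]
Lgh = lie(h, g, x0)                         # 0: u does not appear in ydot
LgLfh = lie(lambda x: lie(h, f, x), g, x0)  # 1: u appears in yddot, so p = 2
print(Lgh, LgLfh)
```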
Zero Dynamics
• Note that the order of the linearized system is p, corresponding to the relative degree of the system
• Thus, if p < n then n − p states are unobservable in y.
• The dynamics of the n − p states not observable in the linearized dynamics of y are called the zero dynamics. They correspond to the dynamics of the system when y is forced to be zero for all times.
• A system with unstable zero dynamics is called non-minimum phase (and should not be input-output linearized!)

van der Pol again
ẋ1 = x2
ẋ2 = −x1 + ǫ(1 − x1²)x2 + u
• With y = x1 the relative degree p = n = 2 and there are no zero dynamics; thus we can transform the system into ÿ = v.
• With y = x2 the relative degree p = 1 < n and the zero dynamics are given by ẋ1 = 0, which is not asymptotically stable (but bounded).

Lyapunov-Based Control Design Methods
ẋ = f(x, u)
• Find a stabilizing state feedback u = u(x)
• Verify stability through a Control Lyapunov function
• Methods depend on the structure of f
Here we limit the discussion to back-stepping control design, which requires a certain structure of f, discussed later.

A simple introductory example
Consider
ẋ = cos x − x³ + u
Apply the linearizing control
u = −cos x + x³ − kx
Choose the Lyapunov candidate V(x) = x²/2:
V(x) > 0, V̇ = −kx² < 0
Thus, the system is globally asymptotically stable.
But, the term x³ in the control law may require large control moves!
The same example
ẋ = cos x − x³ + u
Now try the control law
u = −cos x − kx
Choose the same Lyapunov candidate V(x) = x²/2:
V(x) > 0, V̇ = −x⁴ − kx² < 0
Thus, the system is again globally asymptotically stable (and with a more negative V̇).

Back-Stepping Control Design
We want to design a state feedback u = u(x) that stabilizes
ẋ1 = f(x1) + g(x1)x2
ẋ2 = u        (1)
at x = 0, with f(0) = 0.
Idea: See the system as a cascade connection. Design the controller first for the inner loop and then for the outer.
[Block diagram: u → ∫ → x2 → g(x1) → ∫ → x1, with f(·) in feedback]
Simulating the two controllers
Simulation with x(0) = 10:
[State trajectory and control input for the linearizing and non-linearizing controllers]
The linearizing control is slower and uses excessive input. Thus, linearization can have a significant cost!

Suppose the partial system
ẋ1 = f(x1) + g(x1)v̄
can be stabilized by v̄ = φ(x1), and there exists a Lyapunov function V1 = V1(x1) such that
V̇1(x1) = (dV1/dx1)(f(x1) + g(x1)φ(x1)) ≤ −W(x1)
for some positive definite function W. This is a critical assumption in back-stepping control!
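The comparison of the two controllers can be reproduced numerically: from x(0) = 10 the linearizing law demands an input of roughly x(0)³, while the non-linearizing law does not. A sketch of mine with k = 1.

```python
import math

def simulate(control, x0=10.0, k=1.0, dt=1e-4, T=5.0):
    x, max_u = x0, 0.0
    for _ in range(int(T / dt)):
        u = control(x, k)
        max_u = max(max_u, abs(u))
        x += dt * (math.cos(x) - x ** 3 + u)
    return x, max_u

linearizing = lambda x, k: -math.cos(x) + x ** 3 - k * x
non_linearizing = lambda x, k: -math.cos(x) - k * x

xa, ua = simulate(linearizing)
xb, ub = simulate(non_linearizing)
print(xa, ua)  # converges, but the peak input is about x0^3
print(xb, ub)  # converges with far smaller control effort
```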
The Trick
Introduce the new state ζ = x2 − φ(x1) and the control v = u − φ̇(x1):
ẋ1 = f(x1) + g(x1)φ(x1) + g(x1)ζ
ζ̇ = v
where
φ̇(x1) = (dφ/dx1) ẋ1 = (dφ/dx1)(f(x1) + g(x1)x2)
[Block diagrams of the transformed cascade]

Consider V2(x1, x2) = V1(x1) + ζ²/2. Then,
V̇2(x1, x2) = (dV1/dx1)(f(x1) + g(x1)φ(x1)) + (dV1/dx1) g(x1)ζ + ζv
≤ −W(x1) + (dV1/dx1) g(x1)ζ + ζv
Choosing
v = −(dV1/dx1) g(x1) − kζ, k > 0
gives
V̇2(x1, x2) ≤ −W(x1) − kζ²
Hence, x = 0 is asymptotically stable for (1) with the control law u(x) = φ̇(x) + v(x). If V1 is radially unbounded, then we have global stability.

Equation (1) can be rewritten as
ẋ1 = f(x1) + g(x1)φ(x1) + g(x1)(x2 − φ(x1))
ẋ2 = u

Back-Stepping Lemma
Lemma: Let z = (x1, . . . , xk−1)ᵀ and
ż = f(z) + g(z)xk
ẋk = u
Assume φ(0) = 0, f(0) = 0, ż = f(z) + g(z)φ(z) stable, and V(z) a Lyapunov function (with V̇ ≤ −W). Then,
u = (dφ/dz)(f(z) + g(z)xk) − (dV/dz) g(z) − (xk − φ(z))
stabilizes x = 0 with V(z) + (xk − φ(z))²/2 being a Lyapunov function.
Strict Feedback Systems
The Back-Stepping Lemma can be applied to stabilize systems on strict feedback form:
ẋ1 = f1(x1) + g1(x1)x2
ẋ2 = f2(x1, x2) + g2(x1, x2)x3
ẋ3 = f3(x1, x2, x3) + g3(x1, x2, x3)x4
...
ẋn = fn(x1, . . . , xn) + gn(x1, . . . , xn)u
where gk ≠ 0. Note: the equations for x1, . . . , xk do not depend on xk+2, . . . , xn.

2 minute exercise: Give an example of a linear system ẋ = Ax + Bu on strict feedback form.

Back-Stepping
The Back-Stepping Lemma can be applied recursively to a system
ẋ = f(x) + g(x)u
on strict feedback form. Back-stepping generates stabilizing feedbacks φk(x1, . . . , xk) (equal to u in the Back-Stepping Lemma) and Lyapunov functions
Vk(x1, . . . , xk) = Vk−1(x1, . . . , xk−1) + (xk − φk−1)²/2
by “stepping back” from x1 to u (see Khalil pp. 593–594 for details). Back-stepping results in the final state feedback
u = φn(x1, . . . , xn)

Example
Design a back-stepping controller for
ẋ1 = x1² + x2, ẋ2 = x3, ẋ3 = u
Step 0 Verify strict feedback form.
Step 1 Consider the first subsystem
ẋ1 = x1² + φ1(x1), ẋ2 = u1
where φ1(x1) = −x1² − x1 stabilizes the first equation. With V1(x1) = x1²/2, the Back-Stepping Lemma gives
u1 = (−2x1 − 1)(x1² + x2) − x1 − (x2 + x1² + x1) = φ2(x1, x2)
V2 = x1²/2 + (x2 + x1² + x1)²/2
Step 2 Applying the Back-Stepping Lemma on
ẋ1 = x1² + x2
ẋ2 = x3
ẋ3 = u
gives
u = u2 = (dφ2/dz)(f(z) + g(z)x3) − (dV2/dz) g(z) − (x3 − φ2(z))
= (∂φ2/∂x1)(x1² + x2) + (∂φ2/∂x2) x3 − ∂V2/∂x2 − (x3 − φ2(x1, x2))
which globally stabilizes the system.
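The intermediate feedback φ2 from Step 1 can be checked by simulating the two-state subsystem it stabilizes; the Lyapunov function V2 guarantees convergence to the origin. A sketch of mine using the slide's formula.

```python
def phi2(x1, x2):
    """Back-stepping feedback for dx1 = x1^2 + x2, dx2 = u1 (slide formula)."""
    return (-2 * x1 - 1) * (x1 ** 2 + x2) - x1 - (x2 + x1 ** 2 + x1)

x1, x2, dt = 0.5, 0.0, 1e-3
for _ in range(int(10.0 / dt)):
    u1 = phi2(x1, x2)
    x1, x2 = x1 + dt * (x1 ** 2 + x2), x2 + dt * u1
print(x1, x2)  # both states converge to the origin
```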
EL2620 Nonlinear Control
Lecture 11
• Nonlinear controllability
• Gain scheduling

Today’s Goal
You should be able to
• Determine if a nonlinear system is controllable
• Apply gain scheduling to simple examples

Controllability
Definition: ẋ = f(x, u) is controllable if for any x⁰, x¹ there exist T > 0 and u : [0, T] → R such that x(0) = x⁰ and x(T) = x¹.

Linear Systems
Lemma: ẋ = Ax + Bu is controllable if and only if
Wn = (B AB . . . A^{n−1}B)
has full rank.
Is there a corresponding result for nonlinear systems?
Car Example
[Figure: car with position (x, y), heading ϕ, and steering angle θ]
Input: u1 steering wheel velocity, u2 forward velocity

Linearization for u1 = u2 = 0 gives
ż = Az + B1u1 + B2u2
with A = 0 and
B1 = (0, 0, 0, 1)ᵀ,  B2 = (cos(ϕ0 + θ0), sin(ϕ0 + θ0), sin(θ0), 0)ᵀ
rank Wn = rank (B AB . . . A^{n−1}B) = 2 < 4, so the linearization is not controllable. Still the car is controllable!
Linearization does not capture the controllability well enough.

Controllable Linearization
Lemma: Let ż = Az + Bu be the linearization of
ẋ = f(x) + g(x)u
at x = 0 with f(0) = 0. If the linear system is controllable, then the nonlinear system is controllable in a neighborhood of the origin.
Remark:
• Hence, if rank Wn = n then there is an ǫ > 0 such that for every x1 ∈ Bǫ(0) there exists u : [0, T] → R so that x(T) = x1
• A nonlinear system can be controllable even if the linearized system is not controllable
Lie Brackets
The Lie bracket between vector fields f, g : Rⁿ → Rⁿ is defined by
[f, g] = (∂g/∂x) f − (∂f/∂x) g
Example:
f = (cos x2, x1)ᵀ, g = (x1, 1)ᵀ
[f, g] = [1 0; 0 0] (cos x2, x1)ᵀ − [0 −sin x2; 1 0] (x1, 1)ᵀ = (cos x2 + sin x2, −x1)ᵀ
The car dynamics are
d/dt (x, y, ϕ, θ)ᵀ = (0, 0, 0, 1)ᵀ u1 + (cos(ϕ + θ), sin(ϕ + θ), sin(θ), 0)ᵀ u2 = g1(z)u1 + g2(z)u2
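The Lie bracket in the example above can be verified numerically, using the fact that (dF/dx)v is a directional derivative; the helper names are mine.

```python
import math

def lie_bracket(f, g, x, eps=1e-6):
    """[f, g](x) = (dg/dx) f(x) - (df/dx) g(x) via central differences."""
    def dir_deriv(F, v):
        xp = [xi + eps * vi for xi, vi in zip(x, v)]
        xm = [xi - eps * vi for xi, vi in zip(x, v)]
        return [(a - b) / (2 * eps) for a, b in zip(F(xp), F(xm))]
    fv, gv = f(x), g(x)
    return [a - b for a, b in zip(dir_deriv(g, fv), dir_deriv(f, gv))]

f = lambda x: [math.cos(x[1]), x[0]]   # the example from the slide
g = lambda x: [x[0], 1.0]
br = lie_bracket(f, g, [0.4, 1.2])
print(br)  # analytic answer: (cos x2 + sin x2, -x1)
```

The same routine applies unchanged to g1, g2 of the car, where it reproduces the “wriggle” direction g3.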
Lie Bracket Direction
For the system
ẋ = g1(x)u1 + g2(x)u2
the control
(u1, u2) = (1, 0) for t ∈ [0, ǫ), (0, 1) for t ∈ [ǫ, 2ǫ), (−1, 0) for t ∈ [2ǫ, 3ǫ), (0, −1) for t ∈ [3ǫ, 4ǫ)
gives the motion
x(4ǫ) = x(0) + ǫ²[g1, g2] + O(ǫ³)
The system can move in the [g1, g2] direction!

Proof, continued
3. Similarly, for t ∈ [2ǫ, 3ǫ]:
x(3ǫ) = x0 + ǫg2 + ǫ²( (dg2/dx)g1 − (dg1/dx)g2 + (1/2)(dg2/dx)g2 ) + O(ǫ³)
4. Finally, for t ∈ [3ǫ, 4ǫ]:
x(4ǫ) = x0 + ǫ²( (dg2/dx)g1 − (dg1/dx)g2 ) + O(ǫ³) = x0 + ǫ²[g1, g2] + O(ǫ³)
Proof
1. For t ∈ [0, ǫ], assuming ǫ small and x(0) = x0, a Taylor series yields
x(ǫ) = x0 + g1(x0)ǫ + (1/2)(dg1/dx)g1(x0)ǫ² + O(ǫ³)    (1)
2. Similarly, for t ∈ [ǫ, 2ǫ]:
x(2ǫ) = x(ǫ) + g2(x(ǫ))ǫ + (1/2)(dg2/dx)g2(x(ǫ))ǫ²
and with x(ǫ) from (1), and g2(x(ǫ)) = g2(x0) + (dg2/dx)ǫg1(x0):
x(2ǫ) = x0 + ǫ(g1(x0) + g2(x0)) + ǫ²( (1/2)(dg1/dx)(x0)g1(x0) + (dg2/dx)(x0)g1(x0) + (1/2)(dg2/dx)(x0)g2(x0) )

Car Example (Cont’d)
g3 := [g1, g2] = (∂g2/∂x)g1 − (∂g1/∂x)g2
= [0 0 −sin(ϕ + θ) −sin(ϕ + θ); 0 0 cos(ϕ + θ) cos(ϕ + θ); 0 0 0 cos(θ); 0 0 0 0] (0, 0, 0, 1)ᵀ − 0
= (−sin(ϕ + θ), cos(ϕ + θ), cos(θ), 0)ᵀ
We can hence move the car in the g3 direction (“wriggle”) by applying the control sequence
(u1, u2) = {(1, 0), (0, 1), (−1, 0), (0, −1)}

Parking Theorem
You can get out of any parking lot that is ǫ > 0 bigger than your car by applying control corresponding to g4, that is, by applying the control sequence
Wriggle, Drive, −Wriggle, −Drive

2 minute exercise: What does the direction [g1, g2] correspond to for a linear system ẋ = g1(x)u1 + g2(x)u2 = B1u1 + B2u2?
The car can also move in the direction
g4 := [g3, g2] = (∂g2/∂x)g3 − (∂g3/∂x)g2 = . . . = (−sin(ϕ + 2θ), cos(ϕ + 2θ), 0, 0)ᵀ
The g4 direction corresponds to sideways movement (−sin(ϕ), cos(ϕ)).
The Lie Bracket Tree
[g1, g2]
[g1, [g1, g2]]   [g2, [g1, g2]]
[g1, [g1, [g1, g2]]]   [g2, [g1, [g1, g2]]]   [g1, [g2, [g1, g2]]]   [g2, [g2, [g1, g2]]]

Example—Unicycle
d/dt (x1, x2, θ)ᵀ = (cos θ, sin θ, 0)ᵀ u1 + (0, 0, 1)ᵀ u2
g1 = (cos θ, sin θ, 0)ᵀ, g2 = (0, 0, 1)ᵀ, [g1, g2] = (sin θ, −cos θ, 0)ᵀ
Controllable because {g1, g2, [g1, g2]} spans R³.

Controllability Theorem
Theorem: The system
ẋ = g1(x)u1 + g2(x)u2
is controllable if the Lie bracket tree (together with g1 and g2) spans Rⁿ for all x.
Remark:
• The system can be steered in any direction of the Lie bracket tree.

2 minute exercise:
• Show that {g1, g2, [g1, g2]} spans R³ for the unicycle
• Is the linearization of the unicycle controllable?
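The span condition for the unicycle can be checked by a determinant: the matrix with rows g1, g2 and [g1, g2] has determinant cos²θ + sin²θ = 1 for every heading. A small sketch of mine.

```python
import math

def unicycle_fields(theta):
    """g1, g2 and the Lie bracket [g1, g2] for the unicycle."""
    g1 = [math.cos(theta), math.sin(theta), 0.0]
    g2 = [0.0, 0.0, 1.0]
    br = [math.sin(theta), -math.cos(theta), 0.0]
    return g1, g2, br

def det3(a, b, c):
    """Determinant of the 3x3 matrix with rows a, b, c."""
    return (a[0] * (b[1] * c[2] - b[2] * c[1])
            - a[1] * (b[0] * c[2] - b[2] * c[0])
            + a[2] * (b[0] * c[1] - b[1] * c[0]))

dets = [det3(*unicycle_fields(theta)) for theta in [0.0, 0.7, 2.5]]
print(dets)  # 1.0 at every heading: the three fields span R^3
```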
Example—Rolling Penny
[Figure: penny with contact point (x1, x2), heading θ, and roll angle φ]
d/dt (x1, x2, θ, φ)ᵀ = (cos θ, sin θ, 0, 1)ᵀ u1 + (0, 0, 1, 0)ᵀ u2
Controllable because {g1, g2, [g1, g2], [g2, [g1, g2]]} spans R⁴.

Gain Scheduling
Control parameters depend on the operating conditions:
[Block diagram: command signal → controller → control signal → process → output, with a gain schedule mapping the operating condition to the controller parameters]
Example: PID controller with K = K(α), where α is the scheduling variable.
Examples of scheduling variables are production rate, machine speed, Mach number, flow rate.
When is Feedback Linearization Possible?
Q: When can we transform ẋ = f(x) + g(x)u into ż = Az + bv by means of feedback u = α(x) + β(x)v and a change of variables z = T(x) (see previous lecture)?
A: The answer requires Lie brackets and further concepts from differential geometry (see Khalil and the PhD course).

Valve Characteristics
[Figure: flow versus valve position for quick-opening, linear, and equal-percentage valves]
Nonlinear Valve
[Block diagram: uc → Σ → PI → u → valve f(u) → process G0(s) → y, with an approximate inverse f̂⁻¹ available for compensation]
[Valve characteristic f(u)]

Without gain scheduling:
[Step responses at several operating points (uc around 0.3, 1.1 and 5.2): the performance varies strongly with the operating point]
With gain scheduling:
[Step responses at the same operating points: similar performance everywhere]

Flight Control
[Figure: pitch dynamics with pitch angle θ, angle of attack α, pitch rate q = θ̇, velocity V, normal acceleration Nz, and elevator deflection δe]
Operating conditions:
[Flight envelope: Mach number 0–2.4 versus altitude 0–80 (x1000 ft), with operating regions 1–4]

Today’s Goal
You should be able to
• Determine if a nonlinear system is controllable
• Apply gain scheduling to simple examples
The Pitch Control Channel
[Block diagram: pitch stick, position, acceleration and pitch-rate signals through filters and A/D converters; gains K_DSE, K_QD, K_Q1, K_SG, K_NZ scheduled on Mach number M, altitude H, indicated airspeed V_IAS and gear; lead/lag filters T1s/(1 + T1s), T2s/(1 + T2s), 1/(1 + T3s); summations and D/A conversion to the servos]
EL2620 Nonlinear Control
Lecture 12
• Optimal control

Optimal Control Problems
Idea: formulate the control design problem as an optimization problem
min_{u(t)} J(x, u, t),  ẋ = f(t, x, u)
+ provides a systematic design framework
+ applicable to nonlinear problems
+ can deal with constraints
− difficult to formulate control objectives as a single objective function
− determining the optimal controller can be hard
Today’s Goal
You should be able to
• Design controllers based on optimal control theory

Example—Boat in Stream
Sail as far as possible in the x1 direction:
max_{u:[0,tf]→U} x1(tf)
ẋ1(t) = v(x2) + u1(t)
ẋ2(t) = u2(t)
x1(0) = x2(0) = 0
Speed of the water: v(x2) with dv/dx2 = 1
Rudder angle control: u(t) ∈ U = {(u1, u2) : u1² + u2² = 1}
Optimal Control Problem
Standard form:

    min_{u:[0,tf]→U} ∫₀^tf L(x(t), u(t)) dt + φ(x(tf))
    ẋ(t) = f(x(t), u(t)),  x(0) = x0

Remarks:
• U ⊂ Rᵐ set of admissible controls
• Infinite dimensional optimization problem:
  optimization over functions u : [0, tf] → U
• Constraints on x from the dynamics
• Final time tf fixed (free later)

Example—Resource Allocation
Maximization of stored profit:

    max_{u:[0,tf]→[0,1]} ∫₀^tf [1 − u(t)]x(t) dt
    ẋ(t) = γu(t)x(t),  x(0) = x0 > 0

where
    x(t) ∈ [0, ∞)    production rate
    u(t) ∈ [0, 1]    portion of x reinvested
    1 − u(t)         portion of x stored
    γu(t)x(t)        change of production rate (γ > 0)
    [1 − u(t)]x(t)   amount of stored profit
Example—Minimal Curve Length
Find the curve with minimal length between a given point and a line.
Curve: (t, x(t)) with x(0) = a
Line: vertical through (tf, 0)

    min_{u:[0,tf]→R} ∫₀^tf √(1 + u²(t)) dt
    ẋ(t) = u(t),  x(0) = a

Pontryagin's Maximum Principle
Theorem: Introduce the Hamiltonian function

    H(x, u, λ) = L(x, u) + λᵀ f(x, u)

Suppose the optimal control problem above has the solution
u* : [0, tf] → U and x* : [0, tf] → Rⁿ. Then,

    min_{u∈U} H(x*(t), u, λ(t)) = H(x*(t), u*(t), λ(t)),  ∀t ∈ [0, tf]

where λ(t) solves the adjoint equation

    λ̇(t) = −∂Hᵀ/∂x (x*(t), u*(t), λ(t)),  λ(tf) = ∂φᵀ/∂x (x*(tf))

Moreover, the optimal control is given by

    u*(t) = arg min_{u∈U} H(x*(t), u, λ(t))
Optimal control (Boat in Stream)

    u*(t) = arg min_{u1²+u2²=1} λ1(t)(v(x2*(t)) + u1) + λ2(t)u2
          = arg min_{u1²+u2²=1} λ1(t)u1 + λ2(t)u2

Hence,

    u1(t) = −λ1(t)/√(λ1²(t) + λ2²(t)),  u2(t) = −λ2(t)/√(λ1²(t) + λ2²(t))

or

    u1(t) = 1/√(1 + (t − tf)²),  u2(t) = (tf − t)/√(1 + (t − tf)²)

Remarks
• See textbook, e.g., Glad and Ljung, for a proof. The outline is simply
  to note that every change of u(t) from the optimal u*(t) must
  increase the criterion. Then perform a clever Taylor expansion.
• Pontryagin's Maximum Principle provides a necessary condition:
  there may exist many solutions, or none
  (cf., min_{u:[0,1]→R} x(1), ẋ = u, x(0) = 0)
• The Maximum Principle provides all possible candidates.
• Solution involves 2n ODEs with boundary conditions x(0) = x0
  and λ(tf) = ∂φᵀ/∂x(x*(tf)). Often hard to solve explicitly.
• "Maximum" is due to Pontryagin's original formulation.
Example—Boat in Stream (cont'd)
With φ(x) = −x1, the Hamiltonian satisfies

    H = λᵀf = λ1(v(x2) + u1) + λ2 u2

Adjoint equations λ̇ = −∂H/∂x,

    λ̇1(t) = 0,        λ1(tf) = −1
    λ̇2(t) = −λ1(t),   λ2(tf) = 0

have solution

    λ1(t) = −1,  λ2(t) = t − tf

Example—Resource Allocation (cont'd)

    min_{u:[0,tf]→[0,1]} ∫₀^tf [u(t) − 1]x(t) dt
    ẋ(t) = γu(t)x(t),  x(0) = x0

Hamiltonian satisfies

    H = L + λᵀf = (u − 1)x + λγux

Adjoint equation

    λ̇(t) = 1 − u*(t) − λ(t)γu*(t),  λ(tf) = 0
Optimal control (Resource Allocation)

    u*(t) = arg min_{u∈[0,1]} (u − 1)x*(t) + λ(t)γu x*(t)
          = arg min_{u∈[0,1]} u(1 + λ(t)γ),   (x*(t) > 0)
          = 0 if λ(t) ≥ −1/γ,  1 if λ(t) < −1/γ

For t ≈ tf, we have u*(t) = 0 (why?) and thus λ̇(t) = 1.
For t < tf − 1/γ, we have u*(t) = 1 and thus λ̇(t) = −γλ(t).

5 minute exercise: Find the curve with minimal length by solving

    min_{u:[0,tf]→R} ∫₀^tf √(1 + u²(t)) dt
    ẋ(t) = u(t),  x(0) = a
• u*(t) = 1 for t ∈ [0, tf − 1/γ],  0 for t ∈ (tf − 1/γ, tf]
• It is optimal to reinvest in the beginning
[Figure: the switching solution u*(t) and the adjoint λ(t) as functions of time]

5 minute exercise II: Solve the optimal control problem

    min ∫₀^1 u⁴ dt + x(1)
    ẋ = −x + u,  x(0) = 0
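The switching solution for the resource-allocation example can be checked numerically. The sketch below simulates ẋ = γux with forward Euler and compares the stored profit of u* against the two constant strategies; γ = 1, tf = 3, x0 = 1 and the step size are illustrative choices, not values from the notes.

```python
# Numerical check of the resource-allocation solution (illustrative
# parameters gamma = 1, tf = 3, x0 = 1; Euler integration).
def stored_profit(u_of_t, gamma=1.0, tf=3.0, x0=1.0, dt=1e-4):
    """Integrate xdot = gamma*u*x and accumulate profit (1 - u)*x dt."""
    x, profit, t = x0, 0.0, 0.0
    while t < tf:
        u = u_of_t(t)
        profit += (1.0 - u) * x * dt
        x += gamma * u * x * dt
        t += dt
    return profit

gamma, tf = 1.0, 3.0
switch = tf - 1.0 / gamma                    # Pontryagin switch time tf - 1/gamma
u_star = lambda t: 1.0 if t < switch else 0.0

p_star = stored_profit(u_star, gamma, tf)
p_all_store = stored_profit(lambda t: 0.0, gamma, tf)   # never reinvest
p_all_invest = stored_profit(lambda t: 1.0, gamma, tf)  # always reinvest

# The switching strategy should beat both constant strategies;
# its profit is roughly e^{gamma*(tf - 1/gamma)} = e^2.
assert p_star > p_all_store and p_star > p_all_invest
print(round(p_star, 2))
```

Reinvesting everything first and then storing everything beats both extremes, which matches the bang-bang structure derived from the adjoint equation.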
History—Calculus of Variations
• Brachistochrone (shortest time) problem (1696): Find the
  (frictionless) curve that takes a particle from A to B in the shortest
  time.

  Minimize time:
      dt = ds/v = √(dx² + dy²)/v = √(1 + y′(x)²)/√(2gy(x)) dx
      J(y) = ∫ √(1 + y′(x)²)/√(2gy(x)) dx

  Solved by John and James Bernoulli, Newton, l'Hospital
• Find the curve enclosing the largest area (Euler)

Goddard's Rocket Problem (1910)
How to send a rocket as high up in the air as possible?

    d/dt (v, h, m)ᵀ = ((u − D)/m − g, v, −γu)ᵀ

u motor force, D = D(v, h) air resistance
(v(0), h(0), m(0)) = (0, 0, m0),  g, γ > 0
Constraints: 0 ≤ u ≤ umax and m(tf) = m1 (empty)
Optimization criterion: maxᵤ h(tf)

History—Optimal Control
• The space race (Sputnik, 1957)
• Pontryagin's Maximum Principle (1956)
• Bellman's Dynamic Programming (1957)
• Huge influence on engineering and other sciences:
  – Robotics—trajectory generation
  – Aeronautics—satellite orbits
  – Physics—Snell's law, conservation laws
  – Finance—portfolio theory

Generalized form:

    min_{u:[0,tf]→U} ∫₀^tf L(x(t), u(t)) dt + φ(x(tf))
    ẋ(t) = f(x(t), u(t)),  x(0) = x0
    ψ(x(tf)) = 0

Note the differences compared to the standard form:
• End time tf is free
• Final state is constrained: ψ(x(tf)) = x3(tf) − m1 = 0
General Pontryagin's Maximum Principle
Theorem: Suppose u* : [0, tf] → U and x* : [0, tf] → Rⁿ are
solutions to

    min_{u:[0,tf]→U} ∫₀^tf L(x(t), u(t)) dt + φ(tf, x(tf))
    ẋ(t) = f(x(t), u(t)),  x(0) = x0
    ψ(tf, x(tf)) = 0

Then, there exists n0 ≥ 0, µ ∈ Rⁿ such that (n0, µᵀ) ≠ 0 and

    min_{u∈U} H(x*(t), u, λ(t), n0) = H(x*(t), u*(t), λ(t), n0),  t ∈ [0, tf]

where

    H(x, u, λ, n0) = n0 L(x, u) + λᵀ f(x, u)

    λ̇(t) = −∂Hᵀ/∂x (x*(t), u*(t), λ(t), n0)

    λᵀ(tf) = n0 ∂φ/∂x (tf, x*(tf)) + µᵀ ∂ψ/∂x (tf, x*(tf))

    H(x*(tf), u*(tf), λ(tf), n0)
        = −n0 ∂φ/∂t (tf, x*(tf)) − µᵀ ∂ψ/∂t (tf, x*(tf))

Remarks:
• tf may be a free variable
• With fixed tf: H(x*(tf), u*(tf), λ(tf), n0) = 0
• ψ defines end point constraints

Goddard's problem is on generalized form with

    x = (v, h, m)ᵀ,  L ≡ 0,  φ(x) = −x2,  ψ(x) = x3 − m1

Solution to Goddard's Problem
D(v, h) ≡ 0:
• Easy: let u(t) = umax until m(t) = m1
• Burn fuel as fast as possible, because it costs energy to lift it
D(v, h) ≢ 0:
• Hard: e.g., it can be optimal to have low speed when air
  resistance is high, in order to burn fuel at a higher level
• Took 50 years before a complete solution was presented

Example—Minimum Time Control
Bring the states of the double integrator to the origin as fast as possible:

    min_{u:[0,tf]→[−1,1]} ∫₀^tf 1 dt = min_{u:[0,tf]→[−1,1]} tf
    ẋ1(t) = x2(t),  ẋ2(t) = u(t)
    ψ(x(tf)) = (x1(tf), x2(tf))ᵀ = (0, 0)ᵀ

Optimal control is the bang-bang control

    u*(t) = arg min_{u∈[−1,1]} 1 + λ1(t)x2*(t) + λ2(t)u
          = 1 if λ2(t) < 0,  −1 if λ2(t) ≥ 0
Adjoint equations λ̇1(t) = 0, λ̇2(t) = −λ1(t) give

    λ1(t) = c1,  λ2(t) = c2 − c1 t

With u(t) = ζ = ±1, we have

    x2(t) = x2(0) + ζt
    x1(t) = x1(0) + x2(0)t + ζt²/2

Eliminating t gives curves

    x1(t) ± x2(t)²/2 = const

These define the switch curve, where the optimal control switches.

Linear Quadratic Control

    min_{u:[0,∞)→Rᵐ} ∫₀^∞ (xᵀQx + uᵀRu) dt
    with ẋ = Ax + Bu

has optimal solution

    u = −Lx

where L = R⁻¹BᵀS and S > 0 is the solution to

    SA + AᵀS + Q − SBR⁻¹BᵀS = 0
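A small worked LQ instance may help here. For the double integrator ẋ1 = x2, ẋ2 = u with the illustrative weights Q = I and R = 1 (my choice, not from the notes), the Riccati equation can be solved by hand, S = [[√3, 1], [1, √3]], and verified entry by entry:

```python
# LQ sketch for the double integrator with Q = I, R = 1 (illustrative
# weights). Hand-computed Riccati solution S = [[sqrt(3), 1], [1, sqrt(3)]]
# gives the state feedback gain L = R^{-1} B' S = [1, sqrt(3)].
import math

s = math.sqrt(3.0)
S = [[s, 1.0], [1.0, s]]

# Verify SA + A'S + Q - S B B' S = 0 entry by entry,
# with A = [[0, 1], [0, 0]] and B = [0, 1]':
residual_11 = 1.0 - S[0][1] ** 2                  # (1,1) entry
residual_12 = S[0][0] - S[0][1] * S[1][1]         # (1,2) entry
residual_22 = 2.0 * S[0][1] + 1.0 - S[1][1] ** 2  # (2,2) entry
assert abs(residual_11) < 1e-12
assert abs(residual_12) < 1e-12
assert abs(residual_22) < 1e-12

L = [S[0][1], S[1][1]]   # L = B'S = [1, sqrt(3)]
# Closed loop: s^2 + sqrt(3) s + 1 = 0, both roots in the left half plane,
# so u = -Lx is stabilizing, as the theory promises.
print(L)
```

For general (A, B, Q, R) one would solve the algebraic Riccati equation numerically, e.g., with `scipy.linalg.solve_continuous_are` if SciPy is available.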
Properties of LQ Control
• Stabilizing
• Closed-loop system stable with u = −α(t)Lx for
  α(t) ∈ [1/2, ∞) (infinite gain margin)
• Phase margin 60 degrees
If x is not measurable, one may use a Kalman filter; this leads to
linear quadratic Gaussian (LQG) control.
• But then the system may have arbitrarily poor robustness! (Doyle, 1978)

Reference Generation using Optimal Control
• The optimal control problem makes no distinction between open-loop
  control u*(t) and closed-loop control u*(t, x).
• We may use the optimal open-loop solution u*(t) as the
  reference value to a linear regulator, which keeps the system
  close to the wanted trajectory
• Efficient design method for nonlinear problems

Pros & Cons for Optimal Control
+ Systematic design procedure
+ Applicable to nonlinear control problems
+ Captures limitations (as optimization constraints)
− Hard to find suitable criteria
− Hard to solve the equations that give the optimal controller

Tetra Pak Milk Race
Move milk in minimum time without spilling
[Grundelius & Bernhardsson, 1999]
Given the dynamics of the system and maximum slosh φ = 0.63, solve

    min_{u:[0,tf]→[−10,10]} ∫₀^tf 1 dt,  where u is the acceleration.

Optimal time = 375 ms, Tetra Pak = 540 ms
[Figure: optimal acceleration profile and resulting slosh]

SF2852 Optimal Control Theory
• Period 3, 7.5 credits
• Optimization and Systems Theory
http://www.math.kth.se/optsyst/
Dynamic Programming: Discrete & continuous; principle of
optimality; Hamilton-Jacobi-Bellman equation
Pontryagin's Maximum Principle: Main results; special cases such
as time optimal control and LQ control
Numerical Methods: Numerical solution of optimal control problems
Applications: Aeronautics, Robotics, Process Control,
Bioengineering, Economics, Logistics

Today's Goal
You should be able to
• Design controllers based on optimal control theory for
  – Standard form
  – Generalized form
• Understand possibilities and limitations of optimal control
EL2620 Nonlinear Control — Lecture 13
• Fuzzy logic and fuzzy control
• Artificial neural networks
Some slides copied from K.-E. Årzén and M. Johansson

Today's Goal
You should
• understand the basics of fuzzy logic and fuzzy controllers
• understand simple neural networks

Fuzzy Control
• Many plants are manually controlled by experienced operators
• Transferring process knowledge to a control algorithm is difficult
Idea:
• Model operator's control actions (instead of the plant)
• Implement as rules (instead of as differential equations)
Example of a rule:
IF Speed is High AND Traffic is Heavy
THEN Reduce Gas A Bit

Model Controller Instead of Plant
[Block diagram: r → Σ (−) → e → C → u → P → y]
Conventional control design:
Model plant P → Analyze feedback → Synthesize controller C
→ Implement control algorithm
Fuzzy control design:
Model manual control → Implement control rules
Fuzzy Set Theory
Specify how well an object satisfies a (vague) description.
Conventional set theory: x ∈ A or x ∉ A
Fuzzy set theory: x ∈ A to a certain degree µA(x)
A fuzzy set is defined as (A, µA)
Membership function:
µA : Ω → [0, 1] expresses the degree to which x belongs to A
[Figure: membership functions µC (Cold) and µW (Warm) over temperature,
with breakpoints at 10 and 25]

Fuzzy Logic
Mimic human linguistic (approximate) reasoning [Zadeh, 1965].
Defines logic calculations such as X AND Y OR Z.
How to calculate with fuzzy sets (A, µA)?
Conventional logic:
AND: A ∩ B
OR: A ∪ B
NOT: A′
Fuzzy logic:
AND: µA∩B(x) = min(µA(x), µB(x))
OR: µA∪B(x) = max(µA(x), µB(x))
NOT: µA′(x) = 1 − µA(x)
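The min/max calculus is a few lines of code. The linear membership functions below are an assumption chosen to reproduce the lecture's values µC(15) = 2/3 and µW(15) = 1/3, with Warm taken as the complement of Cold:

```python
# Fuzzy-set sketch of the Cold/Warm example: membership functions are
# assumed linear between the breakpoints 10 and 25, consistent with
# mu_C(15) = 2/3 and mu_W(15) = 1/3 in the slides.
def mu_cold(x):
    if x <= 10:
        return 1.0
    if x >= 25:
        return 0.0
    return (25.0 - x) / 15.0

def mu_warm(x):
    return 1.0 - mu_cold(x)                 # NOT Cold

def fuzzy_and(a, b):
    return min(a, b)                        # mu_{A∩B} = min(mu_A, mu_B)

def fuzzy_or(a, b):
    return max(a, b)                        # mu_{A∪B} = max(mu_A, mu_B)

x = 15
print(mu_cold(x))                           # 2/3: it is quite cold
print(mu_warm(x))                           # 1/3: not really warm
print(fuzzy_and(mu_cold(x), mu_warm(x)))    # cold AND warm: 1/3
print(fuzzy_or(mu_cold(x), mu_warm(x)))     # cold OR warm: 2/3
```

The same functions answer the later Q1/Q2 slides: "cold AND warm" peaks at 1/3, "cold OR warm" at 2/3, for x = 15.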
Example
Q1: Is the temperature x = 15 cold?
A1: It is quite cold since µC(15) = 2/3.
Q2: Is x = 15 warm?
A2: It is not really warm since µW(15) = 1/3.
[Figure: membership functions µC and µW evaluated at x = 15]

Example
Q1: Is it cold AND warm?
[Figure: µC∩W, the pointwise minimum of µC and µW]
Q2: Is it cold OR warm?
[Figure: µC∪W, the pointwise maximum of µC and µW]
Fuzzy Control System
[Block diagram: r, y → Fuzzy Controller → u → Plant → y]
The fuzzy controller is a nonlinear mapping from y (and r) to u;
r, y, u : [0, ∞) → R are conventional signals.

Fuzzifier
Fuzzy set evaluation of input y.
Example: y = 15 gives µC(15) = 2/3 and µW(15) = 1/3.

Fuzzy Controller
[Block diagram: y → Fuzzifier → Fuzzy Inference → Defuzzifier → u]
Fuzzifier: fuzzy set evaluation of y (and r)
Fuzzy Inference: fuzzy set calculations
Defuzzifier: map fuzzy set to u
The fuzzifier and defuzzifier act as interfaces to the crisp signals.

Fuzzy Inference
1. Calculate degree of fulfillment for each rule
2. Calculate fuzzy output of each rule
3. Aggregate rule outputs
Examples of fuzzy rules:
Rule 1: IF y is Cold THEN u is High
Rule 2: IF y is Warm THEN u is Low
1. Calculate degree of fulfillment for the rules
[Figure: rule antecedents evaluated against the fuzzified input]

2. Calculate fuzzy output of each rule
[Figure: each rule's output set weighted by its degree of fulfillment]

3. Aggregate rule outputs
[Figure: union of the individual rule output sets]

Defuzzifier
[Figure: the aggregated fuzzy set is mapped to a crisp u]
Note that "mu" is standard fuzzy-logic nomenclature for "truth value".

Fuzzy Controller—Summary
[Block diagram: y → Fuzzifier → Fuzzy Inference → Defuzzifier → u]
Fuzzifier: fuzzy set evaluation of y (and r)
Fuzzy Inference: fuzzy set calculations
1. Calculate degree of fulfillment for each rule
2. Calculate fuzzy output of each rule
3. Aggregate rule outputs
Defuzzifier: map fuzzy set to u

Example—Fuzzy Control of Steam Engine
http://isc.faqs.org/docs/air/ttfuzzy.html

Rule-Based View of Fuzzy Control
[Figure]

Nonlinear View of Fuzzy Control
[Figure: the resulting static nonlinearity from controller inputs to u]
Pros and Cons of Fuzzy Control
Advantages
• User-friendly way to design nonlinear controllers
• Explicit representation of operator (process) knowledge
• Intuitive for non-experts in conventional control
Disadvantages
• Limited analysis and synthesis
• Sometimes hard to combine with classical control
• Not obvious how to include dynamics in the controller
Fuzzy control is a way to obtain a class of nonlinear controllers.

Neural Networks
• How does the brain work?
• A network of computing components (neurons)

Neurons
[Figure: a brain neuron and an artificial neuron with inputs x1, . . . , xn,
weights w1, . . . , wn, bias b, summation Σ and nonlinearity φ(·)]

Model of a Neuron
Inputs: x1, x2, . . . , xn
Weights: w1, w2, . . . , wn
Bias: b
Nonlinearity: φ(·)
Output: y = φ(b + Σᵢ₌₁ⁿ wᵢxᵢ)
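The neuron model is essentially one line of code. The logistic sigmoid used for φ below is an illustrative choice; the slides leave φ generic:

```python
# A single artificial neuron, y = phi(b + sum_i w_i x_i).
# The logistic sigmoid for phi is an assumed default, not from the notes.
import math

def neuron(x, w, b, phi=lambda s: 1.0 / (1.0 + math.exp(-s))):
    """Weighted sum of the inputs plus bias, passed through phi."""
    s = b + sum(wi * xi for wi, xi in zip(w, x))
    return phi(s)

# With zero bias and weights orthogonal to the input, the weighted sum
# is zero and the activation is phi(0) = 0.5.
print(neuron(x=[1.0, 0.0], w=[0.0, 2.0], b=0.0))  # 0.5
```

Stacking such neurons in layers, with the output of one layer feeding the next, gives the feedforward network of the following slide.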
A Simple Neural Network
Neural network consisting of six neurons:
[Figure: inputs u1, u2, u3 → Input Layer → Hidden Layer → Output Layer → y]
Represents a nonlinear mapping from inputs to outputs.

Success Stories
Fuzzy controls:
• Zadeh (1965)
• Complex problems but with possible linguistic controls
• Applications took off in the mid 70's
  – Cement kilns, washing machines, vacuum cleaners
Artificial neural networks:
• McCulloch & Pitts (1943), Minsky (1951)
• Complex problems with unknown and highly nonlinear structure
• Applications took off in the mid 80's
  – Pattern recognition (e.g., speech, vision), data classification

Neural Network Design
1. How many hidden layers?
2. How many neurons in each layer?
3. How to choose the weights?
The choice of weights is often done adaptively through learning.

Today's Goal
You should
• understand the basics of fuzzy logic and fuzzy controllers
• understand simple neural networks

Next Lecture
• EL2620 Nonlinear Control revisited
• Spring courses in control
• Master thesis projects
• PhD thesis projects
EL2620 Nonlinear Control — Lecture 14
• Summary and repetition
• Spring courses in control
• Master thesis projects

Exam
• Regular written exam (in English) with five problems
• Sign up on the course homepage
• You may bring lecture notes, Glad & Ljung "Reglerteknik", and
  TEFYMA or BETA
  (No other material: textbooks, exercises, calculators etc. Any
  other basic control book must be approved by me before the exam.)
• See homepage for old exams

Question 1
What's on the exam?
• Nonlinear models: equilibria, phase portraits, linearization and stability
• Lyapunov stability (local and global), LaSalle
• Circle Criterion, Small Gain Theorem, Passivity Theorem
• Compensating static nonlinearities
• Describing functions
• Sliding modes, equivalent controls
• Lyapunov based design: back-stepping
• Exact feedback linearization, input-output linearization, zero dynamics
• Nonlinear controllability
• Optimal control

Question 2
What design method should I use in practice?
The answer is highly problem dependent. Possible (learning) approach:
• Start with the simplest:
  – linear methods (loop shaping, state feedback, . . . )
• Evaluate:
  – strong nonlinearities (under feedback!)?
  – varying operating conditions?
  – analyze and simulate with the nonlinear model
• Some nonlinearities to compensate for?
  – saturations, valves etc
• Is the system generically nonlinear? E.g., ẋ = xu
Question 3
Can a system be proven stable with the Small Gain Theorem and
unstable with the Circle Criterion?
• No, the Small Gain Theorem, Passivity Theorem and Circle
  Criterion all provide only sufficient conditions for stability.
• But, if one method does not prove stability, another one may.
• Since they do not provide necessary conditions for stability, none
  of them can be used to prove instability.

Question 4
Can you review the circle criterion? What about k1 < 0 < k2?

The Circle Criterion
Theorem: Consider a feedback loop with y = Gu and
u = −f(y). Assume G(s) is stable and that

    k1 ≤ f(y)/y ≤ k2.

If the Nyquist curve of G(s) stays on the correct side of the circle
defined by the points −1/k1 and −1/k2, then the closed-loop
system is BIBO stable.
[Figure: sector condition k1 y ≤ f(y) ≤ k2 y and the Nyquist curve
G(iω) relative to the circle through −1/k1 and −1/k2]

The different cases (stable system G):
1. 0 < k1 < k2: stay outside the circle
2. 0 = k1 < k2: stay to the right of the line Re s = −1/k2
3. k1 < 0 < k2: stay inside the circle
Other cases: multiply f and G by −1.
Only Cases 1 and 2 were studied in the lectures. Only stable G was studied.
Question 5
Please repeat antiwindup.

Antiwindup—General State-Space Model
[Block diagram: controller state xc driven through F, G, C, D and an
integrator s⁻¹; the saturated signal u = sat(v) is fed back through K,
giving the observer-based form with system matrices F − KC and G − KD]
Choose K such that F − KC has stable eigenvalues.

Tracking PID
[Block diagrams (a) and (b): PID controller (gains K, K/Ti, KTd s) with an
actuator or actuator model; the difference es between the actuator output u
and the controller output v is fed back through 1/Tt and an integrator 1/s
to reset the integral state when the actuator saturates]

Question 6
Please repeat Lyapunov theory.
Stability Definitions
An equilibrium point x = 0 of ẋ = f(x) is
• locally stable, if for every R > 0 there exists r > 0, such that
      ‖x(0)‖ < r  ⇒  ‖x(t)‖ < R, t ≥ 0
• locally asymptotically stable, if locally stable and
      ‖x(0)‖ < r  ⇒  lim_{t→∞} x(t) = 0
• globally asymptotically stable, if asymptotically stable for all
  x(0) ∈ Rⁿ.

Lyapunov Theorem for Global Stability
Theorem: Let ẋ = f(x) and f(0) = 0. Assume that V : Rⁿ → R
is a C¹ function. If
• V(0) = 0
• V(x) > 0, for all x ≠ 0
• V̇(x) < 0 for all x ≠ 0
• V(x) → ∞ as ‖x‖ → ∞
then x = 0 is globally asymptotically stable.

Lyapunov Theorem for Local Stability
Theorem: Let ẋ = f(x), f(0) = 0, and 0 ∈ Ω ⊂ Rⁿ. Assume that
V : Ω → R is a C¹ function. If
• V(0) = 0
• V(x) > 0, for all x ∈ Ω, x ≠ 0
• V̇(x) ≤ 0 along all trajectories in Ω
then x = 0 is locally stable. Furthermore, if
• V̇(x) < 0 for all x ∈ Ω, x ≠ 0
then x = 0 is locally asymptotically stable.

LaSalle's Theorem for Global Asymptotic Stability
Theorem: Let ẋ = f(x) and f(0) = 0. If there exists a C¹ function
V : Rⁿ → R such that
(1) V(0) = 0
(2) V(x) > 0 for all x ≠ 0
(3) V̇(x) ≤ 0 for all x
(4) V(x) → ∞ as ‖x‖ → ∞
(5) The only solution of ẋ = f(x) such that V̇(x) = 0 is x(t) = 0
    for all t
then x = 0 is globally asymptotically stable.

LaSalle's Invariant Set Theorem
Theorem: Let Ω ⊂ Rⁿ be a bounded and closed set that is invariant
with respect to ẋ = f(x).
Let V : Rⁿ → R be a C¹ function such that V̇(x) ≤ 0 for x ∈ Ω.
Let E be the set of points in Ω where V̇(x) = 0. If M is the largest
invariant set in E, then every solution with x(0) ∈ Ω approaches M
as t → ∞.
Remark: a compact set (bounded and closed) is obtained if we, e.g.,
consider
    Ω = {x ∈ Rⁿ | V(x) ≤ c}
and V is a positive definite function.

Example: Pendulum with friction

    ẋ1 = x2,  ẋ2 = −(g/l) sin x1 − (k/m) x2

    V(x) = (g/l)(1 − cos x1) + ½x2²  ⇒  V̇ = −(k/m) x2²

• We cannot prove global asymptotic stability; why?
• The domain is compact if we consider
      Ω = {(x1, x2) ∈ R² | V(x) ≤ c}
• The set E = {(x1, x2) | V̇ = 0} is E = {(x1, x2) | x2 = 0}
• The invariant points in E are given by ẋ1 = x2 = 0 and ẋ2 = 0.
  Thus, the largest invariant set in E is
      M = {(x1, x2) | x1 = kπ, x2 = 0}
• If we, e.g., consider Ω : x1² + x2² ≤ 1 then
  M = {(x1, x2) | x1 = 0, x2 = 0} and we have proven
  asymptotic stability of the origin.

Relation to Poincare-Bendixson Theorem
Poincare-Bendixson: Any orbit of a continuous 2nd order system that
stays in a compact region of the phase plane approaches its ω-limit
set, which is either a fixed point, a periodic orbit, or several fixed
points connected through homoclinic or heteroclinic orbits.
In particular, if the compact region does not contain any fixed point
then the ω-limit set is a limit cycle.
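The pendulum argument can be sanity-checked numerically: along a simulated trajectory, V should never increase and should decay towards zero. The parameter values g/l = k/m = 1, the initial condition and the RK4 step below are illustrative assumptions:

```python
# Numerical check of the pendulum Lyapunov argument: V is non-increasing
# along trajectories and the state settles near the origin.
# g/l = k/m = 1 and the step size are illustrative choices.
import math

def f(x1, x2, gl=1.0, km=1.0):
    """Pendulum with friction: xdot1 = x2, xdot2 = -(g/l) sin x1 - (k/m) x2."""
    return x2, -gl * math.sin(x1) - km * x2

def V(x1, x2, gl=1.0):
    """Lyapunov candidate V = (g/l)(1 - cos x1) + x2^2 / 2."""
    return gl * (1.0 - math.cos(x1)) + 0.5 * x2 * x2

def rk4_step(x1, x2, h=1e-3):
    k1 = f(x1, x2)
    k2 = f(x1 + h / 2 * k1[0], x2 + h / 2 * k1[1])
    k3 = f(x1 + h / 2 * k2[0], x2 + h / 2 * k2[1])
    k4 = f(x1 + h * k3[0], x2 + h * k3[1])
    return (x1 + h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]),
            x2 + h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]))

x1, x2 = 1.0, 0.0
vs = [V(x1, x2)]
for _ in range(20000):                     # simulate 20 s
    x1, x2 = rk4_step(x1, x2)
    vs.append(V(x1, x2))

# V never increases (up to integration round-off) and has nearly vanished.
assert all(b <= a + 1e-12 for a, b in zip(vs, vs[1:]))
assert vs[-1] < 1e-3 * vs[0]
```

Note that V̇ = 0 whenever x2 = 0, so V alone only proves stability; it is the LaSalle argument above that upgrades this to asymptotic stability.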
Question 7
Please repeat the most important facts about sliding modes.
There are 3 essential parts you need to understand:
1. The sliding manifold
2. The sliding control
3. The equivalent control

Step 1. The Sliding Manifold S
Aim: we want to stabilize the equilibrium of the dynamic system

    ẋ = f(x) + g(x)u,  x ∈ Rⁿ, u ∈ R¹

Idea: use u to force the system onto a sliding manifold S of
dimension n − 1 in finite time,

    S = {x ∈ Rⁿ | σ(x) = 0},  σ ∈ R¹

and make S invariant.
If x ∈ R² then S is R¹, i.e., a curve in the state-plane (phase plane).

Example
For ẋ1 = x2(t), ẋ2 = x1(t)x2(t) + u(t), choose S for desired behavior, e.g.,

    σ(x) = ax1 + x2 = 0  ⇒  ẋ1 = −ax1(t)

Choose large a: fast convergence along the sliding manifold.

Step 2. The Sliding Controller
Use Lyapunov ideas to design u(x) such that S is an attracting
invariant set.
For the 2nd order system ẋ1 = x2, ẋ2 = f(x) + g(x)u and
σ = x1 + x2, the Lyapunov function V(x) = 0.5σ² yields V̇ = σσ̇, and we get

    V̇ = σ(x2 + f(x) + g(x)u) < 0  ⇐  u = −(f(x) + x2 + sgn(σ))/g(x)

Example: f(x) = x1x2, g(x) = 1, σ = x1 + x2, yields

    u = −x1x2 − x2 − sgn(x1 + x2)

Note!
In previous years it has often been assumed that the sliding mode
control is always of the form u = −sgn(σ).
This is OK, but is not completely general (see the example).

Step 3. The Equivalent Control
When the trajectory reaches the sliding mode, i.e., x ∈ S, then u will chatter
(high frequency switching).
However, an equivalent control ueq(t) that keeps x(t) on S can be
computed from σ̇ = 0 when σ = 0.
Example:

    σ̇ = ẋ1 + ẋ2 = x2 + x1x2 + ueq = 0  ⇒  ueq = −x2 − x1x2

Thus, the sliding controller will take the system to the sliding manifold
S in finite time, and the equivalent control will keep it on S.

Question 8
Can you repeat backstepping?

Backstepping Design
We are concerned with finding a stabilizing control u(x) for the
system

    ẋ = f(x, u)

General Lyapunov control design: determine a Control Lyapunov
function V(x, u) and determine u(x) so that

    V(x) > 0,  V̇(x) < 0  ∀x ∈ Rⁿ

In this course we only consider f(x, u) with a special structure,
namely strict feedback structure.
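The three sliding-mode steps can be exercised on the example system above. A minimal Euler simulation (step size, horizon and initial condition are illustrative choices) shows the state reaching σ = x1 + x2 = 0 in finite time and then sliding towards the origin:

```python
# Sketch: simulate xdot1 = x2, xdot2 = x1*x2 + u with the sliding
# controller u = -x1*x2 - x2 - sgn(x1 + x2) from the example.
# Euler step and initial condition are illustrative choices.
def sgn(s):
    return 1.0 if s > 0 else (-1.0 if s < 0 else 0.0)

def simulate(x1, x2, dt=1e-4, T=5.0):
    t = 0.0
    while t < T:
        sigma = x1 + x2
        u = -x1 * x2 - x2 - sgn(sigma)   # gives sigma_dot = -sgn(sigma)
        x1 += x2 * dt
        x2 += (x1 * x2 + u) * dt
        t += dt
    return x1, x2

x1, x2 = simulate(1.0, 0.0)
# On the manifold sigma = 0 we have x2 = -x1, so xdot1 = -x1 and the
# state should have decayed towards the origin (up to chattering).
assert abs(x1 + x2) < 1e-2
assert abs(x1) < 0.1
print(round(x1, 3), round(x2, 3))
```

The discrete-time sgn causes the small residual chattering around σ = 0 that the equivalent-control discussion above explains.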
Strict Feedback Systems

    ẋ1 = f1(x1) + g1(x1)x2
    ẋ2 = f2(x1, x2) + g2(x1, x2)x3
    ẋ3 = f3(x1, x2, x3) + g3(x1, x2, x3)x4
    ..
    .
    ẋn = fn(x1, . . . , xn) + gn(x1, . . . , xn)u

where gk ≠ 0.
Note: ẋ1, . . . , ẋk do not depend on xk+2, . . . , xn.

The Backstepping Idea
Given a Control Lyapunov Function V1(x1), with corresponding
control u = φ1(x1), for the system

    ẋ1 = f1(x1) + g1(x1)u

find a Control Lyapunov function V2(x1, x2), with corresponding
control u = φ2(x1, x2), for the system

    ẋ1 = f1(x1) + g1(x1)x2
    ẋ2 = f2(x1, x2) + u

The Backstepping Result
Let V1(x1) be a Control Lyapunov Function for the system

    ẋ1 = f1(x1) + g1(x1)u

with corresponding controller u = φ(x1).
Then V2(x1, x2) = V1(x1) + (x2 − φ(x1))²/2 is a Control
Lyapunov Function for the system

    ẋ1 = f1(x1) + g1(x1)x2
    ẋ2 = f2(x1, x2) + u

with corresponding controller

    u(x) = dφ/dx1 (f(x1) + g(x1)x2) − dV/dx1 g(x1) − (x2 − φ(x1)) − f2(x1, x2)

Question 9
Repeat backlash compensation.
Backlash Compensation
• Deadzone
• Linear controller design
• Backlash inverse

Linear controller design: phase lead compensation

    F(s) = K(1 + sT2)/(1 + sT1)

[Block diagram: θref → Σ (−) → F(s) → 1/(1 + sT) → θ̇in → 1/s → θin →
backlash → θout]

• Choose the compensation F(s) such that the intersection with the
  describing function is removed; with T1 = 0.5, T2 = 2.0:
[Figure: Nyquist diagrams of the compensated loop]
[Figure: u and y with/without filter — the oscillation is removed!]

Question 10
Can you repeat linearization through high gain feedback?

Inverting Nonlinearities
Compensation of a static nonlinearity through inversion:
[Block diagram: Controller F(s) → fˆ⁻¹(·) → f(·) → G(s), with feedback]
Should be combined with feedback as in the figure!
Remark: How to Obtain f⁻¹ from f using Feedback
[Block diagram: v → Σ (−) → ê → k/s → u → f(·), with ê = v − f(u)]
If k > 0 is large and df/du > 0, then ê → 0 and

    0 = v − f(u)  ⇔  f(u) = v  ⇔  u = f⁻¹(v)

Question 11
What should we know about input-output stability?
You should understand and be able to derive/apply
• System gain γ(S) = sup_{u∈L2} ‖y‖₂/‖u‖₂
• BIBO stability
• Small Gain Theorem
• Circle Criterion
• Passivity Theorem

Question 12
What about describing functions?

Idea Behind Describing Function Method
[Feedback loop: r → Σ (−) → e → N.L. → u → G(s) → y]
e(t) = A sin ωt gives

    u(t) = Σₙ₌₁^∞ √(aₙ² + bₙ²) sin[nωt + arctan(aₙ/bₙ)]

If |G(inω)| ≪ |G(iω)| for n ≥ 2, then n = 1 suffices, so that

    y(t) ≈ |G(iω)| √(a₁² + b₁²) sin[ωt + arctan(a₁/b₁) + arg G(iω)]
Definition of Describing Function
[Diagram: e(t) → N.L. → u(t), approximated by e(t) → N(A, ω) → b1(t)]
The describing function is

    N(A, ω) = (b1(ω) + i a1(ω))/A

If G is low pass and a0 = 0 then

    b1(t) = |N(A, ω)|A sin[ωt + arg N(A, ω)] ≈ u(t)

Existence of Periodic Solutions
[Feedback loop: e → f(·) → u → G(s) → y, with negative feedback]

    y = G(iω)u = −G(iω)N(A)y  ⇒  G(iω) = −1/N(A)

The intersections of the curves G(iω) and −1/N(A)
give ω and A for a possible periodic solution.

More Courses in Control
• EL1820 Modelling of Dynamic Systems, per 1
• EL2421 Project Course in Automatic Control, per 2
• EL2450 Hybrid and Embedded Control Systems, per 3
• EL2745 Principles of Wireless Sensor Networks, per 3
• EL2520 Control Theory and Practice, Advanced Course, per 4

EL2520 Control Theory and Practice, Advanced Course
Aim: provide an introduction to principles and methods in advanced
control, especially multivariable feedback systems.
• Period 4, 7.5 p
• Multivariable control:
  – Linear multivariable systems
  – Robustness and performance
  – Design of multivariable controllers: LQG, H∞-optimization
  – Real time optimization: Model Predictive Control (MPC)
• Lectures, exercises, labs, computer exercises
Contact: Mikael Johansson [email protected]
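The definition N(A, ω) = (b1 + i a1)/A can be checked numerically for an ideal relay u = H sgn(e), whose describing function is the standard result 4H/(πA); the values of H, A and the quadrature resolution below are illustrative choices:

```python
# Numerical describing function: first Fourier harmonic of nl(A sin(wt))
# over one period, divided by A. Checked against the ideal relay, for
# which N(A) = 4H/(pi*A) is a standard result.
import math

def describing_function(nl, A, n=100000):
    """Return N(A) = (b1 + i*a1)/A for a static nonlinearity nl."""
    b1 = a1 = 0.0
    dt = 2.0 * math.pi / n
    for k in range(n):
        t = k * dt
        u = nl(A * math.sin(t))
        b1 += u * math.sin(t) * dt / math.pi   # in-phase Fourier coefficient
        a1 += u * math.cos(t) * dt / math.pi   # quadrature Fourier coefficient
    return complex(b1, a1) / A

H, A = 1.0, 2.0
relay = lambda e: H if e >= 0 else -H
N = describing_function(relay, A)

assert abs(N - 4.0 * H / (math.pi * A)) < 1e-3   # matches 4H/(pi A)
assert abs(N.imag) < 1e-3                         # single-valued static NL: N real
print(round(N.real, 4))
```

A real N(A) means −1/N(A) lies on the negative real axis, which is where its intersection with the Nyquist curve G(iω) is sought on the next slide.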
EL1820 Modelling of Dynamic Systems
Aim: teach how to systematically build mathematical models of
technical systems from physical laws and from measured signals.
• Period 1, 6 p
• Model dynamical systems from
  – physics: Lagrangian mechanics, electrical circuits etc
  – experiments: parametric identification, frequency response
• Computer tools for modeling, identification, and simulation
• Lectures, exercises, labs, computer exercises
Contact: Håkan Hjalmarsson, [email protected]

EL2450 Hybrid and Embedded Control Systems
Aim: course on analysis, design and implementation of control
algorithms in networked and embedded systems.
• Period 3, 7.5 p
• How is control implemented in reality:
  – Computer-implementation of control algorithms
  – Scheduling of real-time software
  – Control over communication networks
• Lectures, exercises, homework, computer exercises
Contact: Dimos Dimarogonas [email protected]

EL2745 Principles of Wireless Sensor Networks
Aim: provide the participants with a basic knowledge of wireless
sensor networks (WSN).
• Period 3, 7.5 cr
• THE INTERNET OF THINGS
  – Essential tools within communication, control, optimization and
    signal processing needed to cope with WSNs
  – Design of practical WSNs
  – Research topics in WSNs
Contact: Carlo Fischione [email protected]

EL2421 Project Course in Control
Aim: provide practical knowledge about modeling, analysis, design,
and implementation of control systems. Give some experience in
project management and presentation.
• Period 4, 12 p
• "From start to goal...": apply the theory from other courses
• Team work
• Preparation for Master thesis project
• Project management (lecturers from industry)
• No regular lectures or labs
Contact: Jonas Mårtensson [email protected]

Doing Master Thesis Project at Automatic Control Lab
◦ Theory and practice
◦ Cross-disciplinary
◦ The research edge
◦ Collaboration with leading industry and universities
◦ Get insight in research and development
Hints:
• The topic and the results of your thesis are up to you
• Discuss with professors, lecturers, PhD and MS students
• Check old projects

Doing PhD Thesis Project at Automatic Control
• Intellectual stimuli
• Get paid for studying
• International collaborations and travel
• Competitive
• World-wide job market
• Research (50%), courses (30%), teaching (20%), fun (100%)
• 4-5 years to PhD (lic after 2-3 years)