3. Fundamentals of Lyapunov Theory - HCMUT

Applied Nonlinear Control
Nguyen Tan Tien - 2002.3
3. Fundamentals of Lyapunov Theory
The objective of this chapter is to present the Lyapunov stability theorems and to illustrate their use in the analysis and design of nonlinear systems.
3.1 Nonlinear Systems and Equilibrium Points

Nonlinear systems
A nonlinear dynamic system can usually be represented by a set of nonlinear differential equations of the form

ẋ = f(x, t)    (3.1)

where
f : nonlinear vector function, f ∈ R^n
x : state vector, x ∈ R^n
n : order of the system

The form (3.1) can represent both the closed-loop dynamics of a feedback control system and dynamic systems where no control signals are involved.

A special class of nonlinear systems is linear systems. The dynamics of linear systems are of the form ẋ = A(t) x with A ∈ R^(n×n).

Autonomous and non-autonomous systems
Linear systems are classified as either time-varying or time-invariant. For nonlinear systems, these adjectives are replaced by autonomous and non-autonomous.

Definition 3.1 The nonlinear system (3.1) is said to be autonomous if f does not depend explicitly on time, i.e., if the system's state equation can be written

ẋ = f(x)    (3.2)

Otherwise, the system is called non-autonomous.

Equilibrium points
It is possible for a system trajectory to correspond to only a single point. Such a point is called an equilibrium point. As we shall see later, many stability problems are naturally formulated with respect to equilibrium points.

Definition 3.2 A state x* is an equilibrium state (or equilibrium point) of the system if once x(t) is equal to x*, it remains equal to x* for all future time.

Mathematically, this means that the constant vector x* satisfies

0 = f(x*)    (3.3)

Equilibrium points can be found by solving (3.3).

A linear time-invariant system

ẋ = A x    (3.4)

has a single equilibrium point (the origin 0) if A is nonsingular. If A is singular, it has an infinity of equilibrium points, contained in the null space of the matrix A, i.e., the subspace defined by Ax = 0. A nonlinear system can have several (or infinitely many) isolated equilibrium points.

Example 3.1 The pendulum___________________________
Consider the pendulum of Fig. 3.1, whose dynamics is given by the following nonlinear autonomous equation

MR²θ̈ + bθ̇ + MgR sin θ = 0    (3.5)

where R is the pendulum's length, M its mass, b the friction coefficient at the hinge, and g the gravity constant.

Fig. 3.1 Pendulum

Letting x1 = θ, x2 = θ̇, the corresponding state-space equations are

ẋ1 = x2    (3.6a)
ẋ2 = −(b/MR²) x2 − (g/R) sin x1    (3.6b)

Therefore the equilibrium points are given by x2 = 0, sin x1 = 0, which leads to the points (0 [2π], 0) and (π [2π], 0). Physically, these points correspond to the pendulum resting exactly at the vertical down and up positions.
__________________________________________________________________________________________

In linear system analysis and design, for notational and analytical simplicity, we often transform the linear system equations in such a way that the equilibrium point is the origin of the state-space.

Nominal motion
Let x*(t) be the solution of ẋ = f(x), i.e., the nominal motion trajectory, corresponding to the initial condition x*(0) = x0. Let us now perturb the initial condition to be x(0) = x0 + δx0, and study the associated variation of the motion error

e(t) = x(t) − x*(t)

as illustrated in Fig. 3.2.

Fig. 3.2 Nominal and perturbed motions
Since both x*(t) and x(t) are solutions of (3.2): ẋ = f(x), we have

ẋ* = f(x*),  x*(0) = x0
ẋ = f(x),   x(0) = x0 + δx0

and then e(t) satisfies the following non-autonomous differential equation

ė = f(x* + e) − f(x*) = g(e, t)    (3.8)

with initial condition e(0) = δx0. Since g(0, t) = 0, the new dynamic system, with e as state and g in place of f, has an equilibrium point at the origin of the state space. Therefore, instead of studying the deviation of x(t) from x*(t) for the original system, we may simply study the stability of the perturbation dynamics (3.8) with respect to the equilibrium point 0. Note, however, that the perturbation dynamics are non-autonomous, due to the presence of the nominal trajectory x*(t) on the right-hand side.
Example 3.2________________________________________
Consider the autonomous mass-spring system

m ẍ + k1 x + k2 x³ = 0

which contains a nonlinear term reflecting the hardening effect of the spring. Let us study the stability of the motion x*(t) which starts from the initial point x0. Assume that we slightly perturb the initial position to be x(0) = x0 + δx0. The resulting system trajectory is denoted as x(t). Proceeding as before, the equivalent differential equation governing the motion error e is

m ë + k1 e + k2 [e³ + 3e² x*(t) + 3e x*²(t)] = 0

Clearly, this is a non-autonomous system.
__________________________________________________________________________________________
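The perturbation setup of Example 3.2 is easy to check numerically. The sketch below (the values m = 1, k1 = 1, k2 = 0.5, the initial point, and the perturbation size are arbitrary choices, not from the text) integrates the nominal and the perturbed motions with a classical Runge-Kutta scheme and forms the motion error e(t) = x(t) − x*(t) at the final time:

```python
def msd(state, m=1.0, k1=1.0, k2=0.5):
    # Mass-spring dynamics m*x'' + k1*x + k2*x**3 = 0, written as a
    # first-order system in (x, xdot); parameter values are arbitrary.
    x, v = state
    return (v, -(k1 * x + k2 * x**3) / m)

def rk4(f, state, dt, steps):
    # Classical 4th-order Runge-Kutta integration over `steps` steps.
    for _ in range(steps):
        k1 = f(state)
        k2 = f(tuple(s + 0.5 * dt * d for s, d in zip(state, k1)))
        k3 = f(tuple(s + 0.5 * dt * d for s, d in zip(state, k2)))
        k4 = f(tuple(s + dt * d for s, d in zip(state, k3)))
        state = tuple(s + dt / 6.0 * (a + 2 * b + 2 * c + d)
                      for s, a, b, c, d in zip(state, k1, k2, k3, k4))
    return state

x_nom = rk4(msd, (1.0, 0.0), 0.01, 1000)           # nominal motion x*(t) at t = 10
x_pert = rk4(msd, (1.0 + 1e-3, 0.0), 0.01, 1000)   # perturbed motion x(t) at t = 10
e = x_pert[0] - x_nom[0]                           # motion error e(10)
```

For a small initial perturbation the error stays small over this horizon, which is exactly what studying the perturbation dynamics around the equilibrium e = 0 formalizes.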
3.2 Concepts of Stability

Notation
B_R : spherical region (or ball) defined by ‖x‖ < R
S_R : the sphere itself, defined by ‖x‖ = R
∀ : for any
∃ : there exists
∈ : in the set
⇒ : implies that
⇔ : is equivalent to
Stability and instability

Definition 3.3 The equilibrium state x = 0 is said to be stable if, for any R > 0, there exists r > 0, such that if ‖x(0)‖ < r then ‖x(t)‖ < R for all t ≥ 0. Otherwise, the equilibrium point is unstable.

Using the above symbols, Definition 3.3 can be written in the form

∀R > 0, ∃r > 0, ‖x(0)‖ < r ⇒ ∀t ≥ 0, ‖x(t)‖ < R

or, equivalently

∀R > 0, ∃r > 0, x(0) ∈ B_r ⇒ ∀t ≥ 0, x(t) ∈ B_R

Essentially, stability (also called stability in the sense of Lyapunov, or Lyapunov stability) means that the system trajectory can be kept arbitrarily close to the origin by starting sufficiently close to it. More formally, the definition states that the origin is stable if, given any ball B_R of arbitrarily specified radius R that we do not want the state trajectory x(t) to get out of, a radius r can be found such that trajectories starting within the ball B_r never leave B_R. The geometrical implication of stability is indicated in Fig. 3.3.

Fig. 3.3 Concepts of stability (curve 1: asymptotically stable; curve 2: marginally stable; curve 3: unstable)

Asymptotic stability and exponential stability
In many engineering applications, Lyapunov stability is not enough. For example, when a satellite's attitude is disturbed from its nominal position, we not only want the satellite to maintain its attitude in a range determined by the magnitude of the disturbance, i.e., Lyapunov stability, but also require that the attitude gradually go back to its original value. This type of engineering requirement is captured by the concept of asymptotic stability.

Definition 3.4 An equilibrium point 0 is asymptotically stable if it is stable, and if in addition there exists some r > 0 such that ‖x(0)‖ < r implies that x(t) → 0 as t → ∞.

Asymptotic stability means that the equilibrium is stable, and in addition, states starting close to 0 actually converge to 0 as time goes to infinity. Fig. 3.3 shows that the system trajectories starting from within the ball B_r converge to the origin. The ball B_r is called a domain of attraction of the equilibrium point.

In many engineering applications, it is still not sufficient to know that a system will converge to the equilibrium point after infinite time. There is a need to estimate how fast the system trajectory approaches 0. The concept of exponential stability can be used for this purpose.

Definition 3.5 An equilibrium point 0 is exponentially stable if there exist two strictly positive numbers α and λ such that

∀t > 0, ‖x(t)‖ ≤ α ‖x(0)‖ e^(−λt)    (3.9)

in some ball B_r around the origin.

(3.9) means that the state vector of an exponentially stable system converges to the origin faster than an exponential function. The positive number λ is called the rate of exponential convergence.

For example, the system ẋ = −(1 + sin²x) x is exponentially convergent to x = 0 with the rate λ = 1. Indeed, its solution is

x(t) = x(0) exp(−∫₀ᵗ [1 + sin² x(τ)] dτ)

and therefore |x(t)| ≤ |x(0)| e^(−t).

Note that exponential stability implies asymptotic stability. But asymptotic stability does not guarantee exponential stability, as can be seen from the system

ẋ = −x², x(0) = 1    (3.10)

whose solution is x = 1/(1 + t), a function slower than any exponential function e^(−λt).

Local and global stability

Definition 3.6 If asymptotic (or exponential) stability holds for any initial state, the equilibrium point is said to be asymptotically (or exponentially) stable in the large. It is also called globally asymptotically (or exponentially) stable.

3.3 Linearization and Local Stability

Lyapunov's linearization method is concerned with the local stability of a nonlinear system. It is a formalization of the intuition that a nonlinear system should behave similarly to its linearized approximation for small-range motions.

Consider the autonomous system in (3.2), and assume that f(x) is continuously differentiable. Then the system dynamics can be written as

ẋ = (∂f/∂x)_{x=0} x + f_h.o.t.(x)    (3.11)

where f_h.o.t. stands for higher-order terms in x. Let us use the constant matrix A to denote the Jacobian matrix of f with respect to x at x = 0: A = (∂f/∂x)_{x=0}. Then the system

ẋ = A x    (3.12)

is called the linearization (or linear approximation) of the original system at the equilibrium point 0.

In practice, finding a system's linearization is often most easily done simply by neglecting any term of order higher than 1 in the dynamics, as we now illustrate.

Example 3.4________________________________________
Consider the system

ẋ1 = x2² + x1 cos x2
ẋ2 = x2 + (x1 + 1) x1 + x1 sin x2

Its linearized approximation about x = 0 is

ẋ1 ≈ 0 + x1 · 1 = x1
ẋ2 ≈ x2 + 0 + x1 + x1 x2 ≈ x2 + x1

The linearized system can thus be written ẋ = A x with A = [1 0; 1 1].

A similar procedure can be applied to a controlled system. Consider the system ẍ + 4ẋ⁵ + (x² + 1) u = 0. The system can be linearly approximated about x = 0 as ẍ + 0 + (0 + 1) u = 0, i.e., ẍ = u. Assume that the control law for the original nonlinear system has been selected to be u = sin x + ẋ³ + ẋ cos 2x; then the linearized closed-loop dynamics is ẍ + ẋ + x = 0.
__________________________________________________________________________________________

The following result makes precise the relationship between the stability of the linear system (3.12) and that of the original nonlinear system (3.2).

Theorem 3.1 (Lyapunov's linearization method)
• If the linearized system is strictly stable (i.e., if all eigenvalues of A are strictly in the left-half complex plane), then the equilibrium point is asymptotically stable (for the actual nonlinear system).
• If the linearized system is unstable (i.e., if at least one eigenvalue of A is strictly in the right-half complex plane), then the equilibrium point is unstable (for the nonlinear system).
• If the linearized system is marginally stable (i.e., if all eigenvalues of A are in the left-half complex plane, but at least one of them is on the jω axis), then one cannot conclude anything from the linear approximation (the equilibrium point may be stable, asymptotically stable, or unstable for the nonlinear system).

Example 3.5________________________________________
Consider the equilibrium point (θ = π, θ̇ = 0) of the pendulum in Example 3.1. In the neighborhood of θ = π, we can write

sin θ = sin π + cos π (θ − π) + h.o.t. = π − θ + h.o.t.

Thus, letting θ̃ = θ − π, the system's linearization about the equilibrium point (θ = π, θ̇ = 0) is

d²θ̃/dt² + (b/MR²) dθ̃/dt − (g/R) θ̃ = 0

Hence its linear approximation is unstable, and therefore so is the nonlinear system at this equilibrium point.
__________________________________________________________________________________________

Example 3.6________________________________________
Consider the first-order system ẋ = a x + b x⁵. The origin 0 is an equilibrium point of this system. The linearization of this system around the origin is ẋ = a x. The application of Lyapunov's linearization method indicates the following stability properties of the nonlinear system:
• a < 0 : asymptotically stable
• a > 0 : unstable
• a = 0 : cannot tell from the linearization
In the third case, the nonlinear system is ẋ = b x⁵. The linearization method fails, while, as we shall see, the direct method to be described can easily solve this problem.
__________________________________________________________________________________________
3.4 Lyapunov's Direct Method

The basic philosophy of Lyapunov's direct method is the mathematical extension of a fundamental physical observation: if the total energy of a mechanical (or electrical) system is continuously dissipated, then the system, whether linear or nonlinear, must eventually settle down to an equilibrium point. Thus, we may conclude the stability of a system by examining the variation of a single scalar function.

Let us consider the nonlinear mass-damper-spring system in Fig. 3.6, whose dynamic equation is

m ẍ + b ẋ|ẋ| + k0 x + k1 x³ = 0    (3.13)

with
b ẋ|ẋ| : nonlinear dissipation or damping
k0 x + k1 x³ : nonlinear spring term

Fig. 3.6 A nonlinear mass-damper-spring system

Total mechanical energy = kinetic energy + potential energy:

V(x) = (1/2) m ẋ² + ∫₀ˣ (k0 x + k1 x³) dx = (1/2) m ẋ² + (1/2) k0 x² + (1/4) k1 x⁴    (3.14)

Comparing the definitions of stability and mechanical energy, we can see some relations between the mechanical energy and the concepts described earlier:
• zero energy corresponds to the equilibrium point (x = 0, ẋ = 0)
• asymptotic stability implies the convergence of mechanical energy to zero
• instability is related to the growth of mechanical energy

These relations indicate that the value of a scalar quantity, the mechanical energy, indirectly reflects the magnitude of the state vector, and furthermore, that the stability properties of the system can be characterized by the variation of the mechanical energy of the system.

The rate of energy variation during the system's motion is obtained by differentiating the first equality in (3.14) and using (3.13):

V̇(x) = m ẋ ẍ + (k0 x + k1 x³) ẋ = ẋ (−b ẋ|ẋ|) = −b |ẋ|³    (3.15)

(3.15) implies that the energy of the system, starting from some initial value, is continuously dissipated by the damper until the mass settles down, i.e., until ẋ = 0.

The direct method of Lyapunov is based on a generalization of the concepts in the above mass-spring-damper system to more complex systems.

3.4.1 Positive definite functions and Lyapunov functions

Definition 3.7 A scalar continuous function V(x) is said to be locally positive definite if V(0) = 0 and, in a ball B_R0,

x ≠ 0 ⇒ V(x) > 0

If V(0) = 0 and the above property holds over the whole state space, then V(x) is said to be globally positive definite.

For instance, the function V(x) = (1/2) MR² x2² + MgR(1 − cos x1), which is the mechanical energy of the pendulum in Example 3.1, is locally positive definite.

Let us describe the geometrical meaning of locally positive definite functions. Consider a positive definite function V(x) of two state variables x1 and x2. In 3-dimensional space, V(x) typically corresponds to a surface looking like an upward cup, as shown in Fig. 3.7. The lowest point of the cup is located at the origin.

Fig. 3.7 Typical shape of a positive definite function V(x1, x2) (level values V3 > V2 > V1)

The 2-dimensional geometrical representation can be made as follows. Taking x1 and x2 as Cartesian coordinates, the level curves V(x1, x2) = Vα typically represent a set of ovals surrounding the origin, with each oval corresponding to a positive value of Vα. These ovals, often called contour curves, may be thought of as the sections of the cup by horizontal planes, projected on the (x1, x2) plane, as shown in Fig. 3.8.

Fig. 3.8 Interpreting positive definite functions using contour curves

Definition 3.8 If, in a ball B_R0, the function V(x) is positive definite and has continuous partial derivatives, and if its time derivative along any state trajectory of system (3.2) is negative semi-definite, i.e., V̇(x) ≤ 0, then V(x) is said to be a Lyapunov function for the system (3.2).
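Definition 3.8 can be illustrated on the mass-damper-spring system (3.13) with the energy (3.14) as Lyapunov function. The sketch below (the values of m, b, k0, k1 and the initial state are arbitrary choices, not from the text) simulates the system and checks that the energy never increases along the trajectory, as (3.15) predicts:

```python
m, b, k0, k1 = 1.0, 0.8, 1.0, 0.4   # arbitrary parameter values

def f(state):
    # m*x'' + b*x'*|x'| + k0*x + k1*x**3 = 0 as a first-order system
    x, v = state
    return (v, -(b * v * abs(v) + k0 * x + k1 * x**3) / m)

def V(state):
    # Total mechanical energy (3.14)
    x, v = state
    return 0.5 * m * v**2 + 0.5 * k0 * x**2 + 0.25 * k1 * x**4

def rk4_step(state, dt):
    # One classical 4th-order Runge-Kutta step.
    a = f(state)
    b2 = f(tuple(s + 0.5 * dt * d for s, d in zip(state, a)))
    c = f(tuple(s + 0.5 * dt * d for s, d in zip(state, b2)))
    d2 = f(tuple(s + dt * d for s, d in zip(state, c)))
    return tuple(s + dt / 6.0 * (p + 2 * q + 2 * r + w)
                 for s, p, q, r, w in zip(state, a, b2, c, d2))

state = (2.0, 0.0)
energies = [V(state)]
for _ in range(2000):                # integrate to t = 20
    state = rk4_step(state, 0.01)
    energies.append(V(state))

# V'(x) = -b*|xdot|^3 <= 0, so the sampled energies should be non-increasing
never_increases = all(e2 <= e1 + 1e-6 for e1, e2 in zip(energies, energies[1:]))
```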
A Lyapunov function can be given simple geometrical interpretations. In Fig. 3.9, the point denoting the value of V(x1, x2) is seen to always slide down along the cup. In Fig. 3.10, the state point is seen to move across contour curves corresponding to lower and lower values of V.

Fig. 3.9 Illustrating Definition 3.8 for n = 2

Fig. 3.10 Illustrating Definition 3.8 for n = 2 using contour curves

3.4.2 Equilibrium point theorems

Lyapunov's theorem for local stability

Theorem 3.2 (Local stability) If, in a ball B_R0, there exists a scalar function V(x) with continuous first partial derivatives such that
• V(x) is positive definite (locally in B_R0)
• V̇(x) is negative semi-definite (locally in B_R0)
then the equilibrium point 0 is stable. If, actually, the derivative V̇(x) is locally negative definite in B_R0, then the stability is asymptotic.

In applying the above theorem for the analysis of a nonlinear system, we must go through two steps: choosing a positive definite function, and then determining its derivative along the trajectories of the nonlinear system.

Example 3.7 Local stability___________________________
A simple pendulum with viscous damping is described by

θ̈ + θ̇ + sin θ = 0

Consider the following scalar function

V(x) = (1 − cos θ) + θ̇²/2

Obviously, this function is locally positive definite. As a matter of fact, this function represents the total energy of the pendulum, composed of the sum of the potential energy and the kinetic energy. Its time derivative yields

V̇(x) = θ̇ sin θ + θ̇ θ̈ = −θ̇² ≤ 0

Therefore, by invoking the above theorem, we can conclude that the origin is a stable equilibrium point. In fact, using the physical meaning, we can see why V̇(x) ≤ 0, namely that the damping term absorbs energy. Actually, V̇(x) is precisely the power dissipated in the pendulum. However, with this Lyapunov function, we cannot draw a conclusion on the asymptotic stability of the system, because V̇(x) is only negative semi-definite.
__________________________________________________________________________________________

Example 3.8 Asymptotic stability_______________________
Let us study the stability of the nonlinear system defined by

ẋ1 = x1 (x1² + x2² − 2) − 4 x1 x2²
ẋ2 = 4 x1² x2 + x2 (x1² + x2² − 2)

around its equilibrium point at the origin. Given the positive definite function

V(x1, x2) = x1² + x2²

its derivative V̇ along any system trajectory is

V̇ = 2(x1² + x2²)(x1² + x2² − 2)

Thus, V̇ is locally negative definite in the region defined by (x1² + x2²) < 2. Therefore, the above theorem indicates that the origin is asymptotically stable.
__________________________________________________________________________________________

Lyapunov theorem for global stability

Theorem 3.3 (Global stability) Assume that there exists a scalar function V of the state x, with continuous first-order derivatives, such that
• V(x) is positive definite
• V̇(x) is negative definite
• V(x) → ∞ as ‖x‖ → ∞
then the equilibrium at the origin is globally asymptotically stable.

Example 3.9 A class of first-order systems_______________
Consider the nonlinear system

ẋ + c(x) = 0

where c is any continuous function of the same sign as its scalar argument x, i.e., such that x c(x) > 0 for all x ≠ 0. Intuitively, this condition indicates that −c(x) 'pushes' the system back towards its rest position x = 0, but is otherwise arbitrary.
Since c is continuous, this also implies that c(0) = 0 (Fig. 3.13). Consider as the Lyapunov function candidate the square of the distance to the origin, V = x². The function V is radially unbounded, since it tends to infinity as |x| → ∞. Its derivative is V̇ = 2 x ẋ = −2 x c(x). Thus V̇ < 0 as long as x ≠ 0, so that x = 0 is a globally asymptotically stable equilibrium point.

Fig. 3.13 The function c(x)

For instance, the system ẋ = sin²x − x is globally convergent to x = 0, since for x ≠ 0, sin²x ≤ |sin x| ≤ |x|. Similarly, the system ẋ = −x³ is globally asymptotically convergent to x = 0. Notice that while this system's linear approximation (ẋ ≈ 0) is inconclusive, even about local stability, the actual nonlinear system enjoys a strong stability property (global asymptotic stability).
__________________________________________________________________________________________

Example 3.10______________________________________
Consider the nonlinear system

ẋ1 = x2 − x1 (x1² + x2²)
ẋ2 = −x1 − x2 (x1² + x2²)

The origin of the state-space is an equilibrium point for this system. Let V be the positive definite function V = x1² + x2². Its derivative along any system trajectory is

V̇ = −2 (x1² + x2²)²

which is negative definite. Therefore, the origin is a globally asymptotically stable equilibrium point. Note that the globalness of this stability result also implies that the origin is the only equilibrium point of the system.
__________________________________________________________________________________________

⊗ Note that:
- Many Lyapunov functions may exist for the same system.
- For a given system, specific choices of Lyapunov functions may yield more precise results than others.
- Along the same line, the theorems in Lyapunov analysis are all sufficiency theorems. If for a particular choice of Lyapunov function candidate V the conditions on V̇ are not met, we cannot draw any conclusion on the stability or instability of the system; the only conclusion we should draw is that a different Lyapunov function candidate should be tried.

3.4.3 Invariant set theorems

Definition 3.9 A set G is an invariant set for a dynamic system if every system trajectory which starts from a point in G remains in G for all future time.

Local invariant set theorem
The invariant set theorem reflects the intuition that the decrease of a Lyapunov function V has to gradually vanish (i.e., V̇ has to converge to zero) because V is lower bounded. A precise statement of this result is as follows.

Theorem 3.4 (Local invariant set theorem) Consider an autonomous system of the form (3.2), with f continuous, and let V(x) be a scalar function with continuous first partial derivatives. Assume that
• for some l > 0, the region Ω_l defined by V(x) < l is bounded
• V̇(x) ≤ 0 for all x in Ω_l
Let R be the set of all points within Ω_l where V̇(x) = 0, and M be the largest invariant set in R. Then, every solution x(t) originating in Ω_l tends to M as t → ∞.

⊗ Note that:
- M is the union of all invariant sets (e.g., equilibrium points or limit cycles) within R
- In particular, if the set R is itself invariant (i.e., if once V̇ = 0, then V̇ ≡ 0 for all future time), then M = R

The geometrical meaning of the theorem is illustrated in Fig. 3.14, where a trajectory starting from within the bounded region Ω_l is seen to converge to the largest invariant set M. Note that the set R is not necessarily connected, nor is the set M.

Fig. 3.14 Convergence to the largest invariant set M

The asymptotic stability result in the local Lyapunov theorem can be viewed as a special case of the above invariant set theorem, where the set M consists only of the origin.

Let us illustrate applications of the invariant set theorem using some examples.

Example 3.11______________________________________
Asymptotic stability of the mass-damper-spring system
For the system (3.13), we can only draw a conclusion of marginal stability using the energy function (3.14) in the local equilibrium point theorem, because V̇ is only negative semi-definite according to (3.15). Using the invariant set theorem, however, we can show that the system is actually asymptotically stable. To do this, we only have to show that the set M contains only one point.
The set R defined by ẋ = 0, i.e., the collection of states with zero velocity, is the whole horizontal axis in the phase plane (x, ẋ). Let us show that the largest invariant set M in this set R contains only the origin. Assume that M contains a point with a non-zero position x1; then the acceleration at that point is ẍ = −(k0/m) x − (k1/m) x³ ≠ 0. This implies that the trajectory will immediately move out of the set R, and thus also out of the set M, a contradiction to the definition.
__________________________________________________________________________________________
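The conclusion of Example 3.11 (asymptotic, not merely marginal, stability) can be observed numerically. The sketch below (the parameter values and the initial state are arbitrary choices, not from the text) first checks that a point of R with non-zero position has non-zero acceleration, then simulates (3.13) over a long horizon and verifies that the energy (3.14) becomes small, i.e., that the state approaches the single point in M:

```python
m, b, k0, k1 = 1.0, 0.8, 1.0, 0.4   # arbitrary parameter values

def f(state):
    # m*x'' + b*x'*|x'| + k0*x + k1*x**3 = 0 as a first-order system
    x, v = state
    return (v, -(b * v * abs(v) + k0 * x + k1 * x**3) / m)

# On R (zero velocity) with x != 0 the acceleration is non-zero,
# so no subset of R other than the origin can be invariant.
accel_on_R = f((1.0, 0.0))[1]

def rk4_step(state, dt):
    # One classical 4th-order Runge-Kutta step.
    a = f(state)
    b2 = f(tuple(s + 0.5 * dt * d for s, d in zip(state, a)))
    c = f(tuple(s + 0.5 * dt * d for s, d in zip(state, b2)))
    d2 = f(tuple(s + dt * d for s, d in zip(state, c)))
    return tuple(s + dt / 6.0 * (p + 2 * q + 2 * r + w)
                 for s, p, q, r, w in zip(state, a, b2, c, d2))

state = (2.0, 0.0)
for _ in range(50000):               # integrate to t = 500
    state = rk4_step(state, 0.01)

# Energy (3.14) at the final time; it should be close to zero
energy = 0.5 * state[1]**2 + 0.5 * k0 * state[0]**2 + 0.25 * k1 * state[0]**4
```

The long horizon is needed because the quadratic damping b ẋ|ẋ| becomes weak at small velocities, so the decay is slow near the origin.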
Example 3.12 Domain of attraction____________________
Consider again the system in Example 3.8. For l = 1, the region Ω_l, defined by V(x1, x2) = x1² + x2² < 1, is bounded. The set R is simply the origin 0, which is an invariant set (since it is an equilibrium point). All the conditions of the local invariant set theorem are satisfied and, therefore, any trajectory starting within the circle converges to the origin. Thus, a domain of attraction is explicitly determined by the invariant set theorem.
__________________________________________________________________________________________
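A quick numerical companion to Example 3.12 (the initial state, step size, and horizon are arbitrary choices, not from the text): a forward-Euler simulation of the system of Example 3.8, started inside the unit circle, converges to the origin:

```python
x1, x2 = 0.6, 0.3            # V(0) = x1^2 + x2^2 = 0.45 < 1: inside Omega_l
dt = 0.001
for _ in range(10000):        # integrate to t = 10 with forward Euler
    s = x1**2 + x2**2 - 2.0
    dx1 = x1 * s - 4.0 * x1 * x2**2
    dx2 = 4.0 * x1**2 * x2 + x2 * s
    x1, x2 = x1 + dt * dx1, x2 + dt * dx2

V_final = x1**2 + x2**2       # should be essentially zero
```

Since V̇ = 2V(V − 2) ≈ −4V near the origin, V decays roughly like e^(−4t), so V_final is tiny after t = 10.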
Example 3.13 Attractive limit cycle_____________________
Consider again the system

ẋ1 = x2 − x1⁷ (x1⁴ + 2x2² − 10)
ẋ2 = −x1³ − 3x2⁵ (x1⁴ + 2x2² − 10)

Note that the set defined by x1⁴ + 2x2² = 10 is invariant, since

d/dt (x1⁴ + 2x2² − 10) = −(4x1¹⁰ + 12x2⁶)(x1⁴ + 2x2² − 10)

which is zero on the set. The motion on this invariant set is described (equivalently) by either of the equations

ẋ1 = x2
ẋ2 = −x1³

Therefore, we see that the invariant set actually represents a limit cycle, along which the state vector moves clockwise. Is this limit cycle actually attractive? Let us define the Lyapunov function candidate

V = (x1⁴ + 2x2² − 10)²

which represents a measure of the "distance" to the limit cycle. For any arbitrary positive number l, the region Ω_l, which surrounds the limit cycle, is bounded. The derivative of V is

V̇ = −8 (x1¹⁰ + 3x2⁶)(x1⁴ + 2x2² − 10)²

Thus V̇ is strictly negative, except if x1⁴ + 2x2² = 10 or x1¹⁰ + 3x2⁶ = 0, in which cases V̇ = 0. The first equation is simply that defining the limit cycle, while the second is verified only at the origin. Since both the limit cycle and the origin are invariant sets, the set M simply consists of their union. Thus, all system trajectories starting in Ω_l converge either to the limit cycle or to the origin (Fig. 3.15).

Fig. 3.15 Convergence to a limit cycle

Moreover, the equilibrium point at the origin can actually be shown to be unstable. Any state trajectory starting from the region within the limit cycle, excluding the origin, actually converges to the limit cycle.
__________________________________________________________________________________________

Example 3.11 actually represents a very common application of the invariant set theorem: concluding asymptotic stability of an equilibrium point for systems with negative semi-definite V̇. The following corollary of the invariant set theorem is more specifically tailored to such applications.

Corollary Consider the autonomous system (3.2), with f continuous, and let V(x) be a scalar function with continuous partial derivatives. Assume that in a certain neighborhood Ω of the origin
• V(x) is locally positive definite
• V̇(x) is negative semi-definite
• the set R defined by V̇(x) = 0 contains no trajectories of (3.2) other than the trivial trajectory x ≡ 0
Then, the equilibrium point 0 is asymptotically stable. Furthermore, the largest connected region of the form Ω_l (defined by V(x) < l) within Ω is a domain of attraction of the equilibrium point.

⊗ Note that:
- The above corollary replaces the negative definiteness condition on V̇ in Lyapunov's local asymptotic stability theorem by a negative semi-definiteness condition on V̇, combined with a third condition on the trajectories within R.
- The largest connected region of the form Ω_l within Ω is a domain of attraction of the equilibrium point, but not necessarily the whole domain of attraction, because the function V is not unique.
- The set Ω itself is not necessarily a domain of attraction. Actually, the above corollary does not guarantee that Ω is invariant: some trajectories starting in Ω but outside of the largest Ω_l may actually end up outside Ω.

Global invariant set theorem
The above invariant set theorem and its corollary can be simply extended to a global result, by enlarging the involved region to be the whole state space and requiring the radial unboundedness of the scalar function V.
Theorem 3.5 (Global Invariant Set Theorem) Consider an
autonomous system of the form (3.2), with f continuous, and
let V (x) be a scalar function with continuous first partial
derivatives. Assume that
• V̇(x) ≤ 0 over the whole state space
• V(x) → ∞ as ||x|| → ∞
Let R be the set of all points where V̇(x) = 0, and let M be the largest invariant set in R. Then all solutions globally asymptotically converge to M as t → ∞.
For instance, the above theorem shows that the limit cycle
convergence in Example 3.13 is actually global: all system
trajectories converge to the limit cycle (unless they start
exactly at the origin, which is an unstable equilibrium point).
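Theorem 3.5 can be sanity-checked numerically. The sketch below uses a toy scalar system of our own choosing (not one from the text): ẋ = −x³ with V = x². Here V̇ = −2x⁴ ≤ 0 everywhere, V is radially unbounded, and R = {x : V̇(x) = 0} = {0}, so M = {0} and every trajectory should converge to the origin.

```python
def simulate(x0, f, dt=1e-3, T=100.0):
    """Integrate x' = f(x) with classical 4th-order Runge-Kutta; return samples."""
    x, xs = x0, [x0]
    for _ in range(int(T / dt)):
        k1 = f(x); k2 = f(x + 0.5*dt*k1); k3 = f(x + 0.5*dt*k2); k4 = f(x + dt*k3)
        x += (dt/6.0)*(k1 + 2*k2 + 2*k3 + k4)
        xs.append(x)
    return xs

f = lambda x: -x**3   # toy system: V' = -2x^4 <= 0 over the whole state space
V = lambda x: x*x     # radially unbounded; R = {V' = 0} = {0}, so M = {0}

for x0 in (-3.0, 0.5, 2.0):
    xs = simulate(x0, f)
    vs = [V(x) for x in xs]
    assert all(v2 <= v1 + 1e-12 for v1, v2 in zip(vs, vs[1:]))  # V never increases
    assert abs(xs[-1]) < 0.1   # every trajectory approaches M = {0}
```

Note that convergence here is only asymptotic (x(t) ≈ x0/√(1 + 2x0²t)), which is why the final tolerance is loose.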
Because of the importance of this theorem, let us present an
additional (and very useful) example.
Example 3.14 A class of second-order nonlinear systems___
Consider a second-order system of the form
ẍ + b(ẋ) + c(x) = 0
where b and c are continuous functions verifying the sign conditions ẋ b(ẋ) > 0 for ẋ ≠ 0 and x c(x) > 0 for x ≠ 0. The dynamics of a mass-damper-spring system with nonlinear damper and spring can be described by an equation of this form, with the above sign conditions simply indicating that the otherwise arbitrary functions b and c actually represent "damping" and "spring" effects. A nonlinear R-L-C (resistor-inductor-capacitor) electrical circuit can also be represented by the above dynamic equation (Fig. 3.16).
Fig. 3.16 A nonlinear R-L-C circuit (v_L = ẍ, v_R = b(ẋ), v_C = c(x))
Note that if the functions b and c are actually linear (b(ẋ) = α1 ẋ, c(x) = α0 x), the above sign conditions are simply the necessary and sufficient conditions for the system's stability (since they are equivalent to the conditions α1 > 0, α0 > 0).
Together with the continuity assumptions, the sign conditions on b and c imply that b(0) = 0 and c(0) = 0 (Fig. 3.17).
Fig. 3.17 The functions b(ẋ) and c(x)
A positive definite function for this system is
V = (1/2) ẋ² + ∫₀ˣ c(y) dy
which can be thought of as the sum of the kinetic and potential energy of the system. Differentiating V, we obtain
V̇ = ẋ ẍ + c(x) ẋ = −ẋ b(ẋ) − ẋ c(x) + c(x) ẋ = −ẋ b(ẋ) ≤ 0
which can be thought of as representing the power dissipated in the system. Furthermore, by hypothesis, ẋ b(ẋ) = 0 only if ẋ = 0. Now ẋ = 0 implies that ẍ = −c(x), which is non-zero as long as x ≠ 0. Thus the system cannot get "stuck" at an equilibrium value other than x = 0; in other words, with R being the set defined by ẋ = 0, the largest invariant set M in R contains only one point, namely [x = 0, ẋ = 0]. Use of the local invariant set theorem indicates that the origin is a locally asymptotically stable point.
Furthermore, if the integral ∫₀ˣ c(r) dr is unbounded as |x| → ∞, then V is a radially unbounded function and the equilibrium point at the origin is globally asymptotically stable, according to the global invariant set theorem.
__________________________________________________________________________________________
Example 3.15 Multimodal Lyapunov Function___________
Consider the system
ẍ + |x² − 1| ẋ³ + x = sin(πx/2)
Choose the Lyapunov function
V = (1/2) ẋ² + ∫₀ˣ (y − sin(πy/2)) dy
This function has two minima, at x = ±1, ẋ = 0, and a local maximum in x (a saddle point in the state-space) at x = 0, ẋ = 0. Its derivative is
V̇ = −|x² − 1| ẋ⁴
i.e., the virtual power "dissipated" by the system. Now V̇ = 0 ⇒ ẋ = 0 or x = ±1. Let us consider each of these cases:
ẋ = 0 ⇒ ẍ = sin(πx/2) − x ≠ 0 except if x = 0 or x = ±1
x = ±1 ⇒ ẍ = 0
Thus the invariant set theorem indicates that the system converges globally to one of the three equilibrium points (x = 1, ẋ = 0), (x = −1, ẋ = 0), and (x = 0, ẋ = 0). The first two of these equilibrium points are stable, since they correspond to local minima of V (note again that linearization is inconclusive about their stability). By contrast, the equilibrium point (x = 0, ẋ = 0) is unstable, as can be shown from linearization (ẍ = (π/2 − 1)x), or simply by noticing that because that point is a local maximum of V along the x axis, any small deviation in the x direction will drive the trajectory away from it.
__________________________________________________________________________________________
⊗ Note that: Several Lyapunov functions may exist for a given system, and therefore several associated invariant sets may be derived.
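Example 3.15 can be explored numerically. The sketch below reads the damping coefficient as |x² − 1| and starts inside the right-hand "well" of V; since V cannot increase and the saddle value V(0, 0) = 0 exceeds the starting value of V, the trajectory should stay trapped near the minimum at x = 1.

```python
import math

# Multimodal system: x'' + |x^2 - 1| x'^3 + x = sin(pi*x/2)
def f(s):
    x, v = s
    return (v, math.sin(math.pi*x/2) - x - abs(x*x - 1)*v**3)

def V(x, v):
    # V = v^2/2 + int_0^x (y - sin(pi*y/2)) dy, evaluated in closed form
    return 0.5*v*v + 0.5*x*x + (2/math.pi)*(math.cos(math.pi*x/2) - 1)

def rk4(s, dt):
    add = lambda a, k, h: (a[0]+h*k[0], a[1]+h*k[1])
    k1 = f(s); k2 = f(add(s, k1, dt/2)); k3 = f(add(s, k2, dt/2)); k4 = f(add(s, k3, dt))
    return (s[0]+dt/6*(k1[0]+2*k2[0]+2*k3[0]+k4[0]),
            s[1]+dt/6*(k1[1]+2*k2[1]+2*k3[1]+k4[1]))

s, dt, vals, xs = (1.2, 0.0), 1e-3, [], []   # V(1.2, 0) < 0 = V(saddle)
for _ in range(50000):
    vals.append(V(*s)); xs.append(s[0]); s = rk4(s, dt)
assert all(b <= a + 1e-9 for a, b in zip(vals, vals[1:]))  # V never increases
assert all(0.7 < x < 1.3 for x in xs)   # trapped in the well around x = 1
```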
3.5 System Analysis Based on Lyapunov’s Direct Method
How to find a Lyapunov function for a specific problem? There is no general way of finding Lyapunov functions for nonlinear systems. Faced with specific systems, we have to use experience, intuition, and physical insight to search for an appropriate Lyapunov function.
In this section, we discuss a number of techniques which can facilitate the otherwise blind search for Lyapunov functions.
3.5.1 Lyapunov analysis of linear time-invariant systems
Symmetric, skew-symmetric, and positive definite matrices
Definition 3.10 A square matrix M is symmetric if M = Mᵀ (in other words, if ∀i, j  M_ij = M_ji). A square matrix M is skew-symmetric if M = −Mᵀ (i.e., ∀i, j  M_ij = −M_ji).
⊗ Note that:
- Any square n × n matrix can be represented as the sum of a symmetric and a skew-symmetric matrix. This can be shown by the decomposition
M = (M + Mᵀ)/2 + (M − Mᵀ)/2
where the first term is symmetric and the second is skew-symmetric.
- The quadratic function associated with a skew-symmetric matrix is always zero. Let M be an n × n skew-symmetric matrix and x an arbitrary n × 1 vector. The definition of a skew-symmetric matrix implies that xᵀMx = −xᵀMᵀx. Since xᵀMᵀx is a scalar equal to its own transpose, xᵀMx = −xᵀMx, which yields
∀x, xᵀMx = 0    (3.16)
In designing some tracking control systems for robots, this fact is very useful because it can simplify the control law.
- (3.16) is a necessary and sufficient condition for a matrix M to be skew-symmetric.
Definition 3.11 A square matrix M is positive definite (p.d.) if x ≠ 0 ⇒ xᵀMx > 0.
⊗ Note that:
- A necessary condition for a square matrix M to be p.d. is that its diagonal elements be strictly positive.
- A necessary and sufficient condition for a symmetric matrix M to be p.d. is that all its eigenvalues be strictly positive.
- A p.d. matrix is invertible.
- A p.d. matrix M can always be decomposed as
M = UᵀΛU    (3.17)
where UᵀU = I and Λ is a diagonal matrix containing the eigenvalues of M.
- The following facts hold:
• λmin(M) ||x||² ≤ xᵀMx ≤ λmax(M) ||x||²
• xᵀMx = xᵀUᵀΛUx = zᵀΛz, where z = Ux
• λmin(M) I ≤ Λ ≤ λmax(M) I
• zᵀz = ||x||²
The concepts of positive semi-definite, negative definite, and negative semi-definite can be defined similarly. For instance, a square n × n matrix M is said to be positive semi-definite (p.s.d.) if ∀x, xᵀMx ≥ 0. A time-varying matrix M(t) is uniformly positive definite if ∃α > 0, ∀t ≥ 0, M(t) ≥ αI.
Lyapunov functions for linear time-invariant systems
Given a linear system of the form ẋ = Ax, let us consider a quadratic Lyapunov function candidate V = xᵀPx, where P is a given symmetric positive definite matrix. Differentiating V along the system trajectories yields
V̇ = ẋᵀPx + xᵀPẋ = −xᵀQx    (3.18)
where
AᵀP + PA = −Q    (3.19)
Equation (3.19) is the so-called Lyapunov equation. Note that Q may not be p.d. even for stable systems.
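The two matrix facts used most often above, the symmetric/skew-symmetric decomposition and property (3.16), can be verified directly on a small example (a 2 × 2 case is enough to illustrate; the matrix below is arbitrary):

```python
def quad(M, x):                     # the quadratic form x^T M x
    return sum(x[i]*M[i][j]*x[j] for i in range(2) for j in range(2))

M = [[1.0, 4.0], [2.0, 3.0]]
S = [[(M[i][j] + M[j][i])/2 for j in range(2)] for i in range(2)]  # symmetric part
K = [[(M[i][j] - M[j][i])/2 for j in range(2)] for i in range(2)]  # skew-symmetric part

for x in ([1.0, 0.0], [0.3, -2.0], [5.0, 7.0]):
    assert abs(quad(K, x)) < 1e-12               # (3.16): skew part contributes nothing
    assert abs(quad(M, x) - quad(S, x)) < 1e-12  # so x^T M x = x^T (symmetric part) x
```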
Example 3.17______________________________________
Consider the second-order linear system with
A = [  0    4 ]
    [ −8  −12 ]
If we take P = I, then
−Q = PA + AᵀP = [  0   −4 ]
                [ −4  −24 ]
The matrix Q is not p.d. Therefore, no conclusion can be drawn from this Lyapunov function candidate on whether the system is stable or not.
__________________________________________________________________________________________
A more useful way of studying a given linear system using
quadratic functions is, instead, to derive a p.d. matrix P from a
given p.d. matrix Q, i.e.,
• choose a positive definite matrix Q
• solve for P from the Lyapunov equation (3.19)
• check whether P is p.d.
If P is p.d., then (1/2)xᵀPx is a Lyapunov function for the linear system, and global asymptotic stability is guaranteed.
Theorem 3.6 A necessary and sufficient condition for a LTI
system x& = A x to be strictly stable is that, for any symmetric
p.d. matrix Q, the unique matrix P solution of the Lyapunov
equation (3.19) be symmetric positive definite.
Example 3.18______________________________________
Consider again the second-order linear system of Example 3.17. Let us take Q = I and denote P by
P = [ p11  p12 ]
    [ p21  p22 ]
where, due to the symmetry of P, p21 = p12. Then the Lyapunov equation is
[ p11  p12 ] [  0    4 ]   [ 0  −8 ] [ p11  p12 ]   [ −1   0 ]
[ p21  p22 ] [ −8  −12 ] + [ 4 −12 ] [ p21  p22 ] = [  0  −1 ]
whose solution is p11 = 5/16, p12 = p22 = 1/16. The corresponding matrix
P = (1/16) [ 5  1 ]
           [ 1  1 ]
is p.d., and therefore the linear system is globally asymptotically stable.
__________________________________________________________________________________________
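Example 3.18 can be checked by writing out the three independent scalar equations hidden in the 2 × 2 Lyapunov equation and solving them; the solve below yields P = (1/16)[[5, 1], [1, 1]], which is positive definite, confirming Theorem 3.6 for this A.

```python
A = [[0.0, 4.0], [-8.0, -12.0]]

def lyap_lhs(a, b, c):
    """Entries (0,0), (0,1), (1,1) of A^T P + P A for P = [[a, b], [b, c]]."""
    P = [[a, b], [b, c]]
    M = [[sum(A[k][i]*P[k][j] + P[i][k]*A[k][j] for k in range(2))
          for j in range(2)] for i in range(2)]
    return [M[0][0], M[0][1], M[1][1]]

# The map (a, b, c) -> lyap_lhs is linear; build its 3x3 matrix column by column.
cols = [lyap_lhs(*e) for e in ([1, 0, 0], [0, 1, 0], [0, 0, 1])]
mat = [[cols[j][i] for j in range(3)] for i in range(3)]

def solve3(M, rhs):
    """Naive Gauss-Jordan elimination with partial pivoting for a 3x3 system."""
    aug = [row[:] + [rhs[i]] for i, row in enumerate(M)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(aug[r][col]))
        aug[col], aug[piv] = aug[piv], aug[col]
        for r in range(3):
            if r != col:
                fac = aug[r][col] / aug[col][col]
                aug[r] = [x - fac*y for x, y in zip(aug[r], aug[col])]
    return [aug[i][3] / aug[i][i] for i in range(3)]

a, b, c = solve3(mat, [-1.0, 0.0, -1.0])    # right-hand side is -Q = -I
assert abs(a - 5/16) < 1e-12 and abs(b - 1/16) < 1e-12 and abs(c - 1/16) < 1e-12
assert a > 0 and a*c - b*b > 0              # P is positive definite (Theorem 3.6)
```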
3.5.2 Krasovskii’s method
Krasovskii’s method suggests a simple form of Lyapunov function candidate for autonomous nonlinear systems of the form (3.2), namely, V = fᵀf. The basic idea of the method is simply to check whether this particular choice indeed leads to a Lyapunov function.
Theorem 3.7 (Krasovskii) Consider the autonomous system defined by (3.2), with the equilibrium point of interest being the origin. Let A(x) denote the Jacobian matrix of the system, i.e.,
A(x) = ∂f/∂x
If the matrix F = A + Aᵀ is negative definite in a neighborhood Ω, then the equilibrium point at the origin is asymptotically stable. A Lyapunov function for this system is
V(x) = fᵀ(x) f(x)
If Ω is the entire state space and, in addition, V(x) → ∞ as ||x|| → ∞, then the equilibrium point is globally asymptotically stable.
Example 3.19______________________________________
Consider the nonlinear system
ẋ1 = −6x1 + 2x2
ẋ2 = 2x1 − 6x2 − 2x2³
We have
A = ∂f/∂x = [ −6       2      ]
            [  2   −6 − 6x2²  ]
F = A + Aᵀ = [ −12       4       ]
             [   4  −12 − 12x2²  ]
The matrix F is easily shown to be negative definite for all x. Therefore, the origin is asymptotically stable, and a Lyapunov function candidate is
V(x) = fᵀ(x) f(x) = (−6x1 + 2x2)² + (2x1 − 6x2 − 2x2³)²
Since V(x) → ∞ as ||x|| → ∞, the equilibrium state at the origin is globally asymptotically stable.
__________________________________________________________________________________________
The applicability of the above theorem is limited in practice, because the Jacobians of many systems do not satisfy the negative definiteness requirement. In addition, for systems of higher order, it is difficult to check the negative definiteness of the matrix F for all x.
Theorem 3.8 (Generalized Krasovskii Theorem) Consider the autonomous system defined by (3.2), with the equilibrium point of interest being the origin, and let A(x) denote the Jacobian matrix of the system. Then a sufficient condition for the origin to be asymptotically stable is that there exist two symmetric positive definite matrices P and Q, such that ∀x ≠ 0, the matrix
F(x) = AᵀP + PA + Q
is negative semi-definite in some neighborhood Ω of the origin. The function V(x) = fᵀ(x) P f(x) is then a Lyapunov function for this system. If the region Ω is the whole state space and if, in addition, V(x) → ∞ as ||x|| → ∞, then the system is globally asymptotically stable.
3.5.3 The Variable Gradient Method
The variable gradient method is a formal approach to constructing Lyapunov functions. To start with, let us note that a scalar function V(x) is related to its gradient ∇V by the integral relation
V(x) = ∫₀ˣ ∇V dx
where ∇V = (∂V/∂x1, ..., ∂V/∂xn)ᵀ. In order to recover a unique scalar function V from the gradient ∇V, the gradient function has to satisfy the so-called curl conditions
∂∇Vi/∂xj = ∂∇Vj/∂xi   (i, j = 1, 2, ..., n)
Note that the ith component ∇Vi is simply the directional derivative ∂V/∂xi. For instance, in the case n = 2, the above simply means that
∂∇V1/∂x2 = ∂∇V2/∂x1
The principle of the variable gradient method is to assume a specific form for the gradient ∇V, instead of assuming a specific form for the Lyapunov function V itself. A simple way is to assume that the gradient function is of the form
∇Vi = Σ_{j=1}^{n} a_ij x_j    (3.21)
where the a_ij's are coefficients to be determined. This leads to the following procedure for seeking a Lyapunov function V:
• assume that ∇V is given by (3.21) (or another form)
• solve for the coefficients a_ij so as to satisfy the curl equations
• restrict the coefficients in (3.21) so that V̇ is negative semi-definite (at least locally)
• compute V from ∇V by integration
• check whether V is positive definite
Since satisfaction of the curl conditions implies that the above integration result is independent of the integration path, it is usually convenient to obtain V by integrating along a path which is parallel to each axis in turn, i.e.,
V(x) = ∫₀^{x1} ∇V1(x1, 0, ..., 0) dx1 + ∫₀^{x2} ∇V2(x1, x2, 0, ..., 0) dx2 + ... + ∫₀^{xn} ∇Vn(x1, x2, ..., xn) dxn
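Krasovskii's conditions from Example 3.19 lend themselves to a direct numerical check: a 2 × 2 symmetric matrix F is negative definite iff F[0][0] < 0 and det F > 0, and the candidate V = fᵀf should then decrease along system trajectories (simple Euler integration is enough here).

```python
def F(x2):
    """F = A + A^T for Example 3.19, which depends only on x2."""
    return [[-12.0, 4.0], [4.0, -12.0 - 12.0*x2*x2]]

for x2 in (-10.0, -1.0, 0.0, 0.5, 10.0):
    M = F(x2)
    assert M[0][0] < 0 and M[0][0]*M[1][1] - M[0][1]*M[1][0] > 0  # F is n.d.

def f(s):
    x1, x2 = s
    return (-6*x1 + 2*x2, 2*x1 - 6*x2 - 2*x2**3)

def V(s):
    g = f(s)
    return g[0]*g[0] + g[1]*g[1]    # Krasovskii's candidate V = f^T f

s, dt = (2.0, -3.0), 1e-4
prev = V(s)
for _ in range(30000):              # 3 seconds of simulated time, Euler steps
    g = f(s)
    s = (s[0] + dt*g[0], s[1] + dt*g[1])
    assert V(s) <= prev + 1e-9      # V decreases monotonically
    prev = V(s)
assert V(s) < 1e-6                  # trajectory has reached the origin
```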
Example 3.20______________________________________
Let us use the variable gradient method to find a Lyapunov function for the nonlinear system
ẋ1 = −2x1
ẋ2 = −2x2 + 2x1x2²
We assume that the gradient of the undetermined Lyapunov function has the following form
∇V1 = a11 x1 + a12 x2
∇V2 = a21 x1 + a22 x2
The curl equation is
∂∇V1/∂x2 = ∂∇V2/∂x1  ⇒  a12 + x2 ∂a12/∂x2 = a21 + x1 ∂a21/∂x1
If the coefficients are chosen to be a11 = a22 = 1, a12 = a21 = 0, which leads to ∇V1 = x1, ∇V2 = x2, then V can be computed as
V(x) = ∫₀^{x1} x1 dx1 + ∫₀^{x2} x2 dx2 = (x1² + x2²)/2    (3.22)
This is indeed p.d., and its derivative V̇ = −2x1² − 2x2² + 2x1x2³ is locally negative definite; therefore, asymptotic stability is guaranteed.
If the coefficients are chosen to be a11 = 1, a12 = x2², a21 = 3x2², a22 = 3, we obtain the p.d. function
V(x) = x1²/2 + (3/2)x2² + x1x2³    (3.23)
whose derivative is
V̇ = −2x1² − 6x2² − 2x2²(x1x2 − 3x1²x2²)
We can verify that V̇ is a locally negative definite function (noting that the quadratic terms are dominant near the origin), and therefore, (3.23) represents another Lyapunov function for the system.
__________________________________________________________________________________________
3.5.4 Physically motivated Lyapunov functions
3.5.5 Performance analysis
Lyapunov analysis can be used to determine the convergence rates of linear and nonlinear systems.
A simple convergence lemma
Lemma: If a real function W(t) satisfies the inequality
Ẇ(t) + α W(t) ≤ 0    (3.26)
where α is a real number, then
W(t) ≤ W(0) e^{−αt}
The above lemma implies that, if W is a non-negative function, the satisfaction of (3.26) guarantees the exponential convergence of W to zero.
Estimating convergence rates for linear systems
Denote the largest eigenvalue of the matrix P by λmax(P), the smallest eigenvalue of the matrix Q by λmin(Q), and their ratio λmin(Q)/λmax(P) by γ. The positive definiteness of P and Q implies that these scalars are all strictly positive. Since matrix theory shows that P ≤ λmax(P) I and λmin(Q) I ≤ Q, we have
xᵀQx ≥ (λmin(Q)/λmax(P)) xᵀ[λmax(P) I]x ≥ γ V
This and (3.18) imply that V̇ ≤ −γV. This, according to the lemma, means that V(t) ≤ V(0) e^{−γt}. This, together with the fact xᵀPx ≥ λmin(P) ||x(t)||², implies that the state x converges to the origin with a rate of at least γ/2.
The convergence rate estimate is largest for Q = I. Indeed, let P0 be the solution of the Lyapunov equation corresponding to Q = I:
AᵀP0 + P0A = −I
and let P be the solution corresponding to some other choice Q = Q1:
AᵀP + PA = −Q1
Without loss of generality, we can assume that λmin(Q1) = 1, since rescaling Q1 will rescale P by the same factor, and therefore will not affect the value of the corresponding γ. Subtracting the above two equations yields
Aᵀ(P − P0) + (P − P0)A = −(Q1 − I)
Now since λmin(Q1) = 1 = λmax(I), the matrix (Q1 − I) is positive semi-definite, and hence the above equation implies that (P − P0) is positive semi-definite. Therefore
λmax(P) ≥ λmax(P0)
Since λmin(Q1) = 1 = λmin(I), the convergence rate estimate γ = λmin(Q)/λmax(P) corresponding to Q = I is larger than (or equal to) that corresponding to Q = Q1.
Estimating convergence rates for nonlinear systems
Estimating the convergence rate for nonlinear systems also involves manipulating the expression of V̇ so as to obtain an explicit estimate of V. The difference lies in that, for nonlinear systems, V and V̇ are not necessarily quadratic functions of the states.
Example 3.22______________________________________
Consider again the system in Example 3.8
ẋ1 = x1(x1² + x2² − 2) − 4x1x2²
ẋ2 = 4x1²x2 + x2(x1² + x2² − 2)
Choose the Lyapunov function candidate V = ||x||²; its derivative is
V̇ = 2V(V − 2)
That is, dV/(V(2 − V)) = −2dt. The solution of this equation is easily found to be
V(t) = 2α e^{−4t} / (1 + α e^{−4t}),  where α = V(0)/(2 − V(0))
If ||x(0)||² = V(0) < 2, i.e., if the trajectory starts inside the circle of radius √2, then α > 0 and V(t) < 2α e^{−4t}. This implies that the norm ||x(t)|| of the state vector converges to zero exponentially, with a rate of at least 2.
However, if the trajectory starts outside this circle, i.e., if V(0) > 2, then α < 0, so that V(t) and therefore ||x|| tend to infinity in a finite time (the system is said to exhibit finite escape time, or "explosion").
__________________________________________________________________________________________
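A simulation sketch for Example 3.22, starting inside the circle so there is no finite escape. For the system as printed, the cross terms only rotate the state, so V = ||x||² obeys the scalar equation V̇ = 2V(V − 2) exactly, and the numerical solution should match the closed-form expression.

```python
import math

def f(s):
    x1, x2 = s
    r = x1*x1 + x2*x2 - 2
    return (x1*r - 4*x1*x2*x2, 4*x1*x1*x2 + x2*r)

def rk4(s, dt):
    add = lambda a, k, h: (a[0]+h*k[0], a[1]+h*k[1])
    k1 = f(s); k2 = f(add(s, k1, dt/2)); k3 = f(add(s, k2, dt/2)); k4 = f(add(s, k3, dt))
    return (s[0]+dt/6*(k1[0]+2*k2[0]+2*k3[0]+k4[0]),
            s[1]+dt/6*(k1[1]+2*k2[1]+2*k3[1]+k4[1]))

s, dt, T = (1.0, 0.0), 1e-3, 2.0          # V(0) = 1 < 2: start inside the circle
for _ in range(int(T/dt)):
    s = rk4(s, dt)

V_num = s[0]*s[0] + s[1]*s[1]
alpha = 1.0 / (2.0 - 1.0)                  # alpha = V(0) / (2 - V(0))
V_exact = 2*alpha*math.exp(-4*T) / (1 + alpha*math.exp(-4*T))
assert abs(V_num - V_exact) < 1e-6         # matches the closed-form solution
assert V_num < 2.1*math.exp(-4*T)          # ||x||^2 decays at rate 4, ||x|| at rate 2
```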
3.6 Control Design Based on Lyapunov’s Direct Method
There are basically two ways of using Lyapunov’s direct method for control design, and both have a trial-and-error flavor:
• Hypothesize one form of control law, and then find a Lyapunov function to justify the choice
• Hypothesize a Lyapunov function candidate, and then find a control law to make this candidate a real Lyapunov function
Example 3.23 Regulator design_______________________
Consider the problem of stabilizing the system ẍ − ẋ³ + x² = u.
__________________________________________________________________________________________