# Stability of Switched Systems in the Sense of Lyapunov Theorems and via Dwell Time

Kingdom of Saudi Arabia
Ministry of Education
Umm Al-Qura University
Faculty of Applied Sciences

Stability of Switched Systems in the Sense of Lyapunov Theorems

A Thesis Presented by
Mamdouh Omar Khudaydus (43580372)
Supervisor: Dr. Wajdi Kallel
Department of Mathematics

In partial fulfillment of the requirements for the degree of
Master of Science in Applied Mathematics
Umm Al-Qura University, Saudi Arabia
June 1, 2018
Dedicated to my parents, especially my mother, and to my lovely wife, my daughter, and my son.
Abstract

Switched systems are a class of hybrid systems that switch among several subsystems depending on various factors. In general, a switched system consists of a family of continuous-time subsystems together with a rule that governs the switching between them. One kind of hybrid system is the switched linear system, which consists of several linear subsystems and a rule that orchestrates the switching among them.
The presence of a disturbance in a system can even cause instability. The existence of a common Lyapunov function (CLF) is often a useful approach for studying the stability of various control systems. Consider the dynamic system of the form

ẋ(t) = Aσ(t) x(t),  x(t0) = x0

where x(t) ∈ Rn is the state, t0 is the initial time, and x0 is the initial state. σ(t) : [t0, +∞) → {1, 2, . . . , N} is the switching signal; thus

Aσ(t) : [t0, +∞) → {A1, A2, . . . , AN}

We can break the entire state space Ω into N subspaces Ωi, one corresponding to each subsystem, such that the subspaces share no common region:

Ωi ∩ Ωj = ∅,  i ≠ j

We can write Ωi = {x ∈ Rn : xᵀ(Aiᵀ Pi + Pi Ai)x ≤ 0}.
The problem is now: what conditions are required to ensure the stability of the switched system under a switching signal when it is composed of

1. stable subsystems,
2. both stable and unstable subsystems,
3. unstable subsystems?
Our work aims to answer this problem. It consists of three major chapters:

Chapter 1: We recall some important theorems on the stability of autonomous and non-autonomous systems, review the necessary background material from control theory, and recall some results on the existence and uniqueness of solutions to differential equations.

Chapter 2: We focus on the stability of switched systems under arbitrary switching when all subsystems are stable. The existence of a quadratic Lyapunov function (QLF) for the subsystems plays a dominant role in the stability analysis.

Chapter 3: This chapter consists of two cases. In the first case, the switched system consists of both Hurwitz-stable and unstable subsystems. In the second case, all the subsystems are unstable, so we need to determine sufficient conditions that ensure the stability of switched continuous-time systems under a switching-signal strategy.
Acknowledgements

I would especially like to thank my supervisor, Dr. Wajdi Kallel, for his patience; I was lucky to have a great teacher like him. Particular thanks must also go to Dr. Yousef Jaha, who really helped me join the Department of Mathematics. I am also glad to thank the honored members of the discussion committee, Prof. Muntaser Safan (Umm Al-Qura University) and Associate Professor Dr. Mohammed Eldessoky (King Abdulaziz University).
Contents

1 Classical Stability
  1.1 Introduction
    1.1.1 State-Space Models and Equations
    1.1.2 The Phase Plane of Autonomous Systems
    1.1.3 Constructing Phase Portraits
    1.1.4 Qualitative Behaviour of Linear Systems
  1.2 Multiple Equilibria
  1.3 Existence and Uniqueness
  1.4 Lyapunov Stability
    1.4.1 Autonomous Systems
    1.4.2 Internal Stability
    1.4.3 Lyapunov Direct Method
    1.4.4 Domain of Attraction
    1.4.5 Lyapunov Quadratic Form
  1.5 Linear Systems and Linearization
    1.5.1 Investigating the Asymptotic Stability of the LTI Dynamical System (1.31) via Theorem (1.4.5)
    1.5.2 Lyapunov Indirect Method
  1.6 The Invariance Principle (LaSalle's Invariance Principle)
  1.7 The Partial Stability of Nonlinear Dynamical Systems
    1.7.1 Addressing Partial Stability When Both Initial Conditions (x10, x20) Lie in a Neighborhood of the Origin
  1.8 Nonautonomous Systems
    1.8.1 The Unification of Time-Invariant Stability Theory and Stability Theory for Time-Varying Systems Through Partial Stability Theory
    1.8.2 Rephrasing the Partial Stability Theory for Nonlinear Time-Varying Systems
  1.9 Stability Theorems for Linear Time-Varying Systems and Linearization
    1.9.1 Applying the Lyapunov Indirect Method to Stabilizing a Nonlinear Controlled Dynamical System

2 Stability of Switched Systems
  2.1 Introduction
  2.2 Classes of Hybrid and Switched Systems
    2.2.1 State-Dependent Switching
    2.2.2 Time-Dependent Switching
    2.2.3 Stability to Non-Stability of Switched Systems and Vice Versa
    2.2.4 Autonomous Switching versus Controlled Switching
  2.3 Sliding and Hysteresis Switching
  2.4 Stability Under Arbitrary Switching
    2.4.1 Commuting Linear Systems
    2.4.2 Common Lyapunov Function (CLF)
    2.4.3 Switched Linear Systems
    2.4.4 Solution of a Linear Switched System Under an Arbitrary Switching Signal
    2.4.5 Commuting Nonlinear Systems
    2.4.6 Commutation and Triangular Systems
  2.5 Stability Under Constrained Switching
    2.5.1 Multiple Lyapunov Functions (MLF)
    2.5.2 Stability Under State-Dependent Switching
  2.6 Stability Under Slow Switching
    2.6.1 Introduction
    2.6.2 Dwell Time
  2.7 Stability of Switched Systems When All Subsystems Are Hurwitz
    2.7.1 State-Dependent Switching

3 Stability of Switched Systems When the Subsystems Are Hurwitz and Non-Hurwitz
  3.1 Introduction
  3.2 Stability of Switched Systems When All Subsystems Are Unstable but the Corresponding Negative Subsystem Matrices (−Ai) Are Hurwitz
    3.2.1 Average Dwell Time (ADT)
    3.2.2 Determining the ADT Based on a Positive Symmetric Matrix Pi
  3.3 Stability of Switched Systems Consisting of Both Stable and Unstable Subsystems
    3.3.1 Stability When One Subsystem Is Stable and the Other Is Unstable
    3.3.2 Stability When a Set of Subsystems Is Stable and the Other Sets Are Unstable
  3.4 Stability of Switched Systems When All Subsystems Are Unstable via Dwell-Time Switching
    3.4.1 Stabilization of Switched Linear Systems
    3.4.2 Discretized Lyapunov Function Technique

4 Appendix
  4.1 Matrices and Matrix Calculus
  4.2 Vector and Matrix Norms
  4.3 Matrix p-norms
  4.4 Quadratic Forms
  4.5 Matrix Calculus
  4.6 Topological Concepts in Rn
    4.6.1 Convergence of Sequences
    4.6.2 Sets
  4.7 Mean Value Theorem
  4.8 Supremum and Infimum Bounds
  4.9 Jordan Form
  4.10 The Weighted Logarithmic Matrix Norm and Bounds on the Matrix Exponential
  4.11 Continuous and Differentiable Functions
  4.12 Gronwall–Bellman Inequality
  4.13 Solutions of the State Equations
    4.13.1 Solutions of Linear Time-Invariant (LTI) State Equations
    4.13.2 Solutions of Linear Time-Variant (LTV) State Equations
    4.13.3 Computing the Transition Matrix Through the Peano–Baker Series
  4.14 Solutions of Discrete Dynamical Systems
    4.14.1 Discretization of Continuous-Time Equations
    4.14.2 Solution of Discrete-Time Equations
  4.15 Solutions of Van der Pol's Equation
    4.15.1 Parameter Perturbation Theory
    4.15.2 Solution of the Van der Pol Equation via the Parameter Perturbation Method
  4.16 Limit Cycles

Conclusion

References
Chapter 1
Classical Stability

1.1 Introduction

1.1.1 State-Space Models and Equations [1]
Dynamical systems are modeled by a finite number of coupled first-order ordinary differential equations:

ẋ1 = f1(t, x1, x2, x3, . . . , xn, u1, u2, u3, . . . , up)
ẋ2 = f2(t, x1, x2, x3, . . . , xn, u1, u2, u3, . . . , up)
..
.
ẋn = fn(t, x1, x2, x3, . . . , xn, u1, u2, u3, . . . , up)

where f : [0, ∞) × D1 × D2 → Rn with D1 ⊂ Rn and D2 ⊂ Rp, and the xi and ui are the components of the state variables and the input variables, respectively.
To write the above equations in compact vector notation, we define

x = [x1, x2, . . . , xn]ᵀ,  u = [u1, u2, . . . , up]ᵀ,  f(t, x, u) = [f1(t, x, u), f2(t, x, u), . . . , fn(t, x, u)]ᵀ

and rewrite the n first-order differential equations as

ẋ = f(t, x, u)    (1.1)
Equation (1.1) is called the state equation; x is the state and u is the input. There is another equation associated with equation (1.1):

y = h(t, x, u)    (1.2)

which defines a q-dimensional output vector. We call equation (1.2) the output equation and refer to equations (1.1) and (1.2) together as the state-space model, or simply the state model.
We can model physical systems in this form by choosing the state variables. Often we will deal with the state equation without the presence of an input u, that is, the so-called unforced state equation

ẋ = f(t, x)    (1.3)

Working with the unforced state equation does not mean that the input to the system is zero. It could be that the input has been specified as a given function of time, u = γ(t), as a given feedback function of the state, u = γ(x), or as both, u = γ(t, x). Substituting u = γ in equation (1.1) eliminates u and yields an unforced state equation.
Definition 1.1.1. [1] When the function f of equation (1.3) does not depend on time t, that is,

ẋ = f(x)    (1.4)

the system is said to be autonomous or time-invariant. This means that the system is invariant to shifts in the time origin, since changing the time variable from t to τ = t − α does not change the right-hand side of the state equation. Otherwise, the system (1.3) is called non-autonomous or time-varying.

Definition 1.1.2. [1] A point x = x∗ in the state space is said to be an equilibrium point of ẋ = f(t, x) if it has the property that whenever the state of the system starts at x∗, it remains at x∗ for all future time.
The equilibrium points of the autonomous system (1.4) are the real roots of the equation

f(x) = 0
1.1.2 The Phase Plane of Autonomous Systems
Autonomous systems occupy an important place in the study of nonlinear systems because solution trajectories can be represented by curves in the plane. A second-order autonomous system is represented by two scalar differential equations

ẋ1 = f1(x1, x2)    (1.5)
ẋ2 = f2(x1, x2)    (1.6)

Let x(t) = (x1(t), x2(t)) be the solution of (1.5), (1.6) that starts at the initial state x0 = (x10, x20)ᵀ, where x0 = x(0). The locus in the x1–x2 plane of x1(t) and x2(t) for all t ≥ 0 is a curve that passes through the point x0. This curve is called a trajectory or orbit of (1.5), (1.6) from x0. The x1–x2 plane is called the state plane or phase plane.
Equations (1.5), (1.6) can be written in vector form

ẋ = f(x)

where f(x) is the vector (f1(x), f2(x)); thus we consider f(x) as a vector field on the state plane. The length of the arrow on the state plane at a given point x is proportional to √(f1²(x) + f2²(x)). The family of all trajectories or solution curves is called the phase portrait of equations (1.5) and (1.6).
An approximate picture of the phase portrait can be constructed by plotting trajectories from a large number of initial states spread all over the x1–x2 plane.
1.1.3 Constructing Phase Portraits
There are a number of techniques [5] for constructing phase-plane trajectories of linear and nonlinear systems. The common ones are the analytical method and the isoclines method.

• Analytical Method: This method involves the analytic solution of the differential equations describing the system. There are two ways to construct the equation of the trajectories; both lead to a solution of the form

g(x1, x2, x0) = 0

(1) First technique: Solve the equations

ẋ1 = f1(x1, x2)
ẋ2 = f2(x1, x2)

for x1 and x2 as functions of time t, namely

x1 = g1(t),  x2 = g2(t)

and then eliminate the time t, leading to a function of the form

g(x1, x2, x0) = 0

(2) Second technique: Eliminate the time variable using

dx2/dx1 = f2(x1, x2)/f1(x1, x2)

and then solve this equation to obtain a relation between x1 and x2.
• Method of Isoclines [1]: The isoclines method can be applied to construct a phase plane when the system cannot be solved analytically. An isocline is defined to be a locus of points with a given tangent slope; through each point we draw a short line segment with the specified slope, as depicted in Figure 1.1.

Figure 1.1: Isoclines method with slope α

The slope α of the isocline is defined by

dx2/dx1 = f2(x1, x2)/f1(x1, x2) = α

so along the isocline

f2(x1, x2) = α f1(x1, x2)

and all points on the isocline have the same tangent slope α. By taking different values of the slope α, a set of isoclines can be drawn.
The procedure is to plot the curve f2(x1, x2)/f1(x1, x2) = α in the x1–x2 plane and, along this curve, draw short line segments having the slope α. These line segments are parallel, and their direction is determined by the signs of f1(x) and f2(x) at x, as seen in Figure 1.2.

Figure 1.2: Isocline with positive slope

The curve f2(x1, x2)/f1(x1, x2) = α is known as an isocline. This procedure is repeated for sufficiently many values of the constant α until the plane is filled with isoclines. Then, starting from a given initial point x0, one can construct the trajectory from x0 by moving in the direction of the short line segments from one isocline to the next.
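As a small numerical illustration of the method (a Python sketch, whereas the thesis's own code is MATLAB; the helper names are ours), the points of an isocline of the frictionless pendulum (treated in Example 1.1.4 below) all carry the same trajectory slope α:

```python
import numpy as np

# Frictionless pendulum: x1' = x2, x2' = -sin(x1).
def f1(x1, x2):
    return x2

def f2(x1, x2):
    return -np.sin(x1)

def isocline_x2(x1, alpha):
    """Points (x1, x2) at which the trajectory slope f2/f1 equals alpha."""
    return -np.sin(x1) / alpha

alpha = 0.5
x1 = np.linspace(0.1, 3.0, 50)     # stay away from f1 = x2 = 0
x2 = isocline_x2(x1, alpha)

# Every point of the isocline carries the same tangent slope alpha.
slopes = f2(x1, x2) / f1(x1, x2)
```

Plotting short segments of slope α along each such curve, for several values of α, reproduces the construction described above.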
Example 1.1.1 (Example for the first technique).
Consider the linear second-order differential equation

ẍ + x = 0

and assume that the mass is initially at position x0 (i.e., at rest). The general solution of the equation is x(t) = x0 cos t, and by differentiation

ẋ(t) = −x0 sin t

To transform the state-space solution into the equation of the trajectories, we must eliminate the time t:

ẋ²(t) = x0² sin² t

Since sin² t = 1 − cos² t,

ẋ²(t) = x0²(1 − cos² t) = x0² − x0² cos² t = x0² − x²(t)

Thus, the equation of the trajectories is

ẋ² + x² = x0²

This equation represents a circle in the phase plane; with different initial values we obtain different circles in the phase plane.
The phase portrait of this system is plotted with MATLAB.
Transforming the system to state-space form, let x = x1 and ẋ = x2:

ẋ1 = x2
ẋ2 = −x1

MATLAB.m 1.

[x1, x2] = meshgrid(-5:0.5:5, -5:0.5:5);
x1dot = x2;
x2dot = -x1;
quiver(x1, x2, x1dot, x2dot);
xlabel('x_1');
ylabel('x_2');
We also see in Figure 1.3 that the system trajectories neither converge to the origin nor diverge to infinity.

Figure 1.3: Phase portrait of ẋ² + x² = x0²
Example 1.1.2. [1] (Example for the second technique) Consider the system

ẍ + x = 0

and assume that the mass is initially at position x0. Transforming the system to state-space form, let x = x1 and ẋ = x2:

ẋ1 = x2
ẋ2 = −x1

Now, writing ẋ1 = dx1/dt and ẋ2 = dx2/dt, we have

dx1 = x2 dt
dx2 = −x1 dt

Multiplying the first equation by x1 and the second by x2, adding, and integrating both sides:

∫ x1 dx1 + ∫ x2 dx2 = 0  →  x1² + x2² = C, where C = x0²

Hence we obtain the phase-plane trajectory equation

x1² + x2² = x0²  or  ẋ² + x² = x0²

which represents a closed curve (cycle) in the phase plane (on cycles and limit cycles, refer to the Appendix).
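The trajectory equation can be confirmed numerically: integrating ẋ1 = x2, ẋ2 = −x1 with a classical Runge–Kutta step keeps x1² + x2² at its initial value x0², so the orbit stays on the circle. A minimal Python sketch (an illustration only; the thesis uses MATLAB):

```python
import numpy as np

def f(z):
    """Harmonic oscillator in state-space form: x1' = x2, x2' = -x1."""
    x1, x2 = z
    return np.array([x2, -x1])

def rk4_step(z, dt):
    """One classical fourth-order Runge-Kutta step."""
    k1 = f(z)
    k2 = f(z + 0.5 * dt * k1)
    k3 = f(z + 0.5 * dt * k2)
    k4 = f(z + dt * k3)
    return z + (dt / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

x0 = 2.0
z = np.array([x0, 0.0])            # start at rest at x = x0
dt, steps = 0.01, 2000             # integrate over about three periods
drift = 0.0
for _ in range(steps):
    z = rk4_step(z, dt)
    drift = max(drift, abs(z[0]**2 + z[1]**2 - x0**2))
```

The accumulated drift of x1² + x2² away from x0² stays negligibly small, consistent with the conserved quantity derived above.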
Example 1.1.3 (Example for the isoclines method). Consider the dynamical equation

ẍ + x = 0

In state-space form,

ẋ1 = x2 = f1(x1, x2)
ẋ2 = −x1 = f2(x1, x2)

so the slope of the trajectories is given by

dx2/dx1 = −x1/x2 = α

Therefore, the isocline equation for a slope α is

x1 + α x2 = 0

which is the equation of a straight line. Along this straight line we can draw many short line segments with slope α, and a set of isoclines can be plotted for different values of the slope α, as depicted in Figure 1.4.
Figure 1.4: Phase portrait of ẍ + x = 0
Example 1.1.4. Consider the pendulum equation without friction:

ẋ1 = x2
ẋ2 = −sin x1

The slope function s(x) is given by

s(x) = f2(x1, x2)/f1(x1, x2) = (−sin x1)/x2 = α

Hence, the isoclines are defined by

x2 = −(1/α) sin x1
Figure 1.5: Graphical construction of the phase portrait of the pendulum
equation without friction by the isocline method
Figure 1.5 shows the isoclines for several values of α. One can easily sketch the trajectory starting at any point, as shown in the figure for the trajectory starting at (π/2, 0). Here one can see that the trajectory is a closed curve. A closed trajectory indicates a periodic solution: the system has a sustained oscillation.
1.1.4 Qualitative Behaviour of Linear Systems

We describe here the meaning of the visual motion of the system trajectories plotted in the phase portrait, based on the eigenvalues of the linear (or linearized) dynamic system.
Consider the linear time-invariant system

ẋ = Ax    (1.7)

where A is a 2 × 2 real matrix. The solution of equation (1.7) for a given initial state x0 is given by

x(t) = P e^{J_r t} P^{-1} x0

where J_r is the real Jordan form of A and P is a real nonsingular matrix such that P^{-1} A P = J_r. In general, the solution of (1.7) is given by

x(t) = e^{At} x0

where e^{At} is defined by the convergent power series

e^{At} = Σ_{k=0}^{∞} (t^k A^k)/k!

Note that

e^{At} = P e^{J_r t} P^{-1}

since

e^{At} = Σ_{k=0}^{∞} (t^k A^k)/k! = I + At + (1/2)A²t² + · · ·
       = I + P J_r P^{-1} t + (1/2)(P J_r P^{-1})² t² + · · ·
       = I + P J_r P^{-1} t + (1/2) P J_r P^{-1} P J_r P^{-1} t² + · · ·

Multiplying both sides on the left by P^{-1} and on the right by P gives

P^{-1} e^{At} P = P^{-1} I P + P^{-1} P J_r P^{-1} P t + (1/2) P^{-1} P J_r P^{-1} P J_r P^{-1} P t² + · · ·
               = I + J_r t + (1/2) J_r² t² + · · · = e^{J_r t}

so that e^{At} = P e^{J_r t} P^{-1}. Hence, the general solution of (1.7) takes the form

x(t) = P e^{J_r t} P^{-1} x0
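The identity e^{At} = P e^{J_r t} P^{-1} is easy to check numerically when A has distinct real eigenvalues, so that J_r is diagonal. A Python sketch (the matrix A below is an arbitrary example of ours, not from the text):

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])     # eigenvalues -1 and -2: real and distinct

w, P = np.linalg.eig(A)          # columns of P are the eigenvectors v1, v2
t = 0.7

lhs = expm(A * t)                                     # e^{At} (power series)
rhs = P @ np.diag(np.exp(w * t)) @ np.linalg.inv(P)   # P e^{J_r t} P^{-1}
err = np.max(np.abs(lhs - rhs))
```

Both sides agree to machine precision, as the series argument above predicts.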
The eigenvalues of A are determined from the characteristic equation

det(A − λI) = 0

There exists a real, nonsingular matrix P such that P^{-1} A P = J_r is in the so-called real Jordan form, which may take one of the following three forms (written row by row):

[λ1, 0; 0, λ2],   [λ, k; 0, λ],   [α, −β; β, α]

where k is either 0 or 1.
We distinguish between the following cases.

• Case 1: The eigenvalues λ1, λ2 are real and distinct (λ1 ≠ λ2, both nonzero).
Let J_r = [λ1, 0; 0, λ2] and let P = [v1, v2], where v1, v2 are the real eigenvectors associated with λ1 and λ2. The change of coordinates z = P^{-1} x transforms the system as follows:

z = P^{-1} x ⇒ ż = P^{-1} ẋ = P^{-1} A x

and, since A = P J_r P^{-1},

ż = P^{-1} P J_r P^{-1} x = J_r P^{-1} x = J_r z

Hence

[ż1; ż2] = [λ1, 0; 0, λ2][z1; z2] = [λ1 z1; λ2 z2]

so we obtain two decoupled first-order differential equations

ż1 = λ1 z1
ż2 = λ2 z2

whose solution is given by

z1(t) = z10 e^{λ1 t},  z2(t) = z20 e^{λ2 t}

where z10, z20 are the initial states. Eliminating t between the two equations,

ln(z1/z10) = λ1 t,  ln(z2/z20) = λ2 t

⇒ ln(z2/z20) = λ2 [(1/λ1) ln(z1/z10)] = λ2 ln((z1/z10)^{1/λ1}) = ln((z1/z10)^{λ2/λ1})

∴ z2/z20 = (z1/z10)^{λ2/λ1}

we obtain

z2 = c z1^{λ2/λ1}    (1.8)

where c = z20/(z10)^{λ2/λ1}.
The phase portrait of this system is given by the family of curves generated from (1.8) by allowing the constant c to take arbitrary values. The shape of the phase portrait depends on the signs of λ1 and λ2.
(i) When λ2 < λ1 < 0, the equilibrium point is a stable node:
In this case, both exponential terms e^{λ1 t} and e^{λ2 t} tend to zero as t → ∞. The slope of the curve is given by

dz2/dz1 = c (λ2/λ1) z1^{(λ2/λ1 − 1)}

Since (λ2/λ1 − 1) is positive, the slope of the curve approaches zero as |z1| → 0 and approaches ∞ as |z1| → ∞. As the trajectory approaches the origin, it becomes tangent to the z1 axis; as it approaches ∞, it becomes parallel to the z2 axis. These observations allow us to sketch the typical family of trajectories shown in Figure 1.6.
Figure 1.6: Phase portrait of a stable node in modal coordinates

When transformed back into the x-coordinates, the family of trajectories has the typical portrait shown in Figure 1.7(a). In this case, the equilibrium point x = 0 is called a stable node.

(ii) When λ2 > λ1 > 0, the equilibrium point is an unstable node:
The phase portrait retains the character of Figure 1.7(a) but with the trajectory directions reversed, since the exponential terms e^{λ1 t} and e^{λ2 t} grow exponentially as t increases, as shown in Figure 1.7(b).

Figure 1.7: Phase portrait for (a) a stable node; (b) an unstable node

(iii) When λ2 < 0 < λ1, the equilibrium point is a saddle point:
In this case, e^{λ1 t} → ∞ while e^{λ2 t} → 0 as t → ∞. Hence, we call λ2 the stable eigenvalue and λ1 the unstable eigenvalue. The trajectory equation (1.8) has a negative exponent λ2/λ1. Thus, the family of trajectories in the z1–z2 plane takes the typical form shown in Figure 1.8: trajectories become tangent to the z1-axis as |z1| → ∞ and tangent to the z2-axis as |z1| → 0.
Figure 1.8: Phase portrait of a saddle point
• Case 2: The eigenvalues λ1, λ2 are complex; that is, λ1,2 = α ± jβ.
Let J_r = [α, −β; β, α]. The change of coordinates z = P^{-1} x transforms the system (1.7) into the form

ż1 = α z1 − β z2    (1.9)
ż2 = β z1 + α z2    (1.10)

The solution of these equations is oscillatory and can be expressed more conveniently in the polar coordinates

r² = z1² + z2²,  θ = tan^{-1}(z2/z1)

Differentiating,

2 r ṙ = 2 z1 ż1 + 2 z2 ż2,  sec²θ θ̇ = (z1 ż2 − z2 ż1)/z1²

Substituting (1.9), (1.10) into these equations yields

r ṙ = z1(α z1 − β z2) + z2(β z1 + α z2) = α(z1² + z2²) = α r²  ⇒  ṙ = α r

sec²θ θ̇ = [z1(β z1 + α z2) − z2(α z1 − β z2)]/z1² = β(1 + tan²θ) = β sec²θ  ⇒  θ̇ = β

So we have two decoupled first-order differential equations:

ṙ = α r
θ̇ = β

The solution for a given initial state (r0, θ0) is given by

r(t) = r0 e^{αt},  θ(t) = θ0 + βt

which defines a logarithmic spiral in the z1–z2 plane. Depending on the value of α, the trajectory takes one of the shapes shown in Figure 1.9.

– When α < 0, the equilibrium point x = 0 is referred to as a stable focus, because the spiral converges to the origin.
– When α > 0, the equilibrium point x = 0 is referred to as an unstable focus, because the spiral diverges away from the origin.
– When α = 0, the equilibrium point x = 0 is referred to as a center, because the trajectory is a circle of radius r0, as depicted in Figure 1.10.

Figure 1.9: Typical trajectories in the case of complex eigenvalues: (a) α < 0, (b) α > 0, (c) α = 0

Figure 1.10: Phase portrait for (a) a stable focus, (b) an unstable focus, (c) a center
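The closed-form spiral solution can be verified against the matrix exponential of J_r = [α, −β; β, α], which equals e^{αt} times a rotation by βt. A Python sketch with arbitrarily chosen α and β (our choice, not from the text):

```python
import numpy as np
from scipy.linalg import expm

alpha, beta = -0.3, 2.0            # alpha < 0: a stable focus
Jr = np.array([[alpha, -beta],
               [beta,  alpha]])

z0 = np.array([1.0, 0.0])          # r0 = 1, theta0 = 0
t = 1.25
z = expm(Jr * t) @ z0

r = np.hypot(z[0], z[1])           # should equal r0 * e^{alpha t}
theta = np.arctan2(z[1], z[0])     # should equal theta0 + beta*t (mod 2*pi)
```

The computed radius shrinks exactly as r0 e^{αt} while the angle advances linearly at rate β, matching ṙ = αr and θ̇ = β.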
• Case 3: The eigenvalues are real and equal; that is, λ1 = λ2 = λ ≠ 0.
Let J_r = [λ, k; 0, λ]. The change of coordinates z = P^{-1} x transforms the system (1.7) into the form

ż1 = λ z1 + k z2
ż2 = λ z2

whose solution, for a given initial state (z10, z20), is given by

z1(t) = (z10 + z20 k t) e^{λt}
z2(t) = z20 e^{λt}

Eliminating t, we obtain the trajectory equation

z1 = z2 [z10/z20 + (k/λ) ln(z2/z20)]

Figure 1.11: Phase portrait for the case of nonzero multiple eigenvalues when k = 0: (a) λ < 0, (b) λ > 0

Figure 1.12: Phase portrait for the case of nonzero multiple eigenvalues when k = 1: (a) λ < 0, (b) λ > 0

The equilibrium point x = 0 is referred to as:
– a stable node if λ < 0;
– an unstable node if λ > 0.

Figure 1.11 shows the form of the trajectories when k = 0, while Figure 1.12 shows their form when k = 1.
• Case 4: One eigenvalue is zero: λ1 = 0, and λ2 > 0 or λ2 < 0.
Let J_r = [0, 0; 0, λ2] and let P = [v1, v2], where v1, v2 are the real eigenvectors associated with λ1 and λ2. The change of coordinates z = P^{-1} x transforms the system into

ż1 = 0
ż2 = λ2 z2

whose solution is

z1(t) = z10,  z2(t) = z20 e^{λ2 t}

The exponential term grows or decays depending on the sign of λ2:
– when λ2 < 0, all trajectories converge to the equilibrium subspace;
– when λ2 > 0, all trajectories diverge away from the equilibrium subspace.

Both cases are depicted in Figure 1.13.

Figure 1.13: Phase portrait for (a) λ1 = 0, λ2 < 0; (b) λ1 = 0, λ2 > 0
• Case 5: Both eigenvalues are at the origin; that is, λ1 = λ2 = 0.
Let J_r = [0, 1; 0, 0] (the Jordan block with k = 1). The change of variables z = P^{-1} x results in

ż1 = z2
ż2 = 0

whose solution is

z1(t) = z10 + z20 t,  z2(t) = z20

The term z20 t increases or decreases depending on the sign of z20, as depicted in Figure 1.14.

Figure 1.14: Phase portrait when λ1 = λ2 = 0
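The case analysis above can be condensed into a small classifier of the equilibrium x = 0 from the eigenvalues of A. A Python sketch (the function name and the test matrices are ours, not from the text):

```python
import numpy as np

def classify_equilibrium(A, tol=1e-12):
    """Classify the equilibrium x = 0 of xdot = A x for a real 2x2 matrix A."""
    lam = np.linalg.eigvals(np.asarray(A, dtype=float))
    if np.max(np.abs(lam.imag)) > tol:        # Case 2: complex pair alpha +/- j*beta
        a = lam[0].real
        if a < -tol:
            return "stable focus"
        if a > tol:
            return "unstable focus"
        return "center"
    l1, l2 = sorted(lam.real)
    if l2 < -tol:
        return "stable node"                  # Case 1(i); Case 3 with lambda < 0
    if l1 > tol:
        return "unstable node"                # Case 1(ii); Case 3 with lambda > 0
    if l1 < -tol and l2 > tol:
        return "saddle point"                 # Case 1(iii)
    return "degenerate (zero eigenvalue)"     # Cases 4 and 5

examples = {
    "stable node":    [[-1.0, 0.0], [0.0, -2.0]],
    "saddle point":   [[1.0, 0.0], [0.0, -1.0]],
    "center":         [[0.0, 1.0], [-1.0, 0.0]],
    "unstable focus": [[0.5, -2.0], [2.0, 0.5]],
}
```

The example matrices recover each portrait type named in the corresponding case above.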
1.2 Multiple Equilibria

The linear system (1.7) has an isolated equilibrium point at x = 0 if A has no zero eigenvalues, that is, if det A ≠ 0. When det A = 0, the system has a continuum of equilibrium points. A nonlinear system, by contrast, can have multiple isolated equilibrium points.
Example 1.2.1. The state-space model of a tunnel-diode circuit (refer to the Appendix) is given by

ẋ1 = (1/C)[−h(x1) + x2]
ẋ2 = (1/L)[−x1 − Rx2 + u]

Assume the circuit parameters are u = 1.2 V, R = 1.5 kΩ, C = 2 pF, and L = 5 µH. Measuring time in nanoseconds and the currents x2 and h(x1) in mA, the state-space model is given by

ẋ1 = 0.5[−h(x1) + x2]
ẋ2 = 0.2(−x1 − 1.5x2 + 1.2)

Suppose h(·) is given by

h(x1) = 17.76x1 − 103.79x1² + 229.62x1³ − 226.31x1⁴ + 83.72x1⁵

Setting ẋ1 = ẋ2 = 0 and solving for the equilibrium points, it can be verified that there are three equilibrium points, at (0.063, 0.758), (0.285, 0.61), and (0.884, 0.21). The phase portrait of the system is shown in Figure 1.15.

Figure 1.15: Phase portrait of the tunnel-diode circuit

Examination of the phase portrait shows that all trajectories tend to either the point Q1 or the point Q3. Experimentally, we would observe one of the two steady-state operating points Q1 or Q3, depending on the initial capacitor voltage and inductor current. This tunnel-diode circuit with multiple equilibria is referred to as a bistable circuit, because it has two steady-state operating points.
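The three equilibrium points quoted above can be reproduced numerically: setting ẋ1 = ẋ2 = 0 gives x2 = h(x1) and x2 = (1.2 − x1)/1.5, so the equilibria are the real roots of a quintic in x1. A Python sketch of this computation (an illustration of ours, not the thesis's own code):

```python
import numpy as np

def h(x1):
    """Tunnel-diode characteristic from the example (currents in mA)."""
    return (17.76 * x1 - 103.79 * x1**2 + 229.62 * x1**3
            - 226.31 * x1**4 + 83.72 * x1**5)

# At an equilibrium: x2 = h(x1) (from x1' = 0) and x2 = (1.2 - x1)/1.5
# (from x2' = 0). Equating gives a quintic in x1; coefficients below are
# in descending powers of x1 for h(x1) - (1.2 - x1)/1.5 = 0.
coeffs = [83.72, -226.31, 229.62, -103.79, 17.76 + 1.0 / 1.5, -1.2 / 1.5]
roots = np.roots(coeffs)
x1_eq = np.sort(roots[np.abs(roots.imag) < 1e-9].real)
equilibria = [(float(x1), float((1.2 - x1) / 1.5)) for x1 in x1_eq]
```

The real roots land near x1 ≈ 0.063, 0.285, and 0.884, reproducing the three equilibria Q1, Q2, Q3.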
1.3 Existence and Uniqueness
In this section we recall some elements of mathematical analysis which will be used throughout. For the mathematical model to predict the future state of the system from its current state at t0, the initial value problem

ẋ = f(t, x),  x(t0) = x0

must have a unique solution. Existence and uniqueness can be ensured by imposing some constraints on the right-hand side function f(t, x). Let us consider

ẋ = f(t, x(t))  ∀ t ∈ [t0, t1]    (1.11)

If f(t, x) is continuous in t and x, then the solution x(t) is continuously differentiable. However, a differential equation with a given initial condition might have several solutions.

Example 1.3.1. Consider the scalar equation

ẋ = x^{1/3},  x(0) = 0    (1.12)

It has the solution x(t) = (2t/3)^{3/2}. This solution is not unique, since x(t) = 0 is another solution. Note that the right-hand side of (1.12) is continuous in x; thus continuity of f(t, x) is not sufficient to ensure uniqueness of the solution. Extra conditions must be imposed on the function f.
Definition 1.3.1. [1] Let D ⊂ Rn and let f : D → Rn. Then:

• f is Lipschitz continuous at x0 ∈ D if there exist a Lipschitz constant L = L(x0) > 0 and a neighborhood M ⊂ D of x0 such that

||f(x) − f(y)|| ≤ L||x − y||,  x, y ∈ M

• f is Lipschitz continuous on D if f is Lipschitz continuous at every point of D.

• f is uniformly Lipschitz continuous on D if there exists a Lipschitz constant L > 0 such that

||f(x) − f(y)|| ≤ L||x − y||,  x, y ∈ D

• f is globally Lipschitz continuous if f is uniformly Lipschitz continuous on D = Rn.
Theorem 1.3.1 (Local Existence and Uniqueness). [1] Let f(t, x) be piecewise continuous in t and satisfy the Lipschitz condition

||f(t, x) − f(t, y)|| ≤ L||x − y||  ∀ x, y ∈ B, ∀ t ∈ [t0, t1]    (1.13)

where B = {x ∈ Rn : ||x − x0|| ≤ r}. Then there exists some δ > 0 such that the state equation

ẋ = f(t, x),  x(t0) = x0

has a unique solution over [t0, t0 + δ].

A function f(x) is said to be globally Lipschitz if it is Lipschitz on Rn. The same terminology extends to a function f(t, x), provided the Lipschitz condition holds uniformly in t for all t in a given interval of time.
Corollary 1.3.1. [1] Let D ⊂ Rn and let f : D → Rn be continuous on D. If f′ exists and is continuous on D, then f is Lipschitz continuous on D.

Example 1.3.2. The function f(x) = x^{1/3}, which was used in (1.12), is not locally Lipschitz at x = 0, since f′(x) = (1/3)x^{−2/3} → ∞ as x → 0. On the other hand, if f′(x) is bounded by a constant k over an interval, then f(x) is Lipschitz on the same interval with Lipschitz constant L = k.
Lemma 1.3.1. Let f : [a, b] × D → Rm be continuous for some domain D ⊂ Rn. Suppose that ∂f/∂x (the Jacobian matrix) exists and is continuous on [a, b] × D. If, for a convex subset W ⊂ D, there is a constant L ≥ 0 such that

||∂f/∂x (t, x)|| ≤ L  on [a, b] × W,

then

||f(t, x) − f(t, y)|| ≤ L||x − y||  ∀ t ∈ [a, b], x, y ∈ W

Lemma 1.3.2. [1] Let f(t, x) be continuous on [a, b] × D for some domain D ⊂ Rn. If ∂f/∂x exists and is continuous on [a, b] × D, then f is locally Lipschitz in x on [a, b] × D.

Lemma 1.3.3. If f(t, x) and ∂f/∂x exist and are continuous on [a, b] × Rn, then f is globally Lipschitz in x on [a, b] × Rn if and only if ∂f/∂x is uniformly bounded on [a, b] × Rn.
Example 1.3.3. Consider the function

f (x) = [ −x1 + x1 x2 ; x2 − x1 x2 ]

where f (x) is continuously differentiable on R2. Hence, it is locally Lipschitz on R2. It is not globally Lipschitz since ∂f/∂x is not uniformly bounded on R2. Suppose we are interested in calculating a Lipschitz constant over

W = {x ∈ R2 : |x1| ≤ a1 , |x2| ≤ a2}

The Jacobian matrix ∂f/∂x is given by

∂f/∂x = [ −1 + x2   x1 ; −x2   1 − x1 ]

Using || · ||∞ for vectors in Rn and the induced matrix norm for matrices, we have

|| ∂f/∂x ||∞ = max { |−1 + x2| + |x1| , |−x2| + |1 − x1| }

All points in W satisfy

|−1 + x2| + |x1| ≤ 1 + a1 + a2
|−x2| + |1 − x1| ≤ 1 + a1 + a2

Hence,

|| ∂f/∂x ||∞ ≤ 1 + a1 + a2 = L
Theorem 1.3.2. (Global Existence and Uniqueness) [1]
Suppose f (t, x) is piecewise continuous in t and satisfies

||f (t, x) − f (t, y)|| ≤ L||x − y|| ∀ x, y ∈ Rn , ∀ t ∈ [t0, t1].

Then, the state equation

ẋ = f (t, x) , x(t0) = x0

has a unique solution over [t0, t1].
Example 1.3.4. Consider the linear system
ẋ = A (t) x + g (t) = f (t, x)
where A(·) and g(·) are piecewise continuous functions of t. Over any finite interval of time [t0, t1], the elements of A(t) and g(t) are bounded. Hence, ||A(t)|| ≤ a and ||g(t)|| ≤ b, where || · || can be any norm on Rn together with its induced matrix norm.
The conditions of Theorem (1.3.2) are satisfied since

||f (t, x) − f (t, y)|| = ||A(t)(x − y)|| ≤ ||A(t)|| ||x − y|| ≤ a ||x − y||
∀ x, y ∈ Rn , ∀ t ∈ [t0 , t1 ]
Therefore, theorem (1.3.2) shows that the linear system has a unique solution
over [t0 , t1 ] .
1.4 Lyapunov Stability

1.4.1 Autonomous Systems
Consider the autonomous system
ẋ = f (x(t))
(1.14)
where f : D → Rn is locally Lipschitz and x(t) is the system state vector. Suppose x̄ ∈ D is an equilibrium point of (1.14); that is, f (x̄) = 0.
Our goal is to characterize and study stability of x̄. For convenience, we state
all definitions and theorems for the case when the equilibrium point is at the
origin of Rn ; that is, x̄ = 0, because any equilibrium point can be shifted to
the origin via a change of variables.
Suppose x̄ ≠ 0; then by the change of variables y = x − x̄, the derivative of y is given by

ẏ = ẋ = f (x) = f (y + x̄) =: g(y) , with g(0) = 0

In the new variable y, the system has an equilibrium at the origin. We shall therefore always assume that f (x) satisfies f (0) = 0 and study stability of the origin x = 0.
Definition 1.4.1. [1] The equilibrium point 0 of (1.14) is:

1. Stable if, ∀ ε > 0, ∃ δ = δ(ε) > 0 such that

||x(0)|| < δ ⇒ ||x(t)|| < ε ∀ t ≥ 0

2. Unstable if it is not stable.

3. Asymptotically stable, if it is stable and δ > 0 can be chosen such that

||x(0)|| < δ ⇒ lim_{t→∞} x(t) = 0

4. Exponentially stable (locally), if there exist positive constants α, β and δ such that if ||x(0)|| < δ, then

||x(t)|| ≤ α e^{−βt} ||x(0)|| ∀ t ≥ 0

5. Asymptotically stable (globally), if it is stable and

lim_{t→∞} x(t) = 0 for every x(0) ∈ Rn

6. Exponentially stable (globally), if ∃ α, β > 0 such that

||x(t)|| ≤ α e^{−βt} ||x(0)|| , ∀ t ≥ 0, ∀ x(0) ∈ Rn
Points 1 and 3 of Definition (1.4.1) are illustrated by Figures 1.16 and 1.17.
Figure 1.16: Lyapunov stability of an equilibrium point
Figure 1.17: Asymptotic stability of an equilibrium point
The ε-δ argument demonstrates that if the origin is stable, then, for any value of ε, we must produce a value of δ, possibly dependent on ε, such that a trajectory starting in a δ neighborhood of the origin will never leave the ε neighborhood.
1.4.2 Internal Stability

Internal Uniform Stability [9]
Internal stability deals with boundedness and asymptotic behaviour of solutions of the zero-input linear state equation

ẋ(t) = A(t)x(t) , x(t0) = x0 (1.15)
where boundedness properties hold regardless of the choice of fixed t0 and
various initial states x0 .
Theorem 1.4.1. [9] We say that equation (1.15) is uniformly stable if
there exists a constant γ &gt; 0 such that for any fixed t0 and x0 the corresponding solution satisfies the inequality
||x(t)|| ≤ γ||x0 || , t ≥ t0
The word uniformly refers to the fact that γ must not depend on the
initial time t0 as illustrated in Figure 1.18.
Figure 1.18: Uniform stability implies the γ-bound is independent of t0
Theorem 1.4.2. [9] The linear state equation (1.15) is uniformly stable if
and only if there exists γ such that
||Φ(t, τ )|| ≤ γ
for all t and τ such that t ≥ τ , where Φ(t, τ ) is the state transition matrix
of the solution and where we know that the solution of equation (1.15) is given
by
x(t) = Φ(t, τ )x(τ )
Theorem 1.4.3. [9] The linear state equation (1.15) is uniformly exponentially stable if there exist two constants γ > 0 and λ > 0 such that

||Φ(t, τ)|| ≤ γ e^{−λ(t−τ)}

for all t and τ such that t ≥ τ.
Both γ and λ are independent of the initial time t0, as illustrated in Figure 1.19.
Figure 1.19: A decaying exponential bound independent of t0
Theorem 1.4.4. [9] Suppose there exists a constant α > 0 such that ||A(t)|| ≤ α for all t. Then the linear state equation (1.15) is uniformly exponentially stable if and only if there exists a constant β > 0 such that

∫_τ^t ||Φ(t, σ)|| dσ ≤ β , ∀ t ≥ τ
Proof. Assume the state equation is uniformly exponentially stable; then by Theorem (1.4.3) there exist γ, λ > 0 such that

||Φ(t, σ)|| ≤ γ e^{−λ(t−σ)} , ∀ t ≥ σ

Then

∫_τ^t ||Φ(t, σ)|| dσ ≤ ∫_τ^t γ e^{−λ(t−σ)} dσ = (γ/λ)(1 − e^{−λ(t−τ)}) ≤ β

where β = γ/λ, for all t ≥ τ.

1.4.3 Lyapunov Direct Method
In 1892, Lyapunov showed that certain auxiliary functions can be used to determine stability of an equilibrium point. If Φ(t, x) is a solution of the autonomous system (1.14), then the derivative of the Lyapunov function V (x) along the trajectories of (1.14), denoted by

V̇ (x) = (∂V/∂x) f (x)

depends on the system's equation. In addition, if the solution Φ(t, x) starts at initial state x at time t = 0, then requiring

V̇ (x) = (d/dt) V (Φ(t, x)) |_{t=0} < 0

ensures that V (x) will decrease along the solutions of (1.14).
Definition 1.4.2. [1] Let V : D → R be a continuously differentiable function defined in a domain D ⊂ Rn that contains the origin. The derivative of V along the trajectories of (1.14), denoted by V̇ (x), is given by

V̇ (x) = Σ_{i=1}^n (∂V/∂xi) ẋi = Σ_{i=1}^n (∂V/∂xi) fi(x)
      = [ ∂V/∂x1 , ∂V/∂x2 , · · · , ∂V/∂xn ] [ f1(x) ; f2(x) ; … ; fn(x) ] = (∂V/∂x) f (x)

The derivative of V along the trajectories of the system therefore depends on the system's equation. If φ(t, x) is the solution of (1.14) that starts at initial state x at time t = 0, then

V̇ (x) = (d/dt) V (φ(t, x)) |_{t=0} = V′(x) f (x)

Therefore, if V̇ (x) is negative, then V will decrease along the solution φ(t, x0) through x0 = x(t0) at t0 = 0.
Note that, by the chain rule,

dV/dt = (∂V/∂t)(dt/dt) + (∂V/∂x)(dx/dt)

and since V does not depend explicitly on t, the term ∂V/∂t vanishes. Hence,

dV/dt = (∂V/∂x)(dx/dt) = V′(x) f (x)
Theorem 1.4.5. [1] Let x = 0 be an equilibrium point for (1.14) and D ⊂ Rn be a domain containing x = 0. Let V : D → R be a continuously differentiable function such that

V (0) = 0 ; V (x) > 0 ∀ x ∈ D, x ≠ 0 (1.16)

V̇ (x) ≤ 0 ∀ x ∈ D (1.17)

Then x = 0 is stable. Moreover, if

V̇ (x) < 0 ∀ x ∈ D , x ≠ 0 (1.18)

then x = 0 is asymptotically stable.
Definition 1.4.3. [Positive Definite Functions]
• A function V (x) satisfying condition (1.16); that is, V (0) = 0 and
V (x) &gt; 0 for x 6= 0 is said to be positive definite .
• A function V (x) satisfying the weaker condition V (x) ≥ 0 for x 6= 0
is said to be positive semi-definite.
• A function V (x) is said to be negative definite or semi-definite if −V (x)
is positive definite or positive semi-definite, respectively.
Theorem 1.4.6. [1] The origin is stable if there is a continuously differentiable positive definite function V (x) such that V̇ (x) is negative semidefinite, and it is asymptotically stable if V̇ (x) is negative definite.
1.4.4 Domain of Attraction
Consider the autonomous system
ẋ = f (x)
(1.19)
where f : D 7→ Rn is a locally Lipschitz.
When the origin x = 0 is asymptotically stable, we are often interested in determining how far from the origin a trajectory can start and still converge to the origin as t → ∞. This is called the region of attraction (also called region of asymptotic stability, domain of attraction, or basin).
The region of attraction is defined as the set of all points x such that lim_{t→∞} Φ(t, x) = 0, where Φ(t, x) is the solution of (1.19). Finding the region of attraction analytically might be difficult. However, Lyapunov functions can be used to estimate the region of attraction by finding sets contained in it. If there is a Lyapunov function that satisfies the conditions of asymptotic stability over a domain D and if Ωc = {x ∈ Rn : V (x) ≤ c} is bounded and contained in D, then Ωc is a positively invariant set: every trajectory starting in Ωc remains in Ωc and approaches the origin as t → ∞. Thus, Ωc is an estimate of the region of attraction.
If for every initial state x the trajectory Φ(t, x) approaches the origin as t → ∞, then the asymptotically stable equilibrium point at the origin is said to be globally asymptotically stable.
The problem is that for large c, the set Ωc might not be bounded. So, to ensure that Ωc is bounded for all values of c > 0, we need an extra condition, namely

V (x) → ∞ as ||x|| → ∞

A function satisfying this condition is said to be radially unbounded.
Theorem 1.4.7 (Barbashin-Krasovskii theorem). [1] Consider the nonlinear dynamical system (1.19) and assume that there exists a continuously differentiable candidate function V : Rn → R such that

V (0) = 0 (1.20)
V (x) > 0 , x ∈ Rn , x ≠ 0 (1.21)
V′(x) f (x) < 0 , x ∈ Rn , x ≠ 0 (1.22)
V (x) → ∞ as ||x|| → ∞ (1.23)

Then the zero solution x(t) ≡ 0 is globally asymptotically stable. The radial unboundedness condition (1.23) ensures that the system trajectories move from one energy surface to an inner energy surface and hence cannot drift away from the system equilibrium.
Example 1.4.1. Consider the function

V (x) = x1²/(1 + x1²) + x2²

Figure 1.20: Lyapunov surfaces V (x) = x1²/(1 + x1²) + x2² = c
Figure (1.20) shows the surface V (x) = c for various values of c. For small c, the surface V (x) = c is closed; Ωc = {x ∈ Rn : V (x) ≤ c} is bounded since it is contained in a closed ball Br for some r > 0. As c increases, however, the surface V (x) = c eventually opens and Ωc becomes unbounded.
For Ωc to be in the interior of a ball Br, c must satisfy

c < inf_{||x||≥r} V (x)
If

γ = lim_{r→∞} inf_{||x||≥r} V (x) < ∞

then Ωc will be bounded only if c < γ. For our example,

γ = lim_{r→∞} min_{||x||=r} [ x1²/(x1² + 1) + x2² ] = lim_{|x1|→∞} x1²/(x1² + 1) = 1
Hence, Ωc is bounded only for c &lt; 1. An extra condition to ensure that Ωc is
bounded for all c &gt; 0 is
lim V (x) = ∞
||x||→∞
Therefore, we say that V (x) is radially unbounded.
MATLAB.m 2 (Example 1.4.1).
syms x1 x2 c
eqn = ((x1^2 / (1 + x1^2)) + x2^2) - c
RelativeTox2 = solve(eqn, 'x2')
x1 = -2:0.1:2;
for c = 0:0.1:1
hold on
plot(x1, eval(RelativeTox2))
end
Example 1.4.2. Consider the second-order system
x˙1 = x2
x˙2 = −h(x1 ) − ax2
where a &gt; 0 , h (&middot;) is locally Lipschitz , h (0) = 0 and yh (y) &gt; 0 for all
y 6= 0 , y ∈ (−b, c) for some positive constants b and c
The Lyapunov function candidate is

V (x) = (1/2) γ a x1² + (1/2) δ x2² + γ x1 x2 + δ ∫_0^{x1} h(y) dy

whose derivative along the trajectories is

V̇ (x) = −[a δ − γ] x2² − γ x1 h(x1)

Choosing δ > 0 and 0 < γ < a δ ensures that V (x) is positive definite and radially unbounded and that V̇ (x) is negative definite for all x ∈ R2. Therefore, the origin is globally asymptotically stable.
1.4.5 Quadratic Forms

A class of scalar functions V (x) for which sign definiteness can be easily checked is the class of quadratic forms
V (x) = x^T P x = Σ_{i=1}^n Σ_{j=1}^n pij xi xj

where P is a real symmetric matrix.

Definition 1.4.4. We say that V (x) is positive definite (positive semidefinite) if and only if all the eigenvalues of P are positive (nonnegative), which holds if and only if all the leading principal minors of P are positive (all the principal minors of P are nonnegative).
If V (x) = xT P x is positive definite(positive semi-definite), we say that
the matrix P is positive definite (positive semi-definite) and we write P &gt;
0 (P ≥ 0).
Example 1.4.3. Consider V (x) = a x1² + 2 x1 x3 + a x2² + 4 x2 x3 + a x3². Then

V (x) = [x1 x2 x3] [ a 0 1 ; 0 a 2 ; 1 2 a ] [x1 ; x2 ; x3] = x^T P x

The leading principal minors of P are a, a², and a(a² − 5). Therefore, V (x) is positive definite if a > √5.
For negative definiteness, the leading principal minors of −P should be positive; that is, the leading principal minors of P should have alternating signs. Therefore, V (x) is negative definite if a < −√5. It can be seen that V (x) is positive semidefinite if a ≥ √5, negative semidefinite if a ≤ −√5, and for a ∈ (−√5, √5), V (x) is indefinite.
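These thresholds are easy to confirm numerically. The sketch below (ours, not part of the thesis) checks the eigenvalues of P for values of a slightly above and below ±√5:

```python
import numpy as np

# P(a) from Example 1.4.3: V(x) = x^T P(a) x
def P(a):
    return np.array([[a, 0.0, 1.0],
                     [0.0, a, 2.0],
                     [1.0, 2.0, a]])

sqrt5 = np.sqrt(5.0)

# slightly above sqrt(5): all eigenvalues positive -> positive definite
assert np.linalg.eigvalsh(P(sqrt5 + 0.1)).min() > 0
# slightly below sqrt(5): a negative eigenvalue appears -> indefinite
assert np.linalg.eigvalsh(P(sqrt5 - 0.1)).min() < 0
# slightly below -sqrt(5): all eigenvalues negative -> negative definite
assert np.linalg.eigvalsh(P(-sqrt5 - 0.1)).max() < 0
print("definiteness thresholds at a = +/- sqrt(5) confirmed")
```

In fact the eigenvalues of P(a) are a, a + √5 and a − √5, which makes the thresholds at a = ±√5 transparent.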
Remark:
Lyapunov's theorem can be applied without solving the differential equation (1.14). Moreover, there are natural Lyapunov function candidates, such as energy functions in electrical or mechanical systems.
Example 1.4.4. Consider the first-order differential equation

ẋ = −g (x)

where g (x) is locally Lipschitz on (−a, a) and satisfies

g (0) = 0 ; x g (x) > 0 ∀ x ≠ 0 , x ∈ (−a, a)

This system has an isolated equilibrium point at the origin. Consider the function

V (x) = ∫_0^x g (µ) dµ

Over the domain D = (−a, a), V (x) is continuously differentiable, V (0) = 0 and V (x) > 0 ∀ x ≠ 0. Thus, V (x) is a valid Lyapunov function candidate. To see whether or not V (x) is indeed a Lyapunov function, we calculate its derivative along the trajectories of the system:

V̇ (x) = (∂V/∂x)(−g (x)) = −g²(x) < 0 ∀ x ∈ D − {0}
Thus,by Theorem (1.4.5), we conclude that the origin is asymptotically stable.
Example 1.4.5. Consider the pendulum equation without friction

ẋ1 = x2 (1.24)
ẋ2 = −(g/l) sin x1 (1.25)

A natural Lyapunov function candidate is the energy function

V (x) = (g/l)(1 − cos x1) + (1/2) x2²

So V (0) = 0 and V (x) is positive definite over the domain −2π < x1 < 2π. The derivative of V (x) along the trajectories of the system is given by

V̇ (x) = (g/l) ẋ1 sin x1 + x2 ẋ2 = (g/l) x2 sin x1 − (g/l) x2 sin x1 = 0

Thus, conditions (1.16) and (1.17) of Theorem (1.4.5) are satisfied and we conclude that the origin is stable.
Since V̇ (x) ≡ 0, we can also conclude that the origin is not asymptotically stable, because trajectories starting on a Lyapunov surface V (x) = c remain on the same surface for all future time.
Example 1.4.6. Consider the pendulum equation with friction

ẋ1 = x2 (1.26)
ẋ2 = −(g/l) sin x1 − (k/m) x2 (1.27)

Take

V (x) = (g/l)(1 − cos x1) + (1/2) x2²

as a Lyapunov function candidate. Then

V̇ (x) = (g/l) ẋ1 sin x1 + x2 ẋ2 = −(k/m) x2²
V̇ (x) is negative semidefinite. It is not negative definite, because V̇ (x) = 0 for x2 = 0 irrespective of the value of x1; that is, V̇ (x) = 0 along the x1-axis. Therefore, we can only conclude that the origin is stable. However, sketching the phase portrait of the pendulum equation when k > 0 reveals that the origin is in fact asymptoticallystable.
Thus, the energy Lyapunov function fails to show this fact, but a theorem due to LaSalle will enable us to arrive at this conclusion using the energy Lyapunov function.
Let us look for a Lyapunov function V (x) that would have a negative definite V̇ (x). Starting from the energy Lyapunov function, let us replace the term (1/2) x2² by the more general quadratic form (1/2) x^T P x for some 2 × 2 positive definite matrix P:

V (x) = (1/2) x^T P x + (g/l)(1 − cos x1)
      = (1/2) [x1 x2] [ p11 p12 ; p21 p22 ] [x1 ; x2] + (g/l)(1 − cos x1)

For the quadratic form (1/2) x^T P x to be positive definite, the elements of the matrix P must satisfy

p11 > 0 ; p22 > 0 ; p11 p22 − p12² > 0

The derivative V̇ (x) is given by

V̇ (x) = (p11 x1 + p12 x2 + (g/l) sin x1) x2 + (p12 x1 + p22 x2)(−(g/l) sin x1 − (k/m) x2)
      = (g/l)(1 − p22) x2 sin x1 − (g/l) p12 x1 sin x1 + (p11 − (k/m) p12) x1 x2 + (p12 − (k/m) p22) x2²

Now we want to choose p11, p12 and p22 such that V̇ (x) is negative definite. We can cancel the cross terms x2 sin x1 and x1 x2 by taking p22 = 1 and p11 = (k/m) p12. Then p12 must satisfy 0 < p12 < k/m for V (x) to be positive definite.
Let us take p12 = (1/2)(k/m); then V̇ (x) is given by

V̇ (x) = −(1/2)(g/l)(k/m) x1 sin x1 − (1/2)(k/m) x2²

The term x1 sin x1 > 0 for all 0 < |x1| < π. Taking D = {x ∈ R2 : |x1| < π}, we see that V (x) is positive definite and V̇ (x) is negative definite over D. Thus, by Theorem (1.4.5), we conclude that the origin is asymptotically stable.
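The algebra in this construction is easy to get wrong, so it is worth verifying symbolically. The sketch below (ours) substitutes p22 = 1, p11 = (k/m) p12 and p12 = k/(2m) and checks that V̇ reduces to the stated closed form:

```python
import sympy as sp

x1, x2, g, l, k, m = sp.symbols('x1 x2 g l k m', positive=True)

# the parameter choice derived in the text
p12 = k / (2 * m)
p11, p22 = (k / m) * p12, sp.Integer(1)
P = sp.Matrix([[p11, p12], [p12, p22]])

x = sp.Matrix([x1, x2])
# friction pendulum: x1' = x2, x2' = -(g/l) sin x1 - (k/m) x2
f = sp.Matrix([x2, -(g / l) * sp.sin(x1) - (k / m) * x2])

V = (x.T * P * x)[0, 0] / 2 + (g / l) * (1 - sp.cos(x1))
Vdot = sp.simplify((sp.Matrix([V]).jacobian(x) * f)[0, 0])

expected = -(g / l) * (k / m) * x1 * sp.sin(x1) / 2 - (k / m) * x2**2 / 2
assert sp.simplify(Vdot - expected) == 0
print("Vdot matches the closed form")
```

The same script can be reused to test other choices of p11, p12, p22 before committing to the hand computation.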
1.5 Linear Systems and Linearization
Definition 1.5.1. Consider a nonlinear dynamical system represented in the vector form

ẋ = f (x(t)) (1.28)

We say x̄ is an equilibrium point of (1.28) (sometimes referred to as a singular point or critical point) when ẋ = 0; that is, when f (x̄) = 0.
Definition 1.5.2. Linearization is a technique that expands f (x) in a first-order Taylor series plus higher-order terms in a neighbourhood of the equilibrium point x = x̄. Thus, equation (1.28) can be written in the neighbourhood of the equilibrium point x = x̄ as

f (x) = f (x̄) + (∂f/∂x)(x̄)(x − x̄) + H.O.T.

Linearizing means that we drop the higher-order terms. Hence

ẋ = f (x) ≈ f (x̄) + (∂f/∂x)(x̄)(x − x̄)

Note that usually x̄ = 0, which means the equilibrium point is the origin (i.e. f (0) = 0).
The nonlinear system (1.28) is represented by two scalar differential equations

ẋ1 = f1 (x1, x2) (1.29)
ẋ2 = f2 (x1, x2) (1.30)
Let p = (p1, p2) be an equilibrium point of the nonlinear system (1.29),(1.30), and suppose that the functions f1, f2 are continuously differentiable.
Linearization requires expanding the right-hand side of (1.29),(1.30) into its
Taylor series about the point (p1 , p2 ), we obtain
ẋ1 = f1 (p1 , p2 ) + a11 (x1 − p1 ) + a12 (x2 − p2 ) + H.O.T
x˙2 = f2 (p1 , p2 ) + a21 (x1 − p1 ) + a22 (x2 − p2 ) + H.O.T
where

a11 = ∂f1/∂x1 |_{x1=p1, x2=p2} , a12 = ∂f1/∂x2 |_{x1=p1, x2=p2}
a21 = ∂f2/∂x1 |_{x1=p1, x2=p2} , a22 = ∂f2/∂x2 |_{x1=p1, x2=p2}

and H.O.T. denotes terms of the form (x1 − p1)², (x2 − p2)², (x1 − p1)(x2 − p2), and so on.
Since (p1 , p2 ) is an equilibrium point, we have
f1 (p1 , p2 ) = f2 (p1 , p2 ) = 0
Since we are interested in the trajectories near (p1 , p2 ), we define
y1 = x1 − p1 , y2 = x2 − p2
and rewrite the state equation as
y˙1 = x˙1 = a11 y1 + a12 y2 + H.O.T
y˙2 = x˙2 = a21 y1 + a22 y2 + H.O.T
We may now neglect the higher-order terms and approximate the nonlinear state equation by the linear state equation

ẏ1 = a11 y1 + a12 y2
ẏ2 = a21 y1 + a22 y2

Rewriting these equations in vector form, we obtain
ẏ = Ay
where

A = [ a11 a12 ; a21 a22 ] = [ ∂f1/∂x1 ∂f1/∂x2 ; ∂f2/∂x1 ∂f2/∂x2 ] |_{x=p} = (∂f/∂x) |_{x=p}

The matrix [∂f/∂x] is called the Jacobian matrix of f (x), and A is the Jacobian matrix evaluated at the equilibrium point p.
Example 1.5.1. Consider the state-space model given in Example (1.2.1). The Jacobian matrix of the function f (x) is given by

∂f/∂x = [ −0.5 h′(x1)   0.5 ; −0.2   −0.3 ]

where

h′(x1) = dh/dx1 = 17.76 − 207.58 x1 + 688.86 x1² − 905.24 x1³ + 418.6 x1⁴

Evaluating the Jacobian matrix at the equilibrium points Q1 = (0.063, 0.758), Q2 = (0.285, 0.61), and Q3 = (0.884, 0.21) respectively yields the three matrices

A1 = [ −3.598 0.5 ; −0.2 −0.3 ] , Eigenvalues: −3.57, −0.33
A2 = [ 1.82 0.5 ; −0.2 −0.3 ] , Eigenvalues: 1.77, −0.25
A3 = [ −1.427 0.5 ; −0.2 −0.3 ] , Eigenvalues: −1.33, −0.4
Thus, Q1 is a stable node, Q2 is a saddle point, and Q3 is a stable node.
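The eigenvalue computations above can be reproduced in a few lines; this sketch (ours, alongside the thesis' MATLAB listings) also checks the sign pattern that distinguishes the stable nodes from the saddle:

```python
import numpy as np

A1 = np.array([[-3.598, 0.5], [-0.2, -0.3]])
A2 = np.array([[ 1.82,  0.5], [-0.2, -0.3]])
A3 = np.array([[-1.427, 0.5], [-0.2, -0.3]])

for name, A in [("Q1", A1), ("Q2", A2), ("Q3", A3)]:
    print(name, np.sort(np.linalg.eigvals(A).real))

# Q1 and Q3: both eigenvalues negative (stable nodes)
assert np.linalg.eigvals(A1).real.max() < 0
assert np.linalg.eigvals(A3).real.max() < 0
# Q2: one positive, one negative eigenvalue (saddle point)
assert (np.linalg.eigvals(A2).real > 0).sum() == 1
```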
Consider the nonlinear system (1.14)
ẋ = f (x)
where f : D → Rn is a continuously differentiable map from a domain D ⊂ Rn into Rn. Suppose that the origin x = 0 is in the interior of D and is an equilibrium point for the system; that is, f (0) = 0. By the mean value theorem,

fi (x) = fi (0) + (∂fi/∂x)(zi) x

where zi is a point on the line segment connecting x to the origin. Since f (0) = 0, we can write fi (x) as

fi (x) = (∂fi/∂x)(0) x + [ (∂fi/∂x)(zi) − (∂fi/∂x)(0) ] x

which corresponds to

f (x) = Ax + g(x)

where

A ≡ (∂f/∂x)(0) , gi (x) = [ (∂fi/∂x)(zi) − (∂fi/∂x)(0) ] x

The function gi (x) satisfies

||gi (x)|| ≤ || (∂fi/∂x)(zi) − (∂fi/∂x)(0) || ||x||

By continuity of ∂f/∂x, we have

||g(x)|| / ||x|| → 0 as ||x|| → 0

This suggests that in a small neighbourhood of the origin we can approximate the nonlinear system (1.14) by its linearization about the origin

ẋ = Ax , where A = (∂f/∂x) |_{x=0}
The linear time-invariant system
ẋ = Ax(t)
(1.31)
has an equilibrium point at the origin.The equilibrium point is isolated if and
only if det(A) 6= 0.If det(A) = 0, the matrix A has nontrivial null space.Every
point of the null space of A is an equilibrium point for the system (1.31).In
other words, if det(A) = 0, the system has an equilibrium subspace.
Stability properties of the origin can be characterized by the locations of the
eigenvalues of the matrix A.
The solution of (1.31) for a given initial state x(0) is given by
x(t) = e^{At} x(0) (1.32)
The structure of eAt can be understood by considering the Jordan form of A.
Theorem 1.5.1. The equilibrium point x = 0 of (1.31) is stable if and only if all eigenvalues of A satisfy Re λi ≤ 0 and every eigenvalue with Re λi = 0 has an associated Jordan block of order one. The equilibrium point x = 0 is (globally) asymptotically stable if and only if all eigenvalues of A satisfy Re λi < 0; that is, A is Hurwitz.
Proof: From (1.32) we can see that the origin is stable if and only if e^{At} is a bounded function of t, ∀ t ≥ 0; that is, if there exists α > 0 such that ||e^{At}|| < α for all t ≥ 0. If an eigenvalue of A is in the open right-half plane, the corresponding exponential term e^{λi t} in (1.33) will grow unbounded as t → ∞. Therefore, we must restrict the eigenvalues to lie in the closed left-half plane.
Notation:
When all eigenvalues of A satisfy Re(λi ) &lt; 0, A is called a stability matrix
or a Hurwitz matrix.
For any matrix A there is a nonsingular matrix P that transforms A into its Jordan form; that is,

P^{−1} A P = J = block diag [J1, J2, · · · , Jm]

where Ji is the Jordan block associated with the eigenvalue λi of A. A Jordan block of order m takes the form

Ji = [ λi 1 0 · · · 0 ; 0 λi 1 · · · 0 ; … ; 0 · · · 0 λi 1 ; 0 · · · · · · 0 λi ]_{m×m}

Therefore,

e^{At} = P e^{Jt} P^{−1} = Σ_{i=1}^m Σ_{j=1}^{ni} t^{j−1} e^{λi t} pij (A) (1.33)
where m is the number of the distinct eigenvalues of A, ni is the order of the
ith Jordan block and pij (A) are constant matrices (refer to the Appendix).
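Because each term in (1.33) carries a factor e^{λi t}, ||e^{At}|| decays to zero whenever A is Hurwitz. The sketch below (ours; the matrix and helper name are illustrative, and the matrix exponential is computed via eigendecomposition, which assumes A is diagonalizable) demonstrates this decay:

```python
import numpy as np

A = np.array([[0.0, 1.0], [-2.0, -3.0]])   # Hurwitz: eigenvalues -1 and -2

def expm_diag(A, t):
    # matrix exponential via eigendecomposition (A assumed diagonalizable)
    w, V = np.linalg.eig(A)
    return (V @ np.diag(np.exp(w * t)) @ np.linalg.inv(V)).real

norms = [np.linalg.norm(expm_diag(A, t), 2) for t in (0.0, 1.0, 5.0, 10.0)]
print(norms)

assert abs(norms[0] - 1.0) < 1e-9   # e^{A*0} = I
assert norms[-1] < 1e-3             # ||e^{At}|| -> 0 for Hurwitz A
```

For a matrix with nontrivial Jordan blocks the same decay holds, but the transient includes the polynomial factors t^{j−1} from (1.33).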
1.5.1 Investigating the Asymptotic Stability of the LTI dynamical system (1.31) via Theorem (1.4.5)
Consider a quadratic Lyapunov function candidate
V (x) = xT P x
where P is a real symmetric positive definite matrix.
The derivative of V along the trajectories of the linear system (1.31) is given by
V̇ (x) = xT P ẋ + ẋT P x = xT (P A + AT P )x = −xT Qx
where Q is a symmetric matrix defined by
P A + AT P = −Q
(1.34)
If Q is positive definite, so that V′(x) f (x) = −x^T Qx < 0, we conclude by Theorem (1.4.5) that the origin is asymptotically stable; i.e., Re λi < 0 for all eigenvalues of A. Here we followed the usual Lyapunov procedure: we chose V (x) to be positive definite and then checked the negative definiteness of V̇ (x).
Suppose instead that we start by choosing Q as a real symmetric positive definite matrix and solve (1.34) for P. If (1.34) has a positive definite solution, then we conclude that the origin is asymptotically stable. Equation (1.34) is called the Lyapunov equation.
Theorem 1.5.2. [1] A matrix A is a stability matrix (Hurwitz matrix), that is, Re(λi) < 0 for all eigenvalues of A, if and only if for any given positive definite symmetric matrix Q there exists a positive definite symmetric matrix P that satisfies the Lyapunov equation (1.34). Moreover, if A is a stability matrix, then P is the unique solution of (1.34).
Proof. From Theorem (1.4.5) with Lyapunov function V (x) = x^T P x, assume that all eigenvalues of A satisfy Re(λi) < 0 and consider the matrix P defined by

P = ∫_0^∞ e^{A^T t} Q e^{At} dt (1.35)

The integrand is a sum of terms of the form t^{k−1} e^{λi t}, where Re λi < 0. Therefore, the integral exists. The matrix P is symmetric and positive definite. To show that it is positive definite, suppose that it is not. Then there is a vector x ≠ 0 such that x^T P x = 0. However,

x^T P x = 0 ⇒ ∫_0^∞ x^T e^{A^T t} Q e^{At} x dt = 0 ⇒ e^{At} x ≡ 0 , ∀ t ≥ 0 ⇒ x = 0

This contradiction shows that P is positive definite. Substitution of (1.35) in the left-hand side of (1.34) yields

P A + A^T P = ∫_0^∞ e^{A^T t} Q e^{At} A dt + ∫_0^∞ A^T e^{A^T t} Q e^{At} dt
            = ∫_0^∞ (d/dt)( e^{A^T t} Q e^{At} ) dt = e^{A^T t} Q e^{At} |_0^∞ = −Q

which means that P is indeed a solution of (1.34).
To show that P ≥ 0 and P = P^T directly, let C ∈ Rm×n be such that Q = C^T C; then

x^T P x = ∫_0^∞ x^T e^{A^T t} C^T C e^{At} x dt = ∫_0^∞ ||C e^{At} x||₂² dt

which shows that P ≥ 0 and P = P^T.
Example 1.5.2. Let

A = [ 0 −1 ; 1 −1 ] ; Q = [ 1 0 ; 0 1 ] ; P = [ p11 p12 ; p21 p22 ]

Since P is a symmetric positive definite matrix, p12 = p21. The Lyapunov equation P A + A^T P = −Q can then be written as

[ 0 2 0 ; −1 −1 1 ; 0 −2 −2 ] [ p11 ; p12 ; p22 ] = [ −1 ; 0 ; −1 ]

The unique solution of this equation is given by

[ p11 ; p12 ; p22 ] = [ 1.5 ; −0.5 ; 1.0 ] ⇒ P = [ 1.5 −0.5 ; −0.5 1.0 ]
• The Lyapunov equation can be used to test whether or not a matrix A is a stability matrix (Hurwitz matrix), as an alternative to computing the eigenvalues of A. One starts by choosing a positive definite matrix Q (most often Q = I) and then solves the Lyapunov equation (1.34) for P. If the equation has a positive definite solution, we conclude that A is a stability matrix (Hurwitz); otherwise, it is not.
MATLAB.m 3 (Example 1.5.2).
clear
A=[0 -1 ; 1 -1]
Q=[1 0 ; 0 1]
% lyap(A,Q) solves A*X + X*A' = -Q, so pass A' to solve P*A + A'*P = -Q
P=lyap(A',Q)
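The same computation can be done with plain linear algebra, which also serves as a check on the 3-by-3 system written out in Example (1.5.2). This sketch (ours) solves that system for (p11, p12, p22) and verifies that the resulting P satisfies P A + AᵀP = −Q and is positive definite:

```python
import numpy as np

# coefficient matrix of the 3x3 linear system from Example 1.5.2
M = np.array([[ 0.0,  2.0,  0.0],
              [-1.0, -1.0,  1.0],
              [ 0.0, -2.0, -2.0]])
b = np.array([-1.0, 0.0, -1.0])
p11, p12, p22 = np.linalg.solve(M, b)
P = np.array([[p11, p12], [p12, p22]])

A = np.array([[0.0, -1.0], [1.0, -1.0]])
Q = np.eye(2)
assert np.allclose(P @ A + A.T @ P, -Q)          # Lyapunov equation holds
assert np.all(np.linalg.eigvalsh(P) > 0)         # P is positive definite
print(P)
```

Note the sign convention: the first row of the system reads 2 p12 = −1, so p12 = −1/2 for the equation P A + AᵀP = −Q; MATLAB's lyap(A,Q) solves the transposed equation A X + X Aᵀ = −Q, which is why A' is passed in the listing above.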
1.5.2 Lyapunov Indirect Method
Theorem 1.5.3. [4] Let x = 0 be an equilibrium point of the nonlinear system

ẋ = f (x)

where f : D → Rn is continuously differentiable and D is a neighbourhood of the origin. Let

A = (∂f/∂x) |_{x=0}

Then
• The origin is asymptotically stable if Re(λi) < 0 for all eigenvalues of A.
• The origin is unstable if ∃ λi such that Re(λi) > 0.
Proof. Let A be a stability matrix (Hurwitz matrix); then by Theorem (1.5.2), we know that for any positive definite symmetric matrix Q, the solution P of the Lyapunov equation (1.34) is positive definite. We use

V (x) = x^T P x

as a Lyapunov function candidate for the nonlinear system. The derivative of V (x) along the trajectories of the system is given by

V̇ (x) = x^T P f (x) + f^T (x)P x
      = x^T P [Ax + g(x)] + [x^T A^T + g^T (x)]P x
      = x^T (P A + A^T P )x + 2x^T P g(x)
      = −x^T Qx + 2x^T P g(x)

The first term on the right-hand side is negative definite, while the second term is indefinite. The function g(x) satisfies

||g(x)||₂ / ||x||₂ → 0 as ||x||₂ → 0
Therefore, for any γ &gt; 0 there exists r &gt; 0 such that
||g(x)||2 &lt; γ||x||2 , ∀ ||x||2 &lt; r
Hence,
V̇ (x) &lt; −xT Qx + 2γ||P ||2 ||x||22
but
xT Qx ≥ λmin (Q)||x||22
where λmin(·) denotes the minimum eigenvalue of a matrix. Note that λmin(Q) is real and positive since Q is symmetric and positive definite. Thus

V̇ (x) < − [λmin(Q) − 2γ||P ||₂] ||x||₂² , ∀ ||x||₂ < r

Choosing γ < (1/2) λmin(Q)/||P ||₂ ensures that V̇ (x) is negative definite. By Theorem (1.4.5), we conclude that the origin is asymptotically stable.
Theorem (1.5.3) provides us with a simple procedure for determining
stability of an equilibrium point at the origin.We calculate the Jacobian
matrix
∂f
A=
∂x x=0
and test its eigenvalues.
• If Re(λi) < 0 ∀ i (or Re(λi) > 0 for some i), we conclude that the origin is asymptotically stable (or unstable, respectively).
• When Re(λi) < 0 ∀ i, we can also find a quadratic Lyapunov function for the system that will work locally in some neighbourhood of the origin. The Lyapunov function is the quadratic form V (x) = x^T P x, where P is the solution of the Lyapunov equation (1.34) for any positive definite symmetric matrix Q.
• When Re(λi) ≤ 0 ∀ i with Re(λj) = 0 for some j, linearization fails to determine stability of the equilibrium point.
To show local exponential stability, one also notes that

λmin(P )||x||₂² ≤ V (x) ≤ λmax(P )||x||₂²
Remark:
If the equilibrium point is not the origin (xe ≠ 0), then Theorem (1.5.3) holds with A = (∂f/∂x)|_{x=0} replaced by A = (∂f/∂x)|_{x=xe}, where xe is the non-origin equilibrium point.
Example 1.5.3. Consider the pendulum equation

ẋ1 = x2
ẋ2 = −(g/l) sin x1 − (k/m) x2

which has two equilibrium points at (x1, x2) = (0, 0) and (x1, x2) = (π, 0). Using linearization, the Jacobian matrix is given by

∂f/∂x = [ ∂f1/∂x1 ∂f1/∂x2 ; ∂f2/∂x1 ∂f2/∂x2 ] = [ 0 1 ; −(g/l) cos x1 −k/m ]

Evaluating the Jacobian at x = 0,

A = (∂f/∂x) |_{x=0} = [ 0 1 ; −g/l −k/m ]

The eigenvalues of A are

λ1,2 = −k/(2m) ± (1/2) √( (k/m)² − 4g/l )
If the eigenvalues satisfy Re(λi) < 0, the equilibrium point at the origin is asymptotically stable. If there is no friction (k = 0), then both eigenvalues lie on the imaginary axis; in this case, we cannot determine stability of the origin through linearization.
MATLAB.m 4 (Example 1.5.3).
clear
syms x y g l k m
A=jacobian([ y , (-(g/l) * sin(x)) - ((k/m)*y) ] , [x , y])
x=0
y=0
Ax0 = eval(A)
eig(Ax0)
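Alongside the symbolic MATLAB computation, a quick numerical check (ours; the parameter values g = 9.8, l = 1, k = 0.5, m = 1 are illustrative) confirms both cases discussed above:

```python
import numpy as np

# eigenvalues of the linearization A = [[0, 1], [-g/l, -k/m]] at the origin
def eigs(g, l, k, m):
    A = np.array([[0.0, 1.0], [-g / l, -k / m]])
    return np.linalg.eigvals(A)

# with friction (k > 0): both eigenvalues have negative real part
lam = eigs(g=9.8, l=1.0, k=0.5, m=1.0)
assert np.all(lam.real < 0)

# without friction (k = 0): purely imaginary eigenvalues, so
# linearization cannot decide stability of the origin
lam0 = eigs(g=9.8, l=1.0, k=0.0, m=1.0)
assert np.allclose(lam0.real, 0.0)
print(lam, lam0)
```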
1.6 The Invariance Principle (LaSalle's Invariance Principle)
In our study of the pendulum equation with friction (see Example (1.4.6)), we saw that the energy Lyapunov function fails to satisfy the asymptotic stability condition of Theorem (1.4.5) because V̇ (x) = −(k/m) x2² is only negative semidefinite. Notice that V̇ (x) is negative everywhere except on the line x2 = 0, where V̇ (x) = 0. For the system to maintain the condition V̇ (x) = 0, the trajectory of the system must be confined to the line x2 = 0.
If in a domain about the origin we can find a Lyapunov function whose
derivative along the trajectories of the system is negative semidefinite
and if we can establish that no trajectory can stay identically at points where
V̇ (x) = 0 except at the origin, then the origin is asymptotically stable.
This idea follows from LaSalle's invariance principle. Before we state LaSalle's invariance principle, we need to introduce some definitions:
• Positive limit point: A point p is said to be a positive limit point
of x (t) if there is a sequence {tn } with tn → ∞ as n → ∞ such that
x (tn ) → p as n → ∞ .
• Positive limit set: The set of all positive limit points of x (t) is called
the positive limit set of x (t).
• Invariant set: A set M is said to be an invariant set with respect to
ẋ = f (x)
(1.36)
if,

x (0) ∈ M ⇒ x (t) ∈ M , ∀ t ∈ R

That is, if a solution belongs to M at some time instant, then it belongs to M for all future and past time.
• Positively invariant set: A set M is said to a positively invariant
set if
x (0) ∈ M ⇒ x (t) ∈ M , ∀ t ≥ 0
The asymptotically stable equilibrium is the positive limit set of every
solution starting sufficiently near the equilibrium point .
• Stable limit cycle: The stable limit cycle is the positive limit set of
every solution sufficiently near the limit cycle.The solution approaches
the limit cycle as t → ∞ .The equilibrium point and the limit cycle are
invariant sets,since any solution starting in either set remains in the set
for all t ∈ R.
Theorem 1.6.1. (LaSalle's Theorem) Let Ω ⊂ D be a compact set that is positively invariant with respect to (1.36). Let V : D → R be a continuously differentiable function such that V̇ (x) ≤ 0 for x ∈ Ω. Let E be the set of all points in Ω where V̇ (x) = 0; that is, E = {x ∈ Ω : V̇ (x) = 0}, and let M be the largest invariant set in E. Then every solution starting in Ω approaches M as t → ∞.
Unlike Lyapunov's theorem, Theorem (1.6.1) does not require the function V (x) to be positive definite. Our interest is to show that x (t) → 0 as t → ∞, so we need to establish that the largest invariant set in E is the origin (x = 0). This is shown by proving that no solution can stay identically in E other than the trivial solution x (t) ≡ 0.
Corollary 1.6.1. Let x = 0 be an equilibrium point for (1.36). Let V : D →
R be a continuously differentiable positive definite function on a domain
D containing the origin x = 0, such that V̇(x) ≤ 0 in D, and let
ω = {x ∈ D : V̇(x) = 0}. Suppose that no solution can stay identically in ω
other than the trivial solution {0}. Then, the origin is asymptotically stable.
Corollary 1.6.2. Let x = 0 be an equilibrium point for (1.36). Let V : Rⁿ →
R be a continuously differentiable, radially unbounded, positive definite
function such that V̇(x) ≤ 0, ∀ x ∈ Rⁿ, and let Ω = {x ∈ Rⁿ : V̇(x) = 0}.
Suppose that no solution can stay identically in Ω other than the trivial
solution. Then, the origin is globally asymptotically stable.
Example 1.6.1. Consider the system
ẋ1 = x2
ẋ2 = −g(x1) − h(x2)
where g(·) and h(·) are locally Lipschitz and satisfy
g(0) = 0 , yg(y) > 0 ∀ y ≠ 0 , y ∈ (−a, a)
h(0) = 0 , yh(y) > 0 ∀ y ≠ 0 , y ∈ (−a, a)
The system has an isolated equilibrium point at the origin. A Lyapunov
function candidate may be taken as the energy-like function
V(x) = ∫₀^{x1} g(y) dy + ½x2²
Let D = {x ∈ R² : −a < xᵢ < a}. V(x) is positive definite in D. The
derivative of V(x) along the trajectories of the system is given by
V̇(x) = g(x1)x2 + x2[−g(x1) − h(x2)] = −x2 h(x2) ≤ 0
Thus, V̇(x) is negative semidefinite.
To characterize the set S = {x ∈ D : V̇(x) = 0}, note that
V̇(x) = 0 ⇒ x2 h(x2) = 0 ⇒ x2 = 0 , since −a < x2 < a
Hence, S = {x ∈ D : x2 = 0}. Suppose x(t) is a trajectory that belongs to
S. Then
x2(t) ≡ 0 ⇒ ẋ2(t) = 0 ⇒ g(x1(t)) = 0 ⇒ x1(t) = 0
Therefore, the only solution that can stay identically in S is the trivial
solution x(t) = 0. Thus, the origin is asymptotically stable.
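As a quick numerical sanity check (not part of the thesis), the following Python sketch simulates the system of Example 1.6.1 with the hypothetical choices g(y) = y and h(y) = y, for which V(x) = ½x1² + ½x2² and V̇ = −x2² is only negative semidefinite; LaSalle's argument nevertheless forces the trajectory to the origin.

```python
def g(y):
    return y        # satisfies g(0) = 0 and y*g(y) > 0 for y != 0

def h(y):
    return y        # satisfies h(0) = 0 and y*h(y) > 0 for y != 0

def V(x1, x2):
    # V(x) = integral_0^{x1} g(y) dy + x2^2/2 = (x1^2 + x2^2)/2 here
    return 0.5 * x1 * x1 + 0.5 * x2 * x2

def simulate(x1, x2, dt=1e-3, steps=50_000):
    # forward-Euler integration of x1' = x2, x2' = -g(x1) - h(x2)
    for _ in range(steps):
        x1, x2 = x1 + dt * x2, x2 + dt * (-g(x1) - h(x2))
    return x1, x2

x1f, x2f = simulate(1.0, 0.0)   # integrate up to t = 50
```

The step size and horizon are arbitrary; with any admissible g, h the same check applies.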
Example 1.6.2. Consider the system of Example (1.6.1) but this time with
a = ∞, and assume that g(·) satisfies the additional condition
∫₀^y g(z) dz → ∞ as |y| → ∞
The Lyapunov function
V(x) = ∫₀^{x1} g(y) dy + ½x2²
is radially unbounded, it can be shown that V̇(x) ≤ 0 in R², and the set
S = {x ∈ R² : V̇(x) = 0} = {x ∈ R² : x2 = 0}
contains no solutions other than the trivial solution. Hence, the origin is
globally asymptotically stable.
Example 1.6.3. Consider the first-order system ẏ = ay + u with the adaptive
feedback control law
u = −ky ; k̇ = γy² ; γ > 0
Setting x1 = y and x2 = k, so that ẋ1 = ẏ and ẋ2 = k̇, the closed-loop
system is represented by
ẋ1 = −(x2 − a)x1
ẋ2 = γx1²
The line x1 = 0 is an equilibrium set for this system. We want to show that
the trajectory of the system approaches this equilibrium set as t → ∞, which
means that the feedback controller succeeds in regulating y to zero.
Consider the Lyapunov function candidate V(x) = ½x1² + (1/2γ)(x2 − b)²,
where b > a. The derivative of V along the trajectories of the system is given
by
V̇(x) = x1ẋ1 + (1/γ)(x2 − b)ẋ2 = −x1²(b − a) ≤ 0
Hence, V̇(x) ≤ 0, and since V(x) is radially unbounded, the set Ωc =
{x ∈ R² : V(x) ≤ c} is a compact, positively invariant set. Thus, all
conditions of Theorem (1.6.1) are satisfied.
The set E is given by E = {x ∈ Ωc : x1 = 0}.
Since any point on the line x1 = 0 is an equilibrium point, E is an invariant
set. Therefore, M = E, and from Theorem (1.6.1) we conclude that every
trajectory starting in Ωc approaches E as t → ∞; that is, x1(t) → 0 as
t → ∞.
Moreover, since V(x) is radially unbounded, this conclusion is global; that
is, it holds for all initial conditions x(0), because for any x(0) the constant c
can be chosen large enough that x(0) ∈ Ωc.
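A minimal numerical sketch of Example 1.6.3 (with the illustrative values a = 1, γ = 1, b = 2, which are not from the text): the adaptive gain x2 rises until it exceeds a, after which x1 is driven to zero while V never increases above its initial value.

```python
a, gamma, b = 1.0, 1.0, 2.0

def V(x1, x2):
    # V(x) = x1^2/2 + (x2 - b)^2 / (2*gamma), with b > a
    return 0.5 * x1 * x1 + (x2 - b) ** 2 / (2.0 * gamma)

x1, x2, dt = 1.0, 0.0, 1e-3
v0 = V(x1, x2)
for _ in range(50_000):          # forward Euler up to t = 50
    x1, x2 = x1 + dt * (-(x2 - a) * x1), x2 + dt * gamma * x1 * x1
```

Note that x1 may grow transiently while x2 < a; the level set Ωc with c = V(0) still confines the trajectory.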
1.7 The Partial Stability of Nonlinear Dynamical Systems
Consider the nonlinear autonomous dynamical system:
ẋ1 = f1(x1(t), x2(t)) , x1(t0) = x10 , t ≥ t0     (1.37)
ẋ2 = f2(x1(t), x2(t)) , x2(t0) = x20     (1.38)
where x1 ∈ D ⊂ R^{n1}, 0 ∈ D and x2 ∈ R^{n2}, such that f1 : D × R^{n2} → R^{n1}
satisfies f1(0, x2) = 0 and f1(·, x2) is locally Lipschitz in x1.
Similarly, f2 : D × R^{n2} → R^{n2} is such that f2(x1, ·) is locally Lipschitz in x2.
Assume the solution (x1(t), x2(t)) to the system (1.37), (1.38) exists and is
unique.
The following theorem introduces eight types of partial stability, that is,
stability with respect to x1, for the nonlinear dynamical system (1.37) and
(1.38).
Theorem 1.7.1. [4] The nonlinear dynamical systems (1.37) and (1.38) are
• Lyapunov stable with respect to x1 if, for every ε > 0 and x20 ∈
R^{n2}, there exists δ = δ(ε, x20) > 0 such that ||x10|| < δ implies that
||x1(t)|| < ε for all t ≥ 0 (see Figure 1.21(a)).

[Figure 1.21: (a) Partial Lyapunov stability with respect to x1. (b) Partial
asymptotic stability with respect to x1. Here x1 = [y1 y2]^T, x2 = z, and
x = [x1^T x2^T]^T.]

• Lyapunov stable with respect to x1 uniformly in x20, if for
every ε > 0 there exists δ = δ(ε) > 0 such that ||x10|| < δ implies that
||x1(t)|| < ε for all t ≥ 0 and for all x20 ∈ R^{n2}.
• asymptotically stable with respect to x1, if it is Lyapunov stable
with respect to x1 and, for every x20 ∈ R^{n2}, there exists δ = δ(x20) > 0
such that ||x10|| < δ implies that lim_{t→∞} x1(t) = 0 (see Figure 1.21(b)).
• asymptotically stable with respect to x1 uniformly in x20, if it
is Lyapunov stable with respect to x1 uniformly in x20 and there exists
δ > 0 such that ||x10|| < δ implies that lim_{t→∞} x1(t) = 0 uniformly in x10
and x20 for all x20 ∈ R^{n2}.
• globally asymptotically stable with respect to x1, if it is Lyapunov
stable with respect to x1 and lim_{t→∞} x1(t) = 0 for all x10 ∈ R^{n1} and
x20 ∈ R^{n2}.
• globally asymptotically stable with respect to x1 uniformly in
x20, if it is Lyapunov stable with respect to x1 uniformly in x20 and
lim_{t→∞} x1(t) = 0 uniformly in x10 and x20 for all x10 ∈ R^{n1} and x20 ∈ R^{n2}.
• exponentially stable with respect to x1 uniformly in x20, if there
exist scalars α, β, δ > 0 such that ||x10|| < δ implies that ||x1(t)|| ≤
α||x10||e^{−βt}, t ≥ 0, ∀ x20 ∈ R^{n2}.
• globally exponentially stable with respect to x1 uniformly in
x20, if there exist scalars α, β > 0 such that ||x1(t)|| ≤ α||x10||e^{−βt}, t ≥
0, for all x10 ∈ R^{n1} and x20 ∈ R^{n2}.
The following theorem presents sufficient conditions for partial stability
of the nonlinear dynamical system (1.37) and (1.38) in the sense of Lyapunov
candidate functions. We now assume that:
• V̇ = V′(x1, x2) f(x1, x2)
• f(x1, x2) ≝ [f1^T(x1, x2) f2^T(x1, x2)]^T
• V : D × R^{n2} → R
Theorem 1.7.2. [4] Consider the nonlinear dynamical system (1.37) and
(1.38). Then the following statements hold:
• Lyapunov stable with respect to x1, if there exist a continuously
differentiable function V : D × R^{n2} → R and a class K function α(·)
such that
V(0, x2) = 0 , x2 ∈ R^{n2}     (1.39)
α(||x1||) ≤ V(x1, x2) , (x1, x2) ∈ D × R^{n2}     (1.40)
V̇(x1, x2) ≤ 0 , (x1, x2) ∈ D × R^{n2}     (1.41)
• Lyapunov stable with respect to x1 uniformly in x20, if there
exist a continuously differentiable function V : D × R^{n2} → R and class
K functions α(·), β(·) satisfying (1.40), (1.41), and
V(x1, x2) ≤ β(||x1||) , (x1, x2) ∈ D × R^{n2}     (1.42)
• asymptotically stable with respect to x1, if there exist continuously
differentiable functions V : D × R^{n2} → R and W : D × R^{n2} → R
and class K functions α(·), β(·), γ(·) such that Ẇ(x1(·), x2(·)) is
bounded from below or above, equations (1.39) and (1.40) hold, and
W(0, x2) = 0 , x2 ∈ R^{n2}     (1.43)
β(||x1||) ≤ W(x1, x2) , (x1, x2) ∈ D × R^{n2}     (1.44)
V̇(x1, x2) ≤ −γ(W(x1, x2)) , (x1, x2) ∈ D × R^{n2}     (1.45)
• asymptotically stable with respect to x1 uniformly in x20, if
there exist a continuously differentiable function V : D × R^{n2} → R and
class K functions α(·), β(·), γ(·) satisfying (1.40), (1.42), and
V̇(x1, x2) ≤ −γ(||x1||) , (x1, x2) ∈ D × R^{n2}     (1.46)
• globally asymptotically stable with respect to x1, if D = R^{n1}
and there exist continuously differentiable functions V : R^{n1} × R^{n2} → R
and W : R^{n1} × R^{n2} → R, class K functions β(·), γ(·), and a class K∞
function α(·) such that Ẇ(x1(·), x2(·)) is bounded from below or above,
and (1.39), (1.40), and (1.43) to (1.45) hold
• globally asymptotically stable with respect to x1 uniformly in
x20, if D = R^{n1} and there exist a continuously differentiable function
V : R^{n1} × R^{n2} → R, a class K function γ(·), and class K∞ functions
α(·), β(·) satisfying (1.40), (1.42), and (1.46)
• exponentially stable with respect to x1 uniformly in x20, if
there exist a continuously differentiable function V : R^{n1} × R^{n2} → R
and positive constants α, β, γ, p ≥ 1 satisfying
α||x1||^p ≤ V(x1, x2) ≤ β||x1||^p , (x1, x2) ∈ D × R^{n2}     (1.47)
V̇(x1, x2) ≤ −γ||x1||^p , (x1, x2) ∈ D × R^{n2}     (1.48)
• globally exponentially stable with respect to x1 uniformly in
x20, if D = R^{n1} and there exist a continuously differentiable function
V : R^{n1} × R^{n2} → R and positive constants α, β, γ, p ≥ 1 satisfying
(1.47) and (1.48).
By setting n1 = n and n2 = 0, Theorem (1.7.2) specializes to the case
of nonlinear autonomous systems of the form ẋ1(t) = f1(x1(t)). In this case,
Lyapunov stability with respect to x1 and Lyapunov stability with respect
to x1 uniformly in x20 are equivalent to the classical Lyapunov stability of
nonlinear autonomous systems.
Remark:
The condition V(0, x2) = 0, x2 ∈ R^{n2}, allows us to prove partial stability in
the sense of Theorem (1.7.1).
Example 1.7.1. Consider the nonlinear dynamical system
(M + m)q̈(t) + me[θ̈(t) cos θ(t) − θ̇²(t) sin θ(t)] + kq(t) = 0     (1.49)
(I + me²)θ̈(t) + me q̈(t) cos θ(t) = 0     (1.50)
where t ≥ 0, q(0) = q0, q̇(0) = q̇0, θ(0) = θ0, θ̇(0) = θ̇0, and M, m, k ≥ 0.
Let x1 = q, x2 = q̇, x3 = θ, x4 = θ̇, and consider the Lyapunov function
candidate
V(x1, x2, x3, x4) = ½[kx1² + (M + m)x2² + (I + me²)x4² + 2me x2x4 cos x3]
which can be written as
V(x1, x2, x3, x4) = ½kx1² + ½x^T P(x3)x
where x = [x2 x4]^T and
P(x3) = [ M + m      me cos x3 ]
        [ me cos x3  I + me²   ]
Since
λmin(P) = ½[M + m + I + me² − √((M + m − I − me²)² + 4m²e² cos² x3)]
and
λmax(P) = ½[M + m + I + me² + √((M + m − I − me²)² + 4m²e² cos² x3)]
we have
½kx1² + ½λmin(P)(x2² + x4²) ≤ V(x1, x2, x3, x4) ≤ ½kx1² + ½λmax(P)(x2² + x4²)
which implies that V(·) satisfies (1.40) and (1.44).
Since
V̇(x1, x2, x3, x4) = 0
it follows from (1.41) that (1.49) and (1.50) are Lyapunov stable with respect
to x1, x2 and x4 uniformly in x30.
Furthermore, it follows that the zero solution (q(t), q̇(t), θ(t), θ̇(t)) ≡ (0, 0, 0, 0)
to (1.49) and (1.50) is unstable in the standard sense but partially Lyapunov
stable with respect to q, q̇, and θ̇.
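Since (1.49), (1.50) model an undamped spring-coupled cart-pendulum, V is exactly the total energy and V̇ = 0 along trajectories. The sketch below (with made-up parameter values M, m, e, I, k, not taken from the text) integrates the equations with a Runge-Kutta step and checks that V stays constant up to integration error.

```python
import math

# made-up parameters (not from the text); the mass matrix stays invertible
M, m, e, I, k = 1.0, 0.2, 0.5, 0.1, 2.0

def accel(q, dq, th, dth):
    # solve the 2x2 linear system (1.49)-(1.50) for (q_ddot, theta_ddot)
    a11, a12 = M + m, m * e * math.cos(th)
    a21, a22 = m * e * math.cos(th), I + m * e * e
    b1 = m * e * dth * dth * math.sin(th) - k * q
    b2 = 0.0
    det = a11 * a22 - a12 * a21
    return (b1 * a22 - a12 * b2) / det, (a11 * b2 - a21 * b1) / det

def f(s):
    q, dq, th, dth = s
    qdd, thdd = accel(q, dq, th, dth)
    return (dq, qdd, dth, thdd)

def rk4_step(s, dt):
    k1 = f(s)
    k2 = f(tuple(si + 0.5 * dt * ki for si, ki in zip(s, k1)))
    k3 = f(tuple(si + 0.5 * dt * ki for si, ki in zip(s, k2)))
    k4 = f(tuple(si + dt * ki for si, ki in zip(s, k3)))
    return tuple(si + dt / 6.0 * (a + 2 * b + 2 * c + d)
                 for si, a, b, c, d in zip(s, k1, k2, k3, k4))

def V(s):
    # total energy: (1/2)[k q^2 + (M+m)dq^2 + (I+me^2)dth^2 + 2me dq dth cos th]
    q, dq, th, dth = s
    return 0.5 * (k * q * q + (M + m) * dq * dq + (I + m * e * e) * dth * dth
                  + 2.0 * m * e * dq * dth * math.cos(th))

s = (0.3, 0.0, 0.5, 0.0)
v0 = V(s)
for _ in range(10_000):       # integrate to t = 10 with dt = 1e-3
    s = rk4_step(s, 1e-3)
```

Conservation of V bounds (q, q̇, θ̇) but says nothing about θ itself, which is the partial-stability point of the example.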
1.7.1 Addressing Partial Stability When Both Initial Conditions (x10, x20) Lie in a Neighborhood of the Origin
For this result, we modify Theorem (1.7.2) to reflect the fact that the entire
initial state x0 = [x10^T x20^T]^T lies in a neighborhood of the origin, so that
||x10|| < δ is replaced by ||x0|| < δ.
Theorem 1.7.3. [4] Consider the nonlinear dynamical system (1.37) and
(1.38). Then the following statements hold:
• Lyapunov stable with respect to x1 if, there exist a continuously
differentiable function V : D × R^{n2} → R and a class K function α(·)
such that
V(0, 0) = 0     (1.51)
α(||x1||) ≤ V(x1, x2) , (x1, x2) ∈ D × R^{n2}     (1.52)
V̇(x1, x2) ≤ 0 , (x1, x2) ∈ D × R^{n2}     (1.53)
• Lyapunov stable with respect to x1 uniformly in x20 if, there
exist a continuously differentiable function V : D × R^{n2} → R and class
K functions α(·), β(·) satisfying (1.52), (1.53), and
V(x1, x2) ≤ β(||x||) , (x1, x2) ∈ D × R^{n2}     (1.54)
where x = [x1^T x2^T]^T.
• asymptotically stable with respect to x1 if, there exist continuously
differentiable functions V : D × R^{n2} → R and W : D × R^{n2} → R
and class K functions α(·), β(·), γ(·) such that Ẇ(x1(·), x2(·)) is
bounded from below or above, (1.52) holds, and
β(||x||) ≤ W(x1, x2) , (x1, x2) ∈ D × R^{n2}     (1.55)
V̇(x1, x2) ≤ −γ(W(x1, x2)) , (x1, x2) ∈ D × R^{n2}     (1.56)
• asymptotically stable with respect to x1 uniformly in x20 if,
there exist a continuously differentiable function V : D × R^{n2} → R and
class K functions α(·), β(·), γ(·) satisfying (1.52) and (1.54), and
V̇(x1, x2) ≤ −γ(||x||) , (x1, x2) ∈ D × R^{n2}     (1.57)
• globally asymptotically stable with respect to x1 if, D = R^{n1}
and there exist continuously differentiable functions V : R^{n1} × R^{n2} → R
and W : R^{n1} × R^{n2} → R, class K functions β(·), γ(·), and a class K∞
function α(·) such that Ẇ(x1(·), x2(·)) is bounded from below or above,
and (1.52), (1.55) and (1.56) hold
• globally asymptotically stable with respect to x1 uniformly in
x20 if, D = R^{n1} and there exist continuously differentiable functions
V : R^{n1} × R^{n2} → R, a class K function γ(·), and class K∞ functions
α(·), β(·) satisfying (1.52), (1.54), and (1.57)
• exponentially stable with respect to x1 uniformly in x20 if,
there exist a continuously differentiable function V : R^{n1} × R^{n2} → R
and positive constants α, β, γ, p ≥ 1 satisfying
α||x1||^p ≤ V(x1, x2) ≤ β||x||^p , (x1, x2) ∈ D × R^{n2}     (1.58)
V̇(x1, x2) ≤ −γ||x||^p , (x1, x2) ∈ D × R^{n2}     (1.59)
• globally exponentially stable with respect to x1 uniformly in
x20 if, D = R^{n1} and there exist a continuously differentiable function
V : R^{n1} × R^{n2} → R and positive constants α, β, γ, p ≥ 1 satisfying
(1.58) and (1.59).
1.8 Nonautonomous Systems
Consider the nonautonomous system
ẋ = f(t, x)     (1.60)
where f : [0, ∞) × D → Rⁿ is piecewise continuous in t and locally
Lipschitz in x on [0, ∞) × D, and D ⊂ Rⁿ is a domain that contains the
origin x = 0.
The origin is an equilibrium point for (1.60) at t = 0 if
f(t, 0) = 0 , ∀ t ≥ 0
An equilibrium at the origin could be a translation of a nonzero equilibrium
point.
To see this, suppose that ȳ(τ) is a solution of the system
dy/dτ = g(τ, y)
defined for all τ ≥ a. The change of variables
x = y − ȳ(τ) ; t = τ − a
transforms the system into the form
ẋ = g(τ, y) − ȳ˙(τ) = g(t + a, x + ȳ(t + a)) − ȳ˙(t + a) ≝ f(t, x)
Since
ȳ˙(t + a) = g(t + a, ȳ(t + a)) , t ≥ 0
the origin x = 0 is an equilibrium point of the transformed system at t = 0.
Thus, by examining the stability behaviour of the origin as an equilibrium
point for the transformed system, we determine the stability behaviour of the
solution ȳ(τ) of the original system.
The stability and the asymptotic stability of the equilibrium point of a
nonautonomous system are basically the same as those introduced in Theorem
(1.4.1) for autonomous systems.
Note that, while the solution of an autonomous system depends only on
(t − t0), the solution of a nonautonomous system may depend on both t
and t0.
The origin x = 0 is a stable equilibrium point for (1.60) if for each ε > 0 and
for any t0 ≥ 0 there is δ = δ(ε, t0) > 0 such that
||x(t0)|| ≤ δ ⇒ ||x(t)|| ≤ ε , ∀ t ≥ t0
where the constant δ depends on both the initial time t0 and ε.
Example 1.8.1. The linear first-order system
ẋ = −x/(1 + t)
has the solution
x(t) = x(t0) exp(∫_{t0}^{t} −1/(1 + τ) dτ) = x(t0) (1 + t0)/(1 + t)
Since
||x(t)|| ≤ ||x(t0)|| , ∀ t ≥ t0
the origin is stable. Moreover, according to Theorem (1.4.1), since
x(t) → 0 as t → ∞
the origin is asymptotically stable.
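The closed-form solution in Example 1.8.1 can be checked directly (a quick verification sketch; the values of t0 and x(t0) are arbitrary):

```python
t0, x0 = 1.0, 2.0

def x(t):
    # candidate solution x(t) = x(t0) * (1 + t0) / (1 + t)
    return x0 * (1 + t0) / (1 + t)

# verify xdot = -x/(1+t) by a central finite difference
h = 1e-6
residuals = []
for t in (1.0, 2.0, 5.0, 10.0):
    xdot = (x(t + h) - x(t - h)) / (2 * h)
    residuals.append(abs(xdot + x(t) / (1 + t)))
```

The residuals vanish up to finite-difference error, and |x(t)| decreases monotonically toward zero.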
Example 1.8.2. Consider the perturbed linear system
ẋ(t) = Ax(t) + B(t)x(t) , x(0) = x0
By the variation of parameters technique, we obtain the solution
x(t) = Φ(t)x0 + ∫₀^t Φ(t − τ)B(τ)x(τ) dτ
where Φ(t) = e^{At}. Assume that all the eigenvalues of the matrix A have
negative real parts; then ||Φ(t)|| ≤ βe^{−αt} for some constants α, β > 0 and
all t ≥ 0.
Hence,
||x(t)|| ≤ ||Φ(t)|| ||x0|| + ∫₀^t ||Φ(t − τ)|| ||B(τ)|| ||x(τ)|| dτ
        ≤ βe^{−αt}||x0|| + ∫₀^t βe^{−α(t−τ)}||B(τ)|| ||x(τ)|| dτ
Multiplying both sides by e^{αt} gives
||x(t)|| e^{αt} ≤ β||x0|| + ∫₀^t βe^{ατ}||B(τ)|| ||x(τ)|| dτ
Applying Gronwall's inequality to ||x(t)|| e^{αt}:
||x(t)|| e^{αt} ≤ β||x0|| exp(β ∫₀^t ||B(τ)|| dτ) ≤ β||x0|| exp(β ∫₀^∞ ||B(τ)|| dτ)
Thus,
||x(t)|| ≤ β||x0|| exp(β ∫₀^∞ ||B(τ)|| dτ) e^{−αt}
We conclude that if the integral ∫₀^∞ ||B(τ)|| dτ is finite and all eigenvalues
of A have negative real parts, then
• all solutions are bounded, and hence stable;
• lim_{t→∞} ||x(t)|| = 0, since α > 0 (all solutions are asymptotically stable).
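A scalar instance of the bound above (a sketch with made-up data: A = −α = −0.5 and the perturbation b(t) = 1/(1+t)², whose integral over [0, ∞) equals 1). Here the exact solution is available, so the Gronwall bound |x(t)| ≤ β|x0| exp(β∫₀^∞|b|) e^{−αt} with β = 1 can be checked pointwise:

```python
import math

alpha, x0 = 0.5, 1.0

def b(t):
    # integrable perturbation: integral_0^inf b = 1
    return 1.0 / (1.0 + t) ** 2

def x(t):
    # exact solution of xdot = (-alpha + b(t)) x, using integral_0^t b = t/(1+t)
    return x0 * math.exp(-alpha * t + t / (1.0 + t))

def bound(t):
    # Gronwall bound with beta = 1: |x0| * e^{1} * e^{-alpha t}
    return abs(x0) * math.e * math.exp(-alpha * t)
```

The bound holds because t/(1+t) < 1 for all t ≥ 0, and x(t) → 0 as the theory predicts.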
1.8.1 The Unification between Time-Invariant Stability Theory and Stability Theory for Time-Varying Systems through the Partial Stability Theory
Partial stability theory provides a unification between time-invariant stability
theory and stability theory for time-varying systems.
Consider the time-varying nonlinear dynamical system
ẋ(t) = f(t, x(t)) , x(t0) = x0 , t ≥ t0
where x(t) ∈ Rⁿ, t ≥ t0, and f : [t0, t1) × Rⁿ → Rⁿ.
Set x1(τ) ≡ x(t) and x2(τ) ≡ t, where τ = t − t0. Then the solution x(t),
t ≥ t0, to the nonlinear dynamical system above can be equivalently
characterized by the solution x1(τ), τ ≥ 0, to the nonlinear autonomous
dynamical system
ẋ1(τ) = f(x2(τ), x1(τ)) , x1(0) = x0 , τ ≥ 0     (1.61)
ẋ2(τ) = 1 , x2(0) = t0     (1.62)
where ẋ1(·) and ẋ2(·) denote differentiation with respect to τ.
Example 1.8.3. Consider the linear time-varying dynamical system with
f(t, x) = A(t)x
where A : [0, ∞) → R^{n×n} is continuous, so we are given
ẋ(t) = A(t)x(t) , x(t0) = x0 , t ≥ t0
Next, let n1 = n, n2 = 1, x1(τ) = x(t), x2(τ) = t, f1(x1(t), x2(t)) =
A(t)x(t), and f2(x1(t), x2(t)) = 1.
Hence, the solution x(t), t ≥ t0, to the dynamical system above can
be equivalently characterized by the solution x1(τ), τ ≥ 0, to the nonlinear
autonomous dynamical system
ẋ1(τ) = A(x2(τ)) x1(τ) , x1(0) = x0 , τ ≥ 0     (1.63)
ẋ2(τ) = 1 , x2(0) = t0     (1.64)
where ẋ1(·) and ẋ2(·) denote differentiation with respect to τ.
Example 1.8.4. Consider the spring-mass-damper system with time-varying
damping coefficient
q̈(t) + c(t)q̇(t) + kq(t) = 0 , q(0) = q0 , q̇(0) = q̇0 , t ≥ 0
Assume that c(t) = 3 + sin t and k > 1.
To use Theorem (1.7.2), set z1 = q and z2 = q̇, so that the dynamical system
can be equivalently written as
ż1(t) = z2(t) , z1(0) = q0 , t ≥ 0     (1.65)
ż2(t) = −kz1(t) − c(t)z2(t) , z2(0) = q̇0     (1.66)
Let n1 = 2, n2 = 1, x1 = [z1 z2]^T, x2 = t, f1(x1, x2) = [x1^T v  −x1^T h(x2)]^T
and f2(x1, x2) = 1, where h(x2) = [k c(x2)]^T and v = [0 1]^T.
Now, the solution (z1(t), z2(t)), t ≥ 0, to the nonlinear time-varying
dynamical system (1.65) and (1.66) is equivalently characterized by the
solution x1(τ), τ ≥ 0, to the nonlinear autonomous dynamical system:
ẋ1(τ) = f1(x1(τ), x2(τ)) , x1(0) = [q0 q̇0]^T , τ ≥ 0     (1.67)
ẋ2(τ) = 1 , x2(0) = 0     (1.68)
where ẋ1(τ) and ẋ2(τ) denote differentiation with respect to τ.
To observe the stability of this system, assume the Lyapunov function
candidate
V(x1, x2) = x1^T P(x2)x1 , where P(x2) = [ k + 3 + sin(x2)  1 ]
                                         [ 1                1 ]
Since
x1^T P1 x1 ≤ V(x1, x2) ≤ x1^T P2 x1 , (x1, x2) ∈ R² × R
where
P1 = [ k + 2  1 ]        P2 = [ k + 4  1 ]
     [ 1      1 ]             [ 1      1 ]
it follows that V(x1, x2) satisfies (1.47) with D = R² and p = 2.
Next, since
V̇(x1, x2) = −2x1^T [R + R1(x2)] x1
           ≤ −2x1^T R x1
           ≤ −2 min{k − 1, 1} ||x1||₂²
where
R = [ k − 1  0 ] > 0        R1(x2) = [ 1 − ½cos(x2)  0           ] ≥ 0
    [ 0      1 ]                     [ 0             1 + sin(x2) ]
it follows from Theorem (1.7.2) that the dynamical system is globally
exponentially stable with respect to x1 uniformly in x20.
1.8.2 Rephrasing the Partial Stability Theory for Nonlinear Time-Varying Systems
Consider the nonlinear time-varying dynamical system
ẋ = f(t, x(t)) , x(t0) = x0 , t ≥ t0     (1.69)
where x(t) ∈ D, D ⊂ Rⁿ with 0 ∈ D. Also, f : [t0, t1) × D → Rⁿ is such
that f(·, ·) is continuous in t and x, f(t, 0) = 0 for all t ∈ [t0, t1), and f(t, ·)
is locally Lipschitz in x uniformly in t for all t in compact subsets of [0, ∞).
Theorem 1.8.1. [4] The nonlinear time-varying dynamical system (1.69) is:
• Stable if, ∀ ε > 0 and t0 ∈ [0, ∞), ∃ δ = δ(ε, t0) > 0 such that ||x0|| <
δ implies that ||x(t)|| < ε, ∀ t ≥ t0.
• Uniformly stable if, ∀ ε > 0, ∃ δ = δ(ε) > 0 such that ||x0|| < δ
implies that ||x(t)|| < ε, ∀ t ≥ t0 and t0 ∈ [0, ∞).
• Asymptotically stable if, it is stable and ∀ t0 ∈ [0, ∞), ∃ δ =
δ(t0) > 0 such that ||x0|| < δ implies that lim_{t→∞} x(t) = 0.
• Uniformly asymptotically stable if, it is uniformly stable and ∃ δ >
0 such that ||x0|| < δ implies that lim_{t→∞} x(t) = 0 uniformly in x0 and
uniformly in t0; that is, ∀ ε > 0, ∃ T = T(ε) > 0 such that
||x(t)|| < ε, ∀ t ≥ t0 + T(ε), for all ||x(t0)|| < δ and t0 ∈ [0, ∞).
• Globally asymptotically stable if, it is Lyapunov stable and
lim_{t→∞} x(t) = 0, ∀ x0 ∈ Rⁿ and t0 ∈ [0, ∞).
• Globally uniformly asymptotically stable if, it is uniformly stable
and lim_{t→∞} x(t) = 0 uniformly in x0, ∀ x0 ∈ Rⁿ, and uniformly in t0; that
is, ∀ ε > 0, δ > 0, ∃ T = T(ε, δ) > 0 such that
||x(t)|| < ε, ∀ t ≥ t0 + T(ε, δ), for all ||x(t0)|| < δ and t0 ∈ [0, ∞).
• Exponentially stable if, ∃ α, β, δ > 0 such that ||x0|| < δ implies
that ||x(t)|| ≤ α||x0|| e^{−βt}, ∀ t ≥ t0 and t0 ∈ [0, ∞).
• Globally (uniformly) exponentially stable if, ∃ α, β > 0 such
that ||x(t)|| ≤ α||x0|| e^{−βt}, t ≥ t0, ∀ x0 ∈ Rⁿ and t0 ∈ [0, ∞).
Example 1.8.5. The linear first-order system
ẋ = (6t sin t − 2t)x
has the solution
x(t) = x(t0) exp(∫_{t0}^{t} (6τ sin τ − 2τ) dτ)
     = x(t0) exp(6 sin t − 6t cos t − t² − 6 sin t0 + 6t0 cos t0 + t0²)     (1.70)
Thus, for any fixed t0 and for all t ≥ t0, the term −t² in (1.70) will eventually
dominate, so the exponential term is bounded by a constant c(t0) that
depends on t0.
Hence,
|x(t)| < |x(t0)| c(t0) , ∀ t ≥ t0
So, for any ε > 0 there exists δ = ε/c(t0) such that |x0| < δ implies |x(t)| <
ε, ∀ t ≥ t0. Hence, the origin is stable.
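The boundedness claim can be probed numerically. This sketch scans the exponent φ(t) = 6 sin t − 6t cos t − t² on a grid and estimates c(t0) = sup over t ≥ t0 of exp(φ(t) − φ(t0)); it also shows c(t0) growing with t0, which is why the stability here is not uniform (grid resolution and horizon are arbitrary choices):

```python
import math

def phi(t):
    # exponent in (1.70), up to the constant phi(t0)
    return 6.0 * math.sin(t) - 6.0 * t * math.cos(t) - t * t

def c(t0, t_max=50.0, n=20_000):
    # grid estimate of sup over t in [t0, t_max] of exp(phi(t) - phi(t0))
    best = 0.0
    for i in range(n + 1):
        t = t0 + (t_max - t0) * i / n
        best = max(best, math.exp(phi(t) - phi(t0)))
    return best
```

Since phi(t) → −∞, the supremum over [t0, 50] already captures c(t0) here.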
Uniform stability and uniform asymptotic stability can be characterized
in terms of special scalar functions, known as class K and class KL functions.
Definition 1.8.1. A continuous function α : [0, ∞) → [0, ∞) is said to
belong to class K if it is
• strictly increasing
• α(0) = 0
and is said to belong to class K∞ if, in addition,
• α(r) → ∞ as r → ∞.
Definition 1.8.2. [4] A continuous function β(r, s) : [0, a) × [0, ∞) → [0, ∞)
is said to belong to class KL if
• for each fixed s, the mapping β(r, s) belongs to class K with respect to r
• for each fixed r, the mapping β(r, s) is decreasing with respect to s
• β(r, s) → 0 as s → ∞
Example 1.8.6.
• α(r) = tan⁻¹ r is strictly increasing, since α′(r) = 1/(1 + r²) > 0. It
belongs to class K but not to class K∞, since lim_{r→∞} α(r) = π/2 < ∞.
• α(r) = r^c, for any positive real number c, is strictly increasing, since
α′(r) = cr^{c−1} > 0. In addition, lim_{r→∞} α(r) = ∞; thus, it belongs to class
K∞.
• β(r, s) = r/(ksr + 1), for any positive real number k, is strictly increasing
in r, since ∂β/∂r = 1/(ksr + 1)² > 0, and strictly decreasing in s, since
∂β/∂s = −kr²/(ksr + 1)² < 0. In addition, β(r, s) → 0 as s → ∞. Hence, it
belongs to class KL.
• β(r, s) = r^c e^{−s}, for any positive real number c, belongs to class KL.
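These membership claims are easy to spot-check numerically (a quick sketch; the grids and constants are arbitrary):

```python
import math

def alpha(r):
    return math.atan(r)           # class K, but not K_infinity

def beta(r, s, k=1.0):
    return r / (k * s * r + 1.0)  # class KL candidate from Example 1.8.6

grid = [0.1 * i for i in range(101)]   # sample points r = 0.0, 0.1, ..., 10.0
```

Checking monotonicity on a grid is of course only evidence, not a proof; the derivatives computed above give the actual argument.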
Theorem 1.8.2. [4] Consider the time-varying dynamical system given by
(1.69). Then the following statements hold:
• The system is stable if there is a continuously differentiable function
V : [0, ∞) × D → R and a class K function α(·) such that
V(t, 0) = 0 , t ∈ [0, ∞)     (1.71)
α(||x||) ≤ V(t, x) , (t, x) ∈ [0, ∞) × D     (1.72)
V̇(t, x) ≤ 0 , (t, x) ∈ [0, ∞) × D     (1.73)
• The system is uniformly stable if there is a continuously
differentiable function V : [0, ∞) × D → R and class K functions α(·), β(·)
such that
V(t, x) ≤ β(||x||) , (t, x) ∈ [0, ∞) × D     (1.74)
and inequalities (1.72) and (1.73) are satisfied
• The system is asymptotically stable if there exist continuously
differentiable functions V : [0, ∞) × D → R and W : [0, ∞) × D → R and
class K functions α(·), β(·) and γ(·) such that
(A) Ẇ(·, x(·)) is bounded from below or above
(B) conditions (1.71) and (1.72) hold
(C)
W(t, 0) = 0 , t ∈ [0, ∞)     (1.75)
β(||x||) ≤ W(t, x) , (t, x) ∈ [0, ∞) × D     (1.76)
V̇(t, x) ≤ −γ(W(t, x)) , (t, x) ∈ [0, ∞) × D     (1.77)
• The system is uniformly asymptotically stable if there exist a
continuously differentiable function V : [0, ∞) × D → R and class
K functions α(·), β(·) and γ(·) satisfying (1.72), (1.74), and
V̇(t, x) ≤ −γ(||x||) , (t, x) ∈ [0, ∞) × D     (1.78)
• The system is globally asymptotically stable if D = Rⁿ and there
exist continuously differentiable functions V : [0, ∞) × D → R and
W : [0, ∞) × D → R, class K functions β(·), γ(·) and a class
K∞ function α(·) such that
(A) Ẇ(·, x(·)) is bounded from below or above
(B) conditions (1.71), (1.72) and (1.75) to (1.77) hold
• The system is globally uniformly asymptotically stable if D =
Rⁿ and there exist a continuously differentiable function V : [0, ∞) ×
D → R, a class K function γ(·) and class K∞ functions α(·), β(·)
satisfying (1.72), (1.74), and (1.78)
• The system is (uniformly) exponentially stable if there exist a
continuously differentiable function V : [0, ∞) × D → R and positive
constants α, β, γ, p with p ≥ 1 such that
α||x||^p ≤ V(t, x) ≤ β||x||^p , (t, x) ∈ [0, ∞) × D     (1.79)
V̇(t, x) ≤ −γ||x||^p , (t, x) ∈ [0, ∞) × D     (1.80)
• The system is globally (uniformly) exponentially stable if D =
Rⁿ and there exist a continuously differentiable function V : [0, ∞) ×
D → R and positive constants α, β, γ, p with p ≥ 1 satisfying
(1.79) and (1.80).
Proof. Let n1 = n and n2 = 1. Set x1(τ) = x(t) and x2(τ) = t, where
τ = t − t0 ≥ 0, and let f1(x1, x2) = f(t, x) = f(x2, x1) and f2(x1, x2) = 1.
The solution x(t), t ≥ t0, to the nonlinear time-varying dynamical system
(1.69) is equivalently characterized by the solution x1(τ), τ ≥ 0, to the
nonlinear autonomous dynamical system
ẋ1 = f1(x1(τ), x2(τ)) , x1(0) = x0 , τ ≥ 0     (1.81)
ẋ2 = 1 , x2(0) = t0     (1.82)
where ẋ1(·) and ẋ2(·) denote differentiation with respect to τ. Note that since
f(t, 0) = 0, t ≥ 0, it follows that f1(0, x2) = 0, ∀ x2 ∈ R^{n2}. The result is
then a direct consequence of Theorem (1.7.2).
Lemma 1.8.1. The equilibrium point x = 0 of equation (1.69) is
• Uniformly stable if and only if there exist a class K function α(·)
and a positive constant δ, independent of t0, such that
||x(t)|| ≤ α(||x(t0)||) , ∀ t ≥ t0 ≥ 0 , ∀ ||x(t0)|| < δ     (1.83)
• Uniformly asymptotically stable if and only if there exist a class
KL function β(r, s) and a positive constant δ, independent of t0, such
that
||x(t)|| ≤ β(||x(t0)||, t − t0) , ∀ t ≥ t0 ≥ 0 , ∀ ||x(t0)|| < δ     (1.84)
• Globally uniformly asymptotically stable if and only if inequality
(1.84) is satisfied for any initial state x(t0 ).
Remark:
A special case of uniform asymptotic stability arises when the class KL function β in (1.84) takes the form
β(r, s) = kr e^{−γs}
This case is very important and will be designated as a distinct stability
property of equilibrium points.
Theorem 1.8.3. The equilibrium point x = 0 of (1.69) is exponentially
stable if inequality (1.84) is satisfied with
β(r, s) = kr e^{−γs} , k > 0 , γ > 0
and is globally exponentially stable if this condition is satisfied for any
initial state.
So, to establish uniform asymptotic stability of the origin, we need to
verify inequality (1.84).
Lemma 1.8.2. Let V(x) : D → R be a continuous positive definite function
defined on a domain D ⊂ Rⁿ that contains the origin. Let Br ⊂ D for some
r > 0. Then, there exist class K functions α1, α2, defined on [0, r], such that
α1(||x||) ≤ V(x) ≤ α2(||x||) , ∀ x ∈ Br
Moreover, if D = Rⁿ and V(x) is radially unbounded, then α1 and α2
can be chosen to belong to class K∞ and the foregoing inequality holds for all
x ∈ Rⁿ.
Remark:
For the quadratic positive definite function V(x) = x^T P x, the inequality of
Lemma (1.8.2) becomes
λmin(P)||x||₂² ≤ x^T P x ≤ λmax(P)||x||₂²
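For a concrete 2×2 case the remark can be verified directly; the matrix P below is an arbitrary positive definite example, and the eigenvalues come from the closed form for symmetric 2×2 matrices:

```python
import math

a, b, d = 3.0, 1.0, 2.0            # P = [[3, 1], [1, 2]], positive definite

def eig_sym2(a, b, d):
    # eigenvalues of the symmetric matrix [[a, b], [b, d]]
    tr, det = a + d, a * d - b * b
    disc = math.sqrt(tr * tr - 4.0 * det)
    return (tr - disc) / 2.0, (tr + disc) / 2.0

def quad(x1, x2):
    # x^T P x
    return a * x1 * x1 + 2.0 * b * x1 * x2 + d * x2 * x2

lmin, lmax = eig_sym2(a, b, d)
checks = []
for i in range(100):
    th = 2.0 * math.pi * i / 100
    x1, x2 = math.cos(th), math.sin(th)   # unit vectors: ||x||_2 = 1
    checks.append(quad(x1, x2))
```

On the unit circle the quadratic form sweeps out exactly the interval [λmin, λmax].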
Theorem 1.8.4. [4] Let x = 0 be an equilibrium point for (1.69) and D ⊂ Rⁿ
be a domain containing x = 0. Let V : [0, ∞) × D → R be a continuously
differentiable function such that
W1(x) ≤ V(t, x) ≤ W2(x)     (1.85)
∂V/∂t + (∂V/∂x) f(t, x) ≤ −W3(x)     (1.86)
∀ t ≥ 0, ∀ x ∈ D, where W1(x), W2(x) and W3(x) are continuous positive
definite functions on D. Then, x = 0 is uniformly asymptotically stable.
A function V(t, x) satisfying the left inequality of (1.85) is said to be
positive definite. A function satisfying the right inequality of (1.85) is
said to be decrescent. A function V(t, x) is said to be negative definite
if −V(t, x) is positive definite.
Therefore, Theorem (1.8.4) states that the origin is uniformly asymptotically
stable if there is a continuously differentiable, positive definite, decrescent
function V(t, x) whose derivative along the trajectories of the system is
negative definite. In this case, V(t, x) is called a Lyapunov function.
Corollary 1.8.1. Suppose that all the assumptions of Theorem (1.8.4) are
satisfied globally (∀ x ∈ Rⁿ) and W1(x) is radially unbounded. Then, x = 0
is globally uniformly asymptotically stable.
Example 1.8.7. Consider the system
ẋ1 = −x1 − g(t)x2
ẋ2 = x1 − x2
where g(t) is continuously differentiable and satisfies
0 ≤ g(t) ≤ k , ġ(t) ≤ g(t) , ∀ t ≥ 0
Take V(t, x) = x1² + [1 + g(t)]x2² as a Lyapunov function candidate. We
see that
x1² + x2² ≤ V(t, x) ≤ x1² + (1 + k)x2² , ∀ x ∈ R²
Hence, V(t, x) is positive definite, decrescent, and radially unbounded.
The derivative of V(t, x) along the trajectories of the system is given by
V̇(t, x) = −2x1² + 2x1x2 − [2 + 2g(t) − ġ(t)]x2²
Using the inequality
2 + 2g(t) − ġ(t) ≥ 2 + 2g(t) − g(t) ≥ 2
we obtain
V̇(t, x) ≤ −2x1² + 2x1x2 − 2x2² ≝ −x^T P x , where
P = [ 2  −1 ]
    [ −1  2 ]
Since P is positive definite, V̇(t, x) is negative definite. Thus, all the
assumptions of Theorem (1.8.4) are satisfied globally with quadratic positive
definite functions W1(x), W2(x) and W3(x), so the origin is globally
exponentially stable.
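A numerical sketch of Example 1.8.7 with the hypothetical choice g(t) ≡ k = 1 (which satisfies 0 ≤ g ≤ k and ġ = 0 ≤ g, and is not a choice made in the text): V should decrease at every step and the state should converge to the origin.

```python
k = 1.0

def g(t):
    return k                      # constant gain: 0 <= g <= k and gdot = 0 <= g

def V(t, x1, x2):
    return x1 * x1 + (1.0 + g(t)) * x2 * x2

t, x1, x2, dt = 0.0, 1.0, 1.0, 1e-3
vs = [V(t, x1, x2)]
for _ in range(20_000):           # forward Euler up to t = 20
    x1, x2 = x1 + dt * (-x1 - g(t) * x2), x2 + dt * (x1 - x2)
    t += dt
    vs.append(V(t, x1, x2))
```

The strict negative definiteness of V̇ dominates the Euler discretization error at this step size, so V is monotone along the computed trajectory as well.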
Example 1.8.8. The linear time-varying system
ẋ = A(t)x(t)     (1.87)
has an equilibrium point at x = 0. Let A(t) : [0, ∞) → R^{n×n} be continuous
∀ t ≥ 0.
To examine the stability, consider a Lyapunov function candidate
V(t, x) = x^T P(t)x
where P(t) : [0, ∞) → R^{n×n} is a continuously differentiable, symmetric,
bounded, positive definite matrix; that is, P(t) > 0, ∀ t ≥ 0, and
0 < αI ≤ P(t) ≤ βI , ∀ t ≥ 0     (1.88)
Also, we require P(t) to satisfy the matrix differential equation
−Ṗ(t) = P(t)A(t) + A^T(t)P(t) + Q(t)     (1.89)
where Q(·) is continuous and satisfies
Q(t) ≥ γI > 0 , ∀ t ≥ 0
Hence, the function V(t, x) is positive definite, decrescent and radially
unbounded, since
α||x||₂² ≤ V(t, x) ≤ β||x||₂²
The derivative of V(t, x) along the trajectories of the system (1.87) is given
by
V̇(t, x) = x^T Ṗ(t)x + x^T P(t)ẋ + ẋ^T P(t)x
        = x^T [Ṗ(t) + P(t)A(t) + A^T(t)P(t)]x = −x^T Q(t)x ≤ −γ||x||₂²
Hence, V̇(t, x) is negative definite and all assumptions of Theorem (1.8.2) are
satisfied globally with p = 2. Therefore, the origin is globally exponentially
stable.
Hence, a sufficient condition for global exponential stability of a linear time-varying
system is the existence of a continuously differentiable, bounded and
positive definite matrix function P : [0, ∞) → R^{n×n} satisfying (1.89).
Example 1.8.9. Consider the linear time-varying dynamical system
ẋ1 = −x1 − e−t x2 , x1(0) = x10 , ∀t ≥ 0
(1.90)
ẋ2 = x1 − x2 , x2(0) = x20
(1.91)
To examine stability of this system, consider the Lyapunov function candidate
V (t, x) = x1² + (1 + e−t )x2²
Since,
x1² + x2² ≤ V (t, x) ≤ x1² + 2x2² , (x1 , x2 ) ∈ R × R , t ≥ 0
it follows that V (t, x) is positive definite, radially unbounded and satisfies
(1.72) and (1.74). Since
V̇ (t, x) = −2x1² + 2x1x2 − 2x2² − 3e−t x2²
(1.92)
≤ −2x1² + 2x1x2 − 2x2²
(1.93)
= −xT Rx
(1.94)
≤ −λmin (R)||x||₂²
(1.95)
where
R = [ 2 −1 ; −1 2 ] > 0
and x = [x1 x2 ]T , it follows from Theorem 1.8.2 with p = 2 that the zero
solution (x1 (t), x2 (t)) ≡ (0, 0) to the dynamical system is globally exponentially stable.
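The conclusion can be corroborated numerically. The following sketch (not part of the thesis; the step size, horizon, and initial state are ad hoc choices) integrates (1.90)–(1.91) with a forward-Euler scheme and confirms that the norm of the state collapses:

```python
import numpy as np

# Forward-Euler integration of (1.90)-(1.91):
#   x1' = -x1 - exp(-t)*x2 ,  x2' = x1 - x2
def simulate(x0, T=10.0, h=1e-3):
    x = np.array(x0, dtype=float)
    t = 0.0
    for _ in range(int(T / h)):
        dx = np.array([-x[0] - np.exp(-t) * x[1], x[0] - x[1]])
        x = x + h * dx
        t += h
    return x

x0 = np.array([1.0, -2.0])
xT = simulate(x0)
# exponential stability: ||x(10)|| should be a tiny fraction of ||x(0)||
ratio = np.linalg.norm(xT) / np.linalg.norm(x0)
```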
1.9 Stability Theorems for Linear Time-Varying Systems and Linearization
Consider the linear time-varying system
ẋ(t) = A(t)x
(1.96)
We know from linear system theory that the solution of (1.96) is given by
x(t) = Φ(t, t0 ) x(t0 )
where Φ(t, t0 ) is called the state transition matrix.
Theorem 1.9.1. [1] The equilibrium point x = 0 of (1.96) is globally uniformly asymptotically stable if and only if the state transition matrix satisfies
the inequality
||Φ(t, t0 )|| ≤ k e−γ(t−t0 ) , ∀ t ≥ t0 ≥ 0
(1.97)
for some positive constants k and γ.
Theorem (1.9.1) shows that, for linear systems, uniform asymptotic
stability of the origin is equivalent to exponential stability.
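For an LTI system ẋ = Ax the transition matrix is Φ(t, t0 ) = e^{A(t−t0)}, so (1.97) can be checked directly. The sketch below (the matrix A and the envelope constants k, γ are ad hoc choices, not from the thesis) evaluates a truncated-series matrix exponential against the exponential bound:

```python
import numpy as np

# A = [[0,1],[-2,-3]] is Hurwitz with eigenvalues -1 and -2,
# so Phi(t,0) = exp(At) must admit a bound k*exp(-gamma*t).
A = np.array([[0.0, 1.0], [-2.0, -3.0]])

def expm(M, terms=60):
    # Truncated power series for exp(M); adequate for these
    # small matrices and moderate times.
    out = np.eye(M.shape[0])
    term = np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

ts = [0.0, 1.0, 2.0, 4.0]
norms = [np.linalg.norm(expm(A * t), 2) for t in ts]
# Check loosely against the envelope with k = 5, gamma = 0.5 (ad hoc)
envelope_ok = all(n <= 5.0 * np.exp(-0.5 * t) for n, t in zip(norms, ts))
```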
We have seen in Example (1.8.8) that, if we can find a positive definite,
bounded matrix P (t) which satisfies the differential equation (1.89) for some
positive definite Q(t) then
V (t, x) = xT P x
is a Lyapunov function candidate for the system.
If the matrix Q(t) is chosen to be bounded in addition to being positive
definite; that is,
0 < c3 I ≤ Q(t) ≤ c4 I , ∀t ≥ 0
and if A(t) is continuous and bounded, then it can be shown that when the
origin is uniformly asymptotically stable, there is a solution of (1.89) that
possesses the desired properties.
Theorem 1.9.2. Let x = 0 be the uniformly asymptotically stable equilibrium
of (1.96). Suppose A(t) is continuous and bounded. Let Q(t) be a continuous,
bounded, positive definite, symmetric matrix. Then, there is a continuously
differentiable, bounded, positive definite, symmetric matrix P (t) which satisfies (1.89). Hence, V (t, x) = xT P (t)x is a Lyapunov function for the system
that satisfies the conditions of Theorem (1.8.4).
Example 1.9.1. Consider the second order linear system
A(t) = [ −1 + 1.5 cos² t   1 − 1.5 sin t cos t ; −1 − 1.5 sin t cos t   −1 + 1.5 sin² t ]
The eigenvalues of A(t) are −0.25 ± i 0.25√7, independent of t, and they lie
in the open left-half plane. Although this suggests that the system should be
uniformly asymptotically stable, the stability of a time-varying system cannot
be characterized by the locations of the eigenvalues of the matrix A(t). To
verify that the origin is in fact unstable, note that the state transition matrix
Φ(t, 0) = [ e0.5t cos t   e−t sin t ; −e0.5t sin t   e−t cos t ]
(1.98)
shows that there are initial states for which the solution is unbounded
and escapes to infinity.
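A short numerical check (a sketch, not part of the thesis) confirms both halves of the example: the frozen-time eigenvalues sit at real part −0.25 for every t, yet the transition matrix (1.98) grows without bound:

```python
import numpy as np

# Frozen-time matrix A(t) from Example 1.9.1
def A(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[-1 + 1.5 * c**2, 1 - 1.5 * s * c],
                     [-1 - 1.5 * s * c, -1 + 1.5 * s**2]])

# Eigenvalues are -0.25 +/- i*0.25*sqrt(7) for every t
real_parts = [np.linalg.eigvals(A(t)).real for t in (0.0, 0.7, 1.3, 2.9)]
frozen_stable = all(np.allclose(rp, -0.25) for rp in real_parts)

# Yet the transition matrix (1.98) contains the factor exp(0.5*t):
def Phi(t):
    return np.array([[np.exp(0.5 * t) * np.cos(t), np.exp(-t) * np.sin(t)],
                     [-np.exp(0.5 * t) * np.sin(t), np.exp(-t) * np.cos(t)]])

growth = np.linalg.norm(Phi(20.0)) / np.linalg.norm(Phi(0.0))
```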
Corollary 1.9.1. It is important to note that it is not always necessary to
construct a time-varying Lyapunov function to show stability for a nonlinear
time-varying dynamical system.
Example 1.9.2. Consider the nonlinear time-varying dynamical system
ẋ1 = −x1³ + (sin ωt)x2 , x1(0) = x10 , t ≥ 0
(1.99)
ẋ2 = −(sin ωt)x1 − x2³ , x2(0) = x20
(1.100)
To show that the origin is globally uniformly asymptotically stable, consider the time-invariant Lyapunov function candidate
V (x) = (1/2)(x1² + x2²)
Clearly, V (x), x ∈ R², is positive definite and radially unbounded. Furthermore,
V̇ (x) = x1 [−x1³ + (sin ωt)x2 ] + x2 [−(sin ωt)x1 − x2³]
(1.101)
= −x1⁴ − x2⁴
(1.102)
< 0 , (x1 , x2 ) ∈ R × R , (x1 , x2 ) ≠ (0, 0)
(1.103)
which shows that the zero solution (x1 (t), x2 (t)) ≡ (0, 0) to the above dynamical system is globally uniformly asymptotically stable.
1.9.1 Applying Lyapunov's Indirect Method to Stabilize a Nonlinear Controlled Dynamical System
Consider the nonlinear controlled system
ẋ(t) = F (x(t), u(t)) , x(0) = x0 , t ≥ 0
(1.104)
where F : Rn × Rm 7→ Rn and u(t) is the feedback controller. We seek feedback
controllers of the form u(t) = φ(x(t)), where φ : Rn 7→ Rm with φ(0) = 0, such
that the zero solution x(t) ≡ 0 of the closed-loop system given by
ẋ = F (x, φ(x)) , x(0) = x0 , t ≥ 0
(1.105)
is asymptotically stable.
Theorem 1.9.3. [1] Consider the nonlinear controlled system (1.104), where
F : Rn × Rm 7→ Rn is a continuously differentiable function and F (0, 0) = 0.
Define
A = ∂F (x, u)/∂x |(x,u)=(0,0) , B = ∂F (x, u)/∂u |(x,u)=(0,0)
and assume that the pair (A, B) is stabilizable. Then there exists a matrix
K ∈ Rm×n such that A + BK is asymptotically stable. It follows that, if the
linearization of (1.104) is stabilizable, then for every Q > 0 there exists a
positive-definite matrix P ∈ Rn×n satisfying the Lyapunov equation
(A + BK)T P + P (A + BK) = −Q
and the quadratic function V (x) = xT P x is a Lyapunov function for the nonlinear closed-loop system (1.105) that guarantees local asymptotic stability.
Proof. With the feedback u = Kx, the closed-loop system
ẋ(t) = F (x(t), Kx(t)) , x(0) = x0 , t ≥ 0
has the form
ẋ(t) = F (x(t), Kx(t)) = f (x(t)) , x(0) = x0 , t ≥ 0
It follows that
∂f /∂x |x=0 = [ ∂F/∂x (x, Kx) + ∂F/∂u (x, Kx) K ] |x=0 = A + BK
Since (A, B) is stabilizable, K can be chosen such that A + BK is asymptotically stable.
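As an illustration of this construction, consider the hypothetical plant F (x, u) = (x2 , sin x1 + u), a pendulum-like system chosen here for illustration only; it does not appear in the thesis, and the gain K is an ad hoc choice. Linearizing at the origin gives A = [ 0 1 ; 1 0 ] and B = [ 0 ; 1 ]; K = [ −2 −2 ] renders A + BK Hurwitz, and P is obtained from the Lyapunov equation:

```python
import numpy as np

# Hypothetical plant F(x,u) = (x2, sin(x1) + u); linearization at (0,0):
A = np.array([[0.0, 1.0], [1.0, 0.0]])
B = np.array([[0.0], [1.0]])
K = np.array([[-2.0, -2.0]])      # ad hoc stabilizing gain
Acl = A + B @ K                   # closed-loop matrix, should be Hurwitz

def solve_lyapunov(A, Q):
    # Solve A^T P + P A = -Q via vectorization:
    # (I kron A^T + A^T kron I) vec(P) = vec(-Q)
    n = A.shape[0]
    M = np.kron(np.eye(n), A.T) + np.kron(A.T, np.eye(n))
    vecP = np.linalg.solve(M, (-Q).flatten(order="F"))
    return vecP.reshape((n, n), order="F")

Q = np.eye(2)
P = solve_lyapunov(Acl, Q)
hurwitz = bool(np.all(np.linalg.eigvals(Acl).real < 0))
P_posdef = bool(np.all(np.linalg.eigvals((P + P.T) / 2) > 0))
residual = np.linalg.norm(Acl.T @ P + P @ Acl + Q)
```

V (x) = xT P x then certifies local asymptotic stability of the nonlinear closed loop, as the theorem states.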
Chapter 2
Stability of Switched Systems
2.1 Introduction [2]
A switched system is a dynamical system in which switching plays a nontrivial role. The system consists of continuous states that take values in
a vector space and discrete states that take values in a discrete index set.
The interaction between the continuous and discrete states leads to switched
systems described by
ẋ(t) = fσ(t) (x(t)) , where fp : Rn 7→ Rn , σ(t) ∈ P = {1, 2, · · · }
2.2 Classes of Hybrid and Switched Systems
Hybrid systems combine continuous and discrete dynamics that together
form a complete dynamical system.
Many researchers in control theory regard hybrid systems as continuous
systems with switching.
Consider the motion of an automobile, which might take the form
ẋ1 = x2
ẋ2 = f (a, q)
where x1 is the position, x2 is the velocity, a ≥ 0 is the acceleration input
and q ∈ {1, 2, 3, 4, 5, −1, 0} is the gear shift position. Here x1 and x2 are the
continuous states and q is the discrete state. In case of a manual transmission,
the discrete transitions affect the continuous trajectory. For example, when
q = −1, the function f (a, q) should be decreasing in a (the acceleration), and vice versa. In the case of an automatic transmission, the evolution of the continuous
state x2 is in turn used to determine the discrete transitions.
Continuous-time systems with isolated discrete switching events are called
switched systems.
Switching events in switched systems can be classified into
• State-Dependent versus Time-Dependent switching.
• Autonomous (Uncontrolled) switching versus Controlled switching.
2.2.1 State-Dependent Switching
In this situation, the continuous state space is partitioned by a family of
switching surfaces into a finite or infinite number of operating regions. In
each of these regions, a continuous-time dynamical system governs the motion.
Whenever the system trajectory hits a switching surface, the state switches
instantaneously to the dynamical system associated with the new region, as
depicted in Figure 2.1.
Figure 2.1: State-dependent switching, the thick curves denote the switching
surface and the thin curves with arrows denote the continuous portions of
the trajectory
2.2.2 Time-Dependent Switching
Suppose that we have a family of systems
ẋ = fi (x) , i ∈ P
(2.1)
Assume fi : Rn 7→ Rn , i ∈ P; the functions fi are assumed to be at least
locally Lipschitz, and P is a finite index set: P = {1, 2, · · · , m}.
To define a switched system generated by the above family, we need what
is called a switching signal. This is a piecewise constant function σ : [0, ∞) 7→
P, with finitely many discontinuities on every bounded time interval, that
specifies which subsystem from the family (2.1) is active on each interval
between two consecutive switching times, as depicted in Figure 2.2.
Hence, the switched system with time-dependent switching can be described
by
ẋ(t) = fσ(t) (x(t))
or, for short,
ẋ = fσ (x)
(2.2)
and in case of a switched linear system:
ẋ = Aσ x.
(2.3)
Figure 2.2: Switching signal σ(t) : [0, ∞) 7→ P
2.2.3 Stability to Instability in Switched Systems and Vice-Versa
Consider the following switched system
ẋ = fσ(t) (x(t)) , σ(t) = p ∈ P = {1, 2}
and assume that the two subsystems are asymptotically stable, with trajectories
as depicted in Figure 2.3.
Figure 2.3: the two individual subsystems are asymptotically stable
If we apply a switching signal σ(t), then the switched system might
be asymptotically stable, as shown on the left in Figure 2.4, or no longer
asymptotically stable (unstable), as shown on the right in Figure 2.4.
Figure 2.4: Switching between stable subsystems
Similarly, when both subsystems are unstable, the switched system
may be either asymptotically stable or unstable depending on the strategy
of the switching signal, as shown in Figure 2.5.
Figure 2.5: Switching between unstable subsystems
Hence, we conclude that an unconstrained switching signal can force the trajectories of such a system to diverge, destabilizing
the switched system even if all of the subsystems are stable.
2.2.4 Autonomous Switching Versus Controlled Switching
Autonomous switching is a situation where we have no direct control over
the switching mechanism, as in systems with state-dependent switching
in which the rule that defines the switching signal is predetermined by the
switching surfaces.
In contrast, controlled switching is imposed by the designer: direct control
is applied over the switching mechanism in order to achieve a desired behaviour
of the system. For example, if we are given a system controlled through
autonomous switching and the process suddenly encounters some failure, then
it may be necessary to design a logic-based mechanism for detecting and
correcting the fault.
Definition 2.2.1. [2] A switched system with controlled time-dependent switching can be described by recasting the switched system (2.2) into
ẋ = Σ_{i=1}^{m} fi (x) ui
where i ∈ P = {1, 2, . . . , m} and, when subsystem k is active, the controls are
defined as
uk = 1 , ui = 0 ∀ i ≠ k
In the case of a switched linear system,
ẋ = Σ_{i=1}^{m} Ai x ui
2.3 Sliding and Hysteresis Switching
Consider a switched system
ẋ = fi (x)
with state-dependent switching described by a single switching surface S and
i ∈ P = {1, 2}. If we assume that the continuous trajectory hits the surface
S, it crosses over to the other side. This will indeed be true if:
• The trajectory hits the surface at the corresponding point x ∈ S.
• Both the vectors f1 (x) , f2 (x) point in the same direction as depicted
in Figure 2.6.
Thus, a solution is then obtained depending on which side of S the trajectory
lies on.
Figure 2.6: Crossing a switching surface
Definition 2.3.1. A sliding mode occurs when, once the trajectory hits the
surface S, it cannot leave the surface because both vectors f1 (x), f2 (x)
point toward S, as depicted in Figure 2.7. Therefore, the only possible
solution of the switched system is to slide on the surface S in the sense of
Filippov.
Figure 2.7: a sliding mode
Definition 2.3.2 (Filippov's Definition). [2]
According to Filippov's concept, the solution x(·) of the switched system is
obtained by including all the convex combinations of the vectors f1 (x), f2 (x),
such that
ẋ ∈ F (x) , ∀x ∈ S
where F (x) is defined as
F (x) := {αf1 (x) + (1 − α)f2 (x) : α ∈ [0, 1]}
If x ∉ S, we simply set F (x) = {f1 (x)} or F (x) = {f2 (x)} depending on which
side of the surface S the point x lies on.
Note that in Figure 2.7, the tangent to the surface S at the point x is a
unique convex combination of the two vectors f1 (x) and f2 (x). Sometimes,
sliding mode can be interpreted as fast switching, or chattering.
Usually, this phenomenon is undesirable in mathematical models of real systems because, in electronics for example, it leads to low control accuracy and
high heat losses in power circuits. Avoiding chattering requires maintaining
the property that two consecutive switching events are always separated by
a time interval of positive length [2]. Fast switching can also be avoided with
the help of what is called a hysteresis switching strategy.
Definition 2.3.3 (Hysteresis Switching).
This strategy generates a piecewise constant signal σ that changes its value
depending on the current value of x and the previous value of σ.
Hysteresis switching strategy can be formalized by introducing a
discrete state σ which is described as follows. First, we have to construct two
regions Ω1 and Ω2 by offsetting the original switching surface; thus, new
switching surfaces S1 and S2 are obtained, as shown in Figure 2.8.
Figure 2.8: Hysteresis switching regions
Consider a switched system
ẋ = fi (x) ,
i ∈ P = {1, 2}
Assume that the subsystem ẋ = f1 (x) is active in Ω1 and ẋ = f2 (x) is active
in Ω2 . Switching events occur when the trajectory hits one of the switching
surfaces S1 or S2 .
Second, at the initial state let σ(0) = 1 if x(0) ∈ Ω1 and σ(0) = 2 if x(0) ∈
Ω2 .
The strategy now is to define the switching signal σ(t) as follows: keep
σ(t) = 1 while x(t) ∈ Ω1 , and keep σ(t) = 2 while x(t) ∈ Ω2 . On the other
hand, if σ(t− ) = 1 but x(t) ∉ Ω1 , then let σ(t) = 2, and vice-versa. Following
this procedure, the generated switching signal σ avoids chattering, with a
typical solution trajectory as shown in Figure 2.9.
Figure 2.9: Hysteresis with a typical trajectory
2.4 Stability Under Arbitrary Switching
Definition 2.4.1. [2] Consider the switched system
ẋ = fσ(t) (x)
(2.4)
• If there exists a positive constant δ and a class KL function β such that
for all switching signals σ, the solutions of (2.4) satisfy
|x(0)| ≤ δ ⇒ |x(t)| ≤ β(|x(0)|, t) ∀t ≥ 0
(2.5)
then the switched system is uniformly asymptotically stable at the
origin.
• If the inequality (2.5) is valid for all switching signals and all initial
conditions, we obtain global uniform asymptotic stability (GUAS).
• If β takes the form β(r, s) = k r e−λs for some k, λ > 0, so that the
solutions satisfy the inequality
|x(t)| ≤ k|x(0)|e−λt ∀t ≥ 0
(2.6)
then the system (2.4) is uniformly exponentially stable.
Moreover, if the inequality (2.6) is valid for all switching signals and all
initial conditions, we obtain global uniform exponential stability (GUES).
Remark:
The term uniform here refers to uniformity with respect to switching signals.
In general, if we have a Lyapunov function candidate V whose rate of
decrease along solutions is not affected by switching, then the stability of the
switched system is uniform with respect to σ(t).
2.4.1 Commuting Linear Systems
Definition 2.4.2. [2] Two matrices A1 and A2 are said to commute if and
only if
A1 A2 = A2 A1
and we often write this condition as
[A1 , A2 ] = 0
where the commutator, or Lie bracket [·, ·], is defined as
[A1 , A2 ] := A1 A2 − A2 A1
(2.7)
Moreover, if A1 and A2 commute, we have
eA1 eA2 = eA2 eA1
(2.8)
eA1 eA2 = eA1 +A2
(2.9)
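These identities are easy to verify numerically for a commuting pair. The sketch below (the matrices and the truncated-series exponential are ad hoc choices, not from the thesis) checks the commutator together with (2.8) and (2.9):

```python
import numpy as np

def expm(M, terms=40):
    # Truncated power series for exp(M); fine for these small matrices.
    out, term = np.eye(M.shape[0]), np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

# A2 = A1 + I is a polynomial in A1, so [A1, A2] = 0.
A1 = np.array([[-1.0, 1.0], [0.0, -1.0]])
A2 = A1 + np.eye(2)

bracket_zero = np.allclose(A1 @ A2 - A2 @ A1, 0.0)
eq_2_8 = np.allclose(expm(A1) @ expm(A2), expm(A2) @ expm(A1))  # (2.8)
eq_2_9 = np.allclose(expm(A1) @ expm(A2), expm(A1 + A2))        # (2.9)
```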
2.4.2 Common Lyapunov Function (CLF)
Consider the switched system
ẋ(t) = Aσ(t) x(t) , x(t0 ) = x0 , σ(t) ∈ {1, 2, · · · , N }
(2.10)
where x(t) ∈ Rn and the matrix Aσ(t) switches between the stable matrices
A1 , A2 , · · · , AN .
If the subsystems of (2.10) share a common quadratic Lyapunov function
of the form V (x) = xT P x with a negative definite time derivative along the
trajectories of the system (2.10) (i.e., V̇ (x) < 0), then the system (2.10) is
exponentially stable for any arbitrary switching sequence.
Theorem 2.4.1. [13] Consider the switching system (2.10) with Aσ(t) ∈ {A1 , A2 }.
Assume that
• A1 , A2 are asymptotically stable matrices.
• A1 and A2 commute.
Then
(1) The system is exponentially stable under any arbitrary switching sequence between A1 and A2 .
(2) For a given symmetric positive definite matrix P0 , let P1 , P2 be the unique
symmetric positive definite solutions to the Lyapunov equations
AT1 P1 + P1 A1 = − P0
(2.11)
AT2 P2 + P2 A2 = − P1
(2.12)
then the function V (x) = xT P2 x is a common Lyapunov function
(CLF) for both of the individual systems ẋ(t) = Ai x , i = 1, 2, and hence
a Lyapunov function for the switching system (2.10).
(3) For a given choice of the matrix P0 , the matrices A1 , A2 can be taken
in either order in (2.11), (2.12) and still yield the same solution P2 ; that is, if
AT2 P3 + P3 A2 = −P0
(2.13)
AT1 P2 + P2 A1 = −P3
(2.14)
then the solution P2 of (2.14) coincides with the solution P2 of (2.12).
(4) The matrix P2 can be expressed in integral form as
P2 = ∫₀^∞ e^{A2ᵀ t} [ ∫₀^∞ e^{A1ᵀ τ} P0 e^{A1 τ} dτ ] e^{A2 t} dt
= ∫₀^∞ e^{A1ᵀ t} [ ∫₀^∞ e^{A2ᵀ τ} P0 e^{A2 τ} dτ ] e^{A1 t} dt
Proof.
(1) If A1 , A2 commute, then eA1 t eA2 τ = eA2 τ eA1 t . To prove exponential
stability, let V (x) = xT P2 x. The derivative of V along the trajectories
of the system ẋ(t) = A2 x is given by
V̇ = xT (AT2 P2 + P2 A2 )x = −xT P1 x < 0
which means V is a Lyapunov function for this system.
Again, the derivative of V along the trajectories of the system ẋ(t) =
A1 x is given by
V̇ = xT (AT1 P2 + P2 A1 )x
We need to prove that this is negative definite. Substituting for P1
from (2.12) into (2.11) and using the commutativity of A1 and A2 , we
obtain
P0 = AT1 (AT2 P2 + P2 A2 ) + (AT2 P2 + P2 A2 )A1
= AT2 (AT1 P2 + P2 A1 ) + (AT1 P2 + P2 A1 )A2
(2.15)
Since A2 is stable and P0 > 0, equation (2.15) implies that AT1 P2 + P2 A1 < 0.
Finally, the derivative of V = xT P2 x along the trajectories of the system
(2.10) is given by
V̇ = xT (AT (t)P2 + P2 A(t))x
= xT (AT2 P2 + P2 A2 )x = −xT P1 x < 0 , if A(t) = A2
= xT (AT1 P2 + P2 A1 )x < 0 , if A(t) = A1
Hence, V is a Lyapunov function for the switching system.
(2) Since P3 is the solution of (2.13), P3 is positive definite, and there
is a unique positive definite solution P4 to the Lyapunov equation
AT1 P4 + P4 A1 = −P3
(2.16)
Using (2.13) and (2.16), and since A1 and A2 commute, it follows that
P0 = AT1 (AT2 P4 + P4 A2 ) + (AT2 P4 + P4 A2 )A1
(2.17)
Equating (2.17) with (2.15), and using the uniqueness of the solution of a
Lyapunov equation with stable A1 , yields
AT2 P4 + P4 A2 = AT2 P2 + P2 A2
Hence,
AT2 (P2 − P4 ) + (P2 − P4 )A2 = 0
and, since A2 is stable, P2 = P4 . Thus, the two orderings (2.11)–(2.12) and
(2.13)–(2.14) yield the same solution P2 .
(3) The solution to the Lyapunov equation (2.11) is known to be
P1 = ∫₀^∞ e^{A1ᵀ τ} P0 e^{A1 τ} dτ
Hence, the solution P2 to the Lyapunov equation (2.12) is
P2 = ∫₀^∞ e^{A2ᵀ t} [ ∫₀^∞ e^{A1ᵀ τ} P0 e^{A1 τ} dτ ] e^{A2 t} dt
Example 2.4.1. Consider the switched system
ẋ = Ap x , p ∈ P = {1, 2}
where A1 = [ −1 0 ; 0 −1 ] , A2 = [ −2 0 ; 0 −1 ]
Let P0 = I. The matrices A1 , A2 commute, and solving AT1 P1 + P1 A1 = −I
gives P1 = (1/2) I.
Substituting P1 into AT2 P2 + P2 A2 = −P1 gives
P2 = [ 1/8 0 ; 0 1/4 ]
Thus, we obtain the QCLF of the switched system by Theorem (2.4.1);
that is, V = xT P2 x.
MATLAB.m 5 (Example 2.4.1).
syms x1 x2
x = [x1 ; x2];
A1 = [-1 0 ; 0 -1];
A2 = [-2 0 ; 0 -1];
P0 = [1 0 ; 0 1];
% Do A1 and A2 commute?
res = A1*A2 - A2*A1   % if res == 0, then A1 and A2 commute
% Solve the Lyapunov equation A1'*P1 + P1*A1 = -P0
% (A1 is symmetric, so lyap(A1,P0) applies)
P1 = lyap(A1, P0)
% Solve the Lyapunov equation A2'*P2 + P2*A2 = -P1
P2 = lyap(A2, P1)
% The QCLF is
V = transpose(x)*P2*x
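For readers without MATLAB's Control System Toolbox, the same computation can be sketched in Python with NumPy (an equivalent rewrite, not part of the thesis), solving each Lyapunov equation through its Kronecker-product vectorization:

```python
import numpy as np

A1 = np.diag([-1.0, -1.0])
A2 = np.diag([-2.0, -1.0])
P0 = np.eye(2)

commute = np.allclose(A1 @ A2, A2 @ A1)   # diagonal matrices always commute

def lyap(A, Q):
    # Solve A^T P + P A = -Q via the vectorized Kronecker form
    n = A.shape[0]
    M = np.kron(np.eye(n), A.T) + np.kron(A.T, np.eye(n))
    vecP = np.linalg.solve(M, (-Q).flatten(order="F"))
    return vecP.reshape((n, n), order="F")

P1 = lyap(A1, P0)   # expected (1/2) I
P2 = lyap(A2, P1)   # expected diag(1/8, 1/4)
```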
Theorem 2.4.2. [13] Consider the switching system (2.10), where the matrices Ai are asymptotically stable and pairwise commuting. Then,
• The system is exponentially stable for any arbitrary switching signal.
• For a given symmetric positive definite matrix P0 , let P1 , P2 , · · · , PN
be the unique symmetric positive definite solutions to the Lyapunov
equations
ATi Pi + Pi Ai = −Pi−1 , i = 1, 2, · · · , N
(2.18)
then the function V (x) = xT PN x is a common Lyapunov function for
each of the individual systems ẋ = Ai x , i = 1, 2, · · · , N , and hence a
Lyapunov function for the switching system (2.10).
• For a given choice of the matrix P0 , the matrices A1 , A2 , · · · , AN can be
taken in any order in (2.18) and yield the same solution PN .
• The matrix PN can be expressed in integral form as
PN = ∫₀^∞ e^{ANᵀ tN} [ · · · ∫₀^∞ e^{A2ᵀ t2} ( ∫₀^∞ e^{A1ᵀ t1} P0 e^{A1 t1} dt1 ) e^{A2 t2} dt2 · · · ] e^{AN tN} dtN
Definition 2.4.3. Consider a positive definite, continuously differentiable
function V : Rn 7→ R. We say that V is a common Lyapunov function
for the switched system (2.4) if there exists a positive definite continuous
function W : Rn 7→ R such that
(∂V /∂x) fp (x) ≤ −W (x) , ∀ x , ∀ p ∈ P
(2.19)
Example 2.4.2. Consider fσ(t) (x) = −p x , p ∈ P = (0, 1], for the family (2.1),
where each subsystem is globally asymptotically stable with Lyapunov function
V (x) = (1/2) x². The switched system
ẋ = −σ x
has the solutions
x(t) = e^{−∫₀ᵗ σ(τ ) dτ} x(0)
Thus, the trajectory need not converge to zero: although
(∂V /∂x) fp (x) = −p x²
for each p, no single positive definite function W satisfies (2.19) uniformly
over p ∈ P, and if σ(t) goes to zero fast enough then ∫₀ᵗ σ(τ ) dτ remains
bounded and x(t) does not tend to zero.
Theorem 2.4.3. If the subsystems of (2.4) share a radially unbounded common Lyapunov function (CLF), then the switched system
ẋ = fσ (x)
is GUAS.
Theorem 2.4.4. [Converse Theorem]
If the switched system (2.4) is GUAS, the set {fp (x) : p ∈ P} is bounded
for each x, and the function fp (x) is locally Lipschitz in x uniformly over p,
then all the systems
ẋ = fp (x) , p ∈ P
share a radially unbounded smooth common Lyapunov function.
Corollary 2.4.1. Under the assumptions of Theorem (2.4.4), each convex
combination of the individual subsystems, defined by the vector fields
fp,q,α (x) := αfp (x) + (1 − α)fq (x) , p, q ∈ P , α ∈ [0, 1]
is globally asymptotically stable.
Remark:
In general, however, a convex combination of two asymptotically stable
vector fields is not necessarily asymptotically stable.
Example 2.4.3. Consider the two matrices
A1 = [ −0.1 −1 ; 2 −0.1 ] , A2 = [ −0.1 2 ; −1 −0.1 ]
Both of these matrices are Hurwitz, but some of their convex combinations are
not. Thus, the switched system is not GUAS under arbitrary switching.
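A quick numerical check (a sketch, not part of the thesis) confirms the claim: A1 and A2 are Hurwitz, while their midpoint combination (1/2)A1 + (1/2)A2 has an eigenvalue with positive real part:

```python
import numpy as np

A1 = np.array([[-0.1, -1.0], [2.0, -0.1]])
A2 = np.array([[-0.1, 2.0], [-1.0, -0.1]])

def hurwitz(A):
    return bool(np.all(np.linalg.eigvals(A).real < 0))

mid = 0.5 * A1 + 0.5 * A2   # = [[-0.1, 0.5], [0.5, -0.1]]
# eigenvalues of mid are -0.1 +/- 0.5, i.e. 0.4 and -0.6
a1_stable, a2_stable, mid_stable = hurwitz(A1), hurwitz(A2), hurwitz(mid)
```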
2.4.3 Switched Linear Systems
Consider the LTI system
ẋ = Ax(t)
Recall that this system is asymptotically stable if and only if A is a Hurwitz
matrix; i.e., the eigenvalues of the matrix A lie in the open left-half of the
complex plane. Moreover, in that case there exists a quadratic Lyapunov function
V (x) = xT P x
where P is a positive definite symmetric matrix which satisfies the inequality
AT P + P A ≤ −Q
(2.20)
for some positive definite symmetric matrix Q.
Stability of a switched linear system is treated similarly to the classical Lyapunov stability concept.
Definition 2.4.4. Consider a switched linear system
ẋ = Ai x , i ∈ P
and assume that {Ai : i ∈ P} is a compact set of Hurwitz matrices. We say
that the system has a quadratic common Lyapunov function (QCLF) of the
form
V (x) = xT P x
if, for some positive definite symmetric matrix Q,
ATi P + P Ai ≤ −Q , ∀ i ∈ P
holds.
Theorem 2.4.5. Consider the switched linear system
ẋ = Aσ(t) x(t)
(2.21)
The switched linear system (2.21) is GUES if and only if it is locally attractive
for every switching signal.
The example below demonstrates that even when a switched system is
GUES, this does not imply the existence of a quadratic common Lyapunov
function (QCLF).
Example 2.4.4. Consider the two Hurwitz matrices
A1 = [ −1 −1 ; 1 −1 ] , A2 = [ −1 −10 ; 0.1 −1 ]
The systems ẋ = A1 x and ẋ = A2 x do not share a quadratic common Lyapunov function. To see this, we can look for a positive definite symmetric matrix P
of the form
P = [ 1 q ; q r ]
which satisfies the inequality (2.20) for both matrices. For A1 we have
−AT1 P − P A1 = [ 2 − 2q   2q + 1 − r ; 2q + 1 − r   2q + 2r ]
which is positive definite if and only if
q² + (r − 3)²/8 < 1
(2.22)
Similarly,
−AT2 P − P A2 = [ 2 − q/5   2q + 10 − r/10 ; 2q + 10 − r/10   20q + 2r ]
which is positive definite if and only if
q² + (r − 300)²/800 < 100
(2.23)
Each of the inequalities (2.22), (2.23) describes the interior of an ellipse in
the (q, r)-plane. The interiors of the two ellipses do not intersect, as depicted
in Figure (2.10). Therefore, a quadratic common Lyapunov function does not
exist.
Figure 2.10: Ellipses in Example 2.4.4
Although there is no common Lyapunov function, the switched linear system ẋ = Aσ x is GUES.
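The disjointness of the two ellipse interiors can be confirmed numerically (a sketch, not part of the thesis) by sampling the first region on a grid and testing the second inequality; in fact (2.22) forces r < 3 + √8 ≈ 5.83, while (2.23) forces r > 300 − √80000 ≈ 17.2:

```python
import numpy as np

def in_first(q, r):   # interior of ellipse (2.22)
    return q**2 + (r - 3.0)**2 / 8.0 < 1.0

def in_second(q, r):  # interior of ellipse (2.23)
    return q**2 + (r - 300.0)**2 / 800.0 < 100.0

# Grid covering the first ellipse: |q| < 1, |r - 3| < sqrt(8)
qs = np.linspace(-1.0, 1.0, 201)
rs = np.linspace(3.0 - np.sqrt(8.0), 3.0 + np.sqrt(8.0), 201)
overlap = any(in_first(q, r) and in_second(q, r) for q in qs for r in rs)
```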
Corollary 2.4.2 ([2], p. 51). The linear systems ẋ = A1 x and ẋ = A2 x share
a quadratic common Lyapunov function (QCLF) if and only if all pairwise
convex combinations of the matrices A1 , A2 , A1⁻¹ , and A2⁻¹ are Hurwitz.
In particular, V (x) = xT P x is then a quadratic common Lyapunov function
for the systems ẋ = Ax and ẋ = A⁻¹ x, and hence for all their convex
combinations.
Proof. Note that
AT P + P A = −Q
implies
(A−1 )T P + P A−1 = −(A−1 )T QA−1
for every Hurwitz matrix A.
2.4.4 Solution of a Linear Switched System Under an Arbitrary Switching Signal
Consider the switched linear system
ẋ = Aσ x ,
σ : [0, ∞) 7→ P
and σ(t) = p, p ∈ P = {1, 2}
Assume that the matrices A1 and A2 commute. Now, suppose that we
apply an arbitrary switching signal σ, and denote by ti and τi the lengths of
the time intervals during which the currently active subsystem has σ = 1 or
σ = 2, respectively (as shown in Figure 2.11).
Figure 2.11: Switching between two systems under arbitrary switching signal
The solution under this arbitrary switching signal is
x(t) = · · · eA2 τ2 eA1 t2 eA2 τ1 eA1 t1 x(t0 )
Since [A1 , A2 ] = 0, we obtain
eA1 t eA2 τ = eA2 τ eA1 t
(2.24)
Thus, the solution can be written as
x(t) = · · · eA2 τ2 eA2 τ1 eA1 t2 eA1 t1 x(t0 )
(2.25)
Hence, according to relation (2.24), we can rewrite (2.25) as
x(t) = eA2 (τ1 +τ2 +··· ) eA1 (t1 +t2 +··· ) x(t0 )
(2.26)

2.4.5 Commuting Nonlinear Systems
Consider the linear vector fields f1 (x) = A1 x and f2 (x) = A2 x. The Lie
bracket, or commutator, of two vector fields is defined as
[f1 , f2 ](x) := (∂f2 (x)/∂x) f1 (x) − (∂f1 (x)/∂x) f2 (x)
After substitution, the right-hand side becomes (A2 A1 − A1 A2 )x, which is
consistent with the definition of the Lie bracket of two matrices (2.7) except
for the difference in sign.
Theorem 2.4.6. If {fp : p ∈ P} is a finite set of commuting continuously
differentiable vector fields and the origin is a globally asymptotically stable
equilibrium for all systems in the family ẋ = fp (x) , p ∈ P, then the
corresponding switched system (2.2) is GUAS.
2.4.6 Commutation and Triangular Systems
Theorem 2.4.7. If {Ap : p ∈ P} is a compact set of upper-triangular Hurwitz
matrices, then the switched linear system (2.21) is GUES.
Example 2.4.5. Suppose that we have
ẋ = Ap x(t) , p ∈ P = {1, 2} , x ∈ R²
and consider the two upper-triangular matrices
A1 = [ −a1 b1 ; 0 −c1 ] , A2 = [ −a2 b2 ; 0 −c2 ]
Assume that ai , ci > 0 , i = 1, 2; thus, the eigenvalues have negative real
parts. Now, under an arbitrary switching signal σ, the second component of x
satisfies
ẋ2 = −cσ x2
so that, with initial value x2(0) = x20 ,
x2 (t) = x20 e^{−∫₀ᵗ cσ(s) ds}
Therefore, x2 decays to zero exponentially fast, with rate of decay at least
min{c1 , c2 }.
Similarly, the first component of x satisfies
ẋ1 = −aσ x1 + bσ x2
We can view this equation as consisting of two parts: the exponentially stable
system ẋ1 = −aσ x1 , perturbed by the exponentially decaying input bσ x2 .
Thus, x1 also converges to zero exponentially fast.
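The cascade argument can be illustrated numerically. In the sketch below (the entries ai , bi , ci , the dwell time, and the periodic switching pattern are ad hoc choices, not from the thesis), the switched triangular system is integrated under periodic switching and the full state decays to zero:

```python
import numpy as np

# Two upper-triangular Hurwitz matrices:
#   (a1,b1,c1) = (1, 5, 2) and (a2,b2,c2) = (3, -4, 1)
A = {1: np.array([[-1.0, 5.0], [0.0, -2.0]]),
     2: np.array([[-3.0, -4.0], [0.0, -1.0]])}

def simulate(x0, T=30.0, h=1e-3, dwell=0.7):
    # Periodic switching 1 -> 2 -> 1 -> ... with the given dwell time
    x = np.array(x0, dtype=float)
    t, sigma = 0.0, 1
    next_switch = dwell
    while t < T:
        x = x + h * (A[sigma] @ x)
        t += h
        if t >= next_switch:
            sigma = 2 if sigma == 1 else 1
            next_switch += dwell
    return x

x0 = np.array([3.0, -2.0])
xT = simulate(x0)
shrunk = bool(np.linalg.norm(xT) < 1e-6 * np.linalg.norm(x0))
```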
2.5 Stability Under Constrained Switching

2.5.1 Multiple Lyapunov Functions (MLF) [16]
Consider the switched system
ẋ = fσ(t) (x) , σ(t) ∈ P = {1, 2}
(2.27)
Assume that both ẋ = f1 (x) and ẋ = f2 (x) are asymptotically stable, and
let V1 ,V2 be their respective radially unbounded Lyapunov functions.
We are interested in the situation where a CLF for the two systems is not
known or does not exist. Let ti , i = 1, 2, · · · , denote the switching times.
• If it happens that the values of V1 (x(t)), V2 (x(t)) coincide at each switching time; i.e., Vσ(ti−1 ) (x(ti )) = Vσ(ti ) (x(ti )) for all i, then Vσ is a continuous
Lyapunov function for the switched system and asymptotic stability
follows (see Figure 2.12 (a)).
Figure 2.12: Solid graphs correspond to V1 , dashed graphs correspond to
V2 :(a) continuous Vσ , (b) discontinuous Vσ .
While each Vp decreases when the pth subsystem is active, it may increase
when the pth subsystem is inactive, as depicted in Figure 2.12(b).
For the system to be asymptotically stable, the values of Vp should form,
for each p, a decreasing sequence when evaluated at the beginning of each
interval on which the pth subsystem is active.
Theorem 2.5.1. [26] Consider the switched system described by (2.27). Suppose that we have a Lyapunov-like function Vi for each individual subsystem.
Let S = {t0 , t1 , · · · , ti , · · · } be the set of all switching sequences associated
with the system, and let Q be an arbitrary subset of S. If, for each switching
sequence in Q, the following conditions are satisfied,
• There exists at least one Vi such that:
(1) V̇i ≤ 0 for all t in the set of time intervals during which subsystem i
is active;
(2) Vi (ti+1 ) ≤ Vi (ti ), where ti+1 , ti are two consecutive switch-on
instants of subsystem i.
• For all other Vj (j ≠ i), there exists a positive constant μ such that
(3) |Vj (xj (t))| ≤ μ|Vi (xi (t∗ ))| , i ≠ j
where t lies in a subset of the time intervals during which subsystem j
is active, and t∗ = maxi {ti : ti < tj }.
then the switched system is stable for all switching sequences in Q. Moreover, if
• V̇i < 0, and
• the sequence {Vi } converges to zero as t → ∞,
then the switched system is asymptotically stable.
Assume V1 (x1 (t)) and V2 (x2 (t)) are two multiple Lyapunov candidate
functions, where V1 satisfies conditions (1) and (2) while V2 satisfies condition (3). As shown in Figure 2.13, V2 may increase or decrease during its
active time intervals. If V̇2 ≥ 0 and subsystem 2 is active at the initial time,
then V2 can only reach a finite value by the time subsystem 1 is switched on,
due to condition (3), since V2 is upper bounded by μV1 (x1 (t)).
Figure 2.13: Multiple Lyapunov stability, where m ≡ μ
Theorem 2.5.2. [2] Consider the system (2.27) and let {Vp (x) : p ∈ P}
be a set of radially unbounded, positive definite functions, where P =
{1, 2, · · · , n}. Suppose there exists a family of positive definite continuous
functions Wp with the property that, for every pair of switching times (ti , ti+1 )
such that σ(ti+1 ) = σ(ti ) = p ∈ P and σ(tk ) ≠ p for all ti < tk < ti+1 ,
we have
Vp (x(ti+1 )) − Vp (x(ti )) ≤ −Wp (x(ti ))
(2.28)
then the switched system is globally asymptotically stable.
Proof. [16]
Since Vp (x(ti+1 )) − Vp (x(ti )) < 0 whenever x(ti+1 ), x(ti ) ≠ 0, the sequence
{Vp (x(ti ))} is strictly decreasing and bounded below by zero, so the limit
lim_{i→∞} Vp (x(ti )) = L ≥ 0 exists. By continuity,
lim_{i→∞} Vp (x(ti+1 )) − lim_{i→∞} Vp (x(ti )) = L − L = 0
but
lim_{i→∞} [Vp (x(ti+1 )) − Vp (x(ti ))] ≤ lim_{i→∞} [−Wp (x(ti ))] ≤ 0
where Wp is a positive definite function.
In other words, lim_{i→∞} Wp (x(ti )) = 0, which implies lim_{i→∞} x(ti ) = 0. It
follows from Lyapunov asymptotic stability that x(t) → 0 as t → ∞.
Taking Wp (x(ti )) ≡ γ||x(ti )||² in Theorem (2.5.2), we obtain
the following corollary:
Corollary 2.5.1. [16] Consider the system (2.27) and let {Vp (x) : p ∈ P}
be a set of radially unbounded positive definite functions, where P =
{1, 2, · · · , n}. If there exists a constant γ > 0 such that
Vp (x(ti+1 )) − Vp (x(ti )) ≤ −γ||x(ti )||²
(2.29)
then the switched system is globally asymptotically stable.
2.5.2 Stability Under State-Dependent Switching
In state-dependent switching, a switching event occurs when the trajectory
hits a switching surface. In this case, stability analysis is concerned with the
behavior of each individual subsystem in the region where that subsystem is active.
Let us define a state-dependent switched linear system,
ẋ = A1 x if x1 ≥ 0 (region Ω1),  ẋ = A2 x if x1 < 0 (region Ω2)
where
A1 = [−0.1 −1; 2 −0.1],  A2 = [−0.1 −2; 1 −0.1]
Consider the two positive definite symmetric matrices
P1 = [2 0; 0 1],  P2 = [1 0; 0 2]
where V1 = xT P1 x and V2 = xT P2 x are Lyapunov function candidates for
the above subsystems.
Definition 2.5.1. It is enough to require that each function Vi decreases
(V̇i < 0) along solutions of the i-th subsystem in the region Ωi where that
subsystem is active. It is important to note that Vi serves as a Lyapunov
function only in the suitable region for each subsystem.
Remark:
Multiple Lyapunov functions can be used when the stability analysis cannot
be based on a single Lyapunov function.
Example 2.5.1. Consider the autonomous switched system ẋ = Aσ(t) x(t) with
A1 = [0 10; 0 0],  A2 = [1.5 2; −2 −0.5]
where
σ(t) = 1, if σ(t− ) = 2 and x2 (t) = −0.25 x1 (t)
σ(t) = 2, if σ(t− ) = 1 and x2 (t) = +0.50 x1 (t)
and the eigenvalues are λ1,2 (A1 ) = 0 and λ1,2 (A2 ) = 0.5 ± i√3.
Thus, both subsystems are unstable, and the phase-plane portraits of the individual
systems are depicted in Figure 2.14.
These trajectories can be pieced together within the regions Ωi to produce a stable
state trajectory, as shown in Figure 2.15. This can be done by constructing
multiple Lyapunov functions for A1 , A2 as V1 (x) = xT P1 x and V2 (x) = xT P2 x.
We define the regions:
Ωi = { x : V̇i (x) = xT (AiT Pi + Pi Ai )x ≤ 0 },  ∀ i = 1, 2
By solving the LMI AiT Pi + Pi Ai ≤ −I so that V̇i (x) is decreasing, we obtain
P1 = [0.46875 −1.875; −1.875 15],  P2 = [1 1.2; 1.2 1.6]
Figure 2.14: Dashed line for ẋ = A1 x and solid line for ẋ = A2 x.
Figure 2.15: Trajectory resulting from switching between A1 and A2 . The
lines x2 = 0.5x1 and x2 = −0.25x1 lie in Ω1 ∪ Ω2 .
MATLAB.m 6 (Example 2.5.1).

clear
A1 = [0 10; 0 0];
A2 = [1.5 2; -2 -0.5];
syms x1 x2
x = [x1; x2];
SW1 = A1*x;                        % vector field of subsystem 1
SW2 = A2*x;                        % vector field of subsystem 2
xSPAN = -2:0.5:2;
LINE1 = -0.25*xSPAN;               % switching line x2 = -0.25 x1
LINE2 = 0.5*xSPAN;                 % switching line x2 = 0.5 x1
% switched system 1
[x1, x2] = meshgrid(-2:0.5:2, -2:0.5:2);
x1dot = eval(SW1(1));
x2dot = eval(SW1(2))*x1;           % SW1(2) is identically zero
plot(xSPAN, LINE1, '-')
hold on
plot(xSPAN, LINE1, '*g')
quiver(x1, x2, x1dot, x2dot, 'r');
hold on
% switched system 2
x1dot = eval(SW2(1));
x2dot = eval(SW2(2));
plot(xSPAN, LINE2, '-')
hold on
plot(xSPAN, LINE2, '*r')
quiver(x1, x2, x1dot, x2dot, 'b');
% simulate subsystem 2 from the initial state (0.1, 0.1)
f = @(t, y) [1.5*y(1) + 2*y(2); -2*y(1) - 0.5*y(2)];
hold on
y0 = [0.1; 0.1];
t0 = 0;
t1 = 5;
[ts, ys] = ode45(f, [t0, t1], y0);
plot(ys(:,1), ys(:,2), '-g')
2.6 Stability under Slow Switching (Time-Domain Constraints)
2.6.1 Introduction
Theorem 2.6.1. [18] We say that the system
ẋ = f (x),  f (0) = 0,  t0 = 0
is globally exponentially stable with decay rate λ > 0 if
||x(t)|| ≤ k e−λt ||x0 ||
holds for any x0 and all t ≥ 0, for some positive constant k > 0.
Theorem 2.6.2. [15] The switched system
ẋ = Aσ(t) x(t),  x(t0 ) = x0    (2.30)
is said to be globally exponentially stable with stability degree λ ≥ 0 if
||x(t)|| ≤ eα−λ(t−t0 ) ||x0 ||
holds for all t ≥ t0 and a known constant α.
Note that Theorem (2.6.1) coincides with Theorem (2.6.2) if we set k = eα .
2.6.2 Dwell Time
The concept of dwell time allows switching fast when necessary, provided this is
compensated by switching slowly later, so that the overall strategy keeps the
switched system stable.
Definition 2.6.1 ([2]). The dwell time is the minimum time interval between two
successive switching instants that ensures the stability of the switched system.
It is denoted by a number τd > 0 that constrains the switching signal so that the
switching times t1 , t2 , · · · satisfy the inequality
ti − ti−1 ≥ τd ,  ∀ i
Theorem 2.6.3. [17] Consider the switched system
ẋ = Ai x,  ∀ i ∈ P    (2.31)
If all the individual subsystems are asymptotically stable (i.e., all subsystem
matrices Ai are Hurwitz stable), then the switched system is exponentially
stable if the dwell time is sufficiently large to allow each subsystem to
approach its steady state.
Proof.
Let Φi (t, τ ) denote the transition matrix of subsystem i in (2.31). Since all
the subsystems are asymptotically stable, by the Lyapunov stability theorems
one can find k > 0 and λ0 > 0 such that
||Φi (t, τ )|| ≤ k e−λ0 (t−τ ) ,  t ≥ τ ≥ 0,  ∀ i ∈ P = {1, 2, · · · , N }
where N is the number of subsystems, λ0 = min_{i∈P} λi and k = max_{i∈P} ki
give a common decay rate for the family of subsystems; here λi , ki are the
constants that describe the convergence of each subsystem.
Now, let {t1 , t2 , · · · , ti } denote the switching instants in the time interval
(τ, t), so that
ti − ti−1 ≥ τd
The solution of the system at a given time t is (refer to the Appendix,
Theorem 4.13.3),
x(t) = Φσ(ti ) (t, ti ) Φσ(ti−1 ) (ti , ti−1 ) Φσ(ti−2 ) (ti−1 , ti−2 ) · · · Φσ(t1 ) (t2 , t1 ) Φσ(τ ) (t1 , τ ) x(τ )
with
Φ(t, τ ) = e^{Aσ(ti ) (t−ti )} e^{Aσ(ti−1 ) (ti −ti−1 )} e^{Aσ(ti−2 ) (ti−1 −ti−2 )} · · · e^{Aσ(τ ) (t1 −τ )} ,  ∀ t ≥ τ
Hence, the transition matrices between two successive switching times satisfy
||Φσ(ti−1 ) (ti , ti−1 )|| ≤ k e−λ0 (ti −ti−1 ) ≤ k e−λ0 τd
For any λ ∈ (0, λ0 ) we can write k e−λ0 τd = [k e−(λ0 −λ)τd ] e−λτd , so the
switched system decays with rate λ provided
k e−(λ0 −λ)τd ≤ 1
Taking the logarithm of both sides, this condition can be written as
τd ≥ ln k / (λ0 − λ),  ∀ λ ∈ (0, λ0 )    (2.32)
This is the desired lower bound on the dwell time.
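The bound (2.32) is easy to evaluate once the overshoot constant k and the common decay rate λ0 are known. A minimal Python sketch (the numerical values of k, λ0 , λ are assumed for illustration, not taken from the thesis):

```python
import math

# Sketch: evaluating the dwell-time lower bound (2.32),
#   tau_d >= ln(k) / (lambda0 - lambda),  0 < lambda < lambda0,
# for assumed constants k and lambda0 from per-subsystem bounds
# ||Phi_i(t, tau)|| <= k * exp(-lambda0 * (t - tau)).

def dwell_time_bound(k, lam0, lam):
    """Lower bound on the dwell time guaranteeing decay rate `lam`."""
    if not (0 < lam < lam0):
        raise ValueError("need 0 < lam < lam0")
    return math.log(k) / (lam0 - lam)

tau_d = dwell_time_bound(k=2.0, lam0=1.0, lam=0.5)
assert abs(tau_d - 2.0*math.log(2.0)) < 1e-12
# a larger overshoot constant k forces slower switching:
assert dwell_time_bound(4.0, 1.0, 0.5) > tau_d
```

Note that as the desired rate λ approaches λ0 the bound blows up: demanding nearly the full common decay rate leaves no margin to absorb the overshoot k at each switch.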
Theorem 2.6.4. [2]
Assume that all the subsystems of the switched system
ẋ = fp (x),  p ∈ P    (2.33)
are globally exponentially stable. Then there exists a Lyapunov candidate
function Vp for each p ∈ P with positive constants ap , bp and cp satisfying:
ap ||x||2 ≤ Vp (x) ≤ bp ||x||2    (2.34)
V̇p (x) ≤ −cp ||x||2    (2.35)
Moreover, there exists a constant scalar µ ≥ 1 such that
Vp (x(ti )) ≤ µVq (x(ti )),  ∀ x(t) ∈ Rn , ∀ p, q ∈ P    (2.36)
Proof. Assume all the systems in the family (2.33) are globally exponentially
stable with a sufficiently large dwell time τd . Then there exists a Lyapunov
function Vp for each p ∈ P satisfying the first two inequalities of Theorem
(2.6.4).
Combining the right-hand side of inequality (2.34) with inequality (2.35), we
obtain
V̇p ≤ −λp Vp ,  where λp = cp / bp
By integration for t ∈ [t0 , t0 + τd ),
Vp (x(t0 + τd )) ≤ e−λp τd Vp (x(t0 ))    (2.37)
Without loss of generality, inequality (2.37) implies
Vp (x(ti+1 )) ≤ Vp (x(ti ))    (2.38)
if τd is large enough.
To simplify, let us consider the case where σ = 1 on the interval [t0 , t1 ) and
σ = 2 on the interval [t1 , t2 ), with P = {1, 2} and ti+1 − ti ≥ τd , i = 0, 1.
From the above inequality we have:
For σ = 1 on t ∈ [t0 , t1 ):
V1 (x(t1 )) ≤ e−λ1 τd V1 (x(t0 ))    (2.39)
a1 ||x||2 ≤ V1 (t1 ) ≤ b1 ||x||2    (2.40)
a1 ||x||2 ≤ V1 (t2 ) ≤ b1 ||x||2    (2.41)
For σ = 2 on t ∈ [t1 , t2 ):
V2 (x(t2 )) ≤ e−λ2 τd V2 (x(t1 ))    (2.42)
a2 ||x||2 ≤ V2 (t1 ) ≤ b2 ||x||2    (2.43)
a2 ||x||2 ≤ V2 (t2 ) ≤ b2 ||x||2    (2.44)
Now, combining the left-hand side of inequality (2.40) with the right-hand
side of inequality (2.43), with the help of inequality (2.39), we obtain
V2 (t1 ) ≤ (b2 /a1 ) V1 (t1 ) ≤ (b2 /a1 ) e−λ1 τd V1 (x(t0 ))    (2.45)
Thus, we conclude that
V1 (x(t1 )) < V1 (x(t0 ))
provided τd is large enough.
In addition, from the last inequalities we obtain
V1 (t2 ) ≤ (b1 /a2 ) V2 (t2 ) ≤ (b1 /a2 ) e−λ2 τd V2 (t1 ) ≤ (b1 b2 /a2 a1 ) e−(λ1 +λ2 )τd V1 (t0 )    (2.46)
Hence, from inequality (2.46) we conclude that
Vj (t) ≤ µ Vi (t),  µ ≥ 1
2.7 Stability of Switched Systems When All Subsystems are Hurwitz
Consider the linear subsystems described by
ẋ = Ai x(t),  i = 1, 2, · · · , n    (2.47)
where n ≥ 2, Ai ∈ Rn×n , and x(t) ∈ Rn .
Theorem 2.7.1. [2] Consider the switched subsystems (2.47) under the following assumptions:
• Assumption 1: The matrices Ai are all Hurwitz stable (i.e., all their eigenvalues lie in the open left-half complex plane).
• Assumption 2: All the matrices Ai commute pairwise; that is,
Ai Aj = Aj Ai ,  ∀ i ≠ j
• Assumption 3: Let {ik } ⊂ {1, 2, · · · , n} for k ≥ 0 denote the switching
sequence and let [tk , tk+1 ] denote the time period during which the
ik -th subsystem is active; assume that
lim_{k→∞} tk = ∞
Then the origin of the switched system is globally exponentially stable under arbitrary switching laws satisfying Assumption 3.
Proof.
Since the Ai are Hurwitz, there exist constants αi , β > 0 such that for each
i = 1, 2, · · · , n the following inequality holds:
||eAi t || ≤ αi e−βt    (2.48)
Now, for any initial condition (x0 , t0 ), the solution of (2.47) on the active
interval is x(t) = eAi (t−t0 ) x0 . For t ∈ [tk , tk+1 ], we have
x(t) = e^{Aik (t−tk )} e^{Aik−1 (tk −tk−1 )} e^{Aik−2 (tk−1 −tk−2 )} · · · e^{Ai1 (t2 −t1 )} e^{Ai0 (t1 −t0 )} x(t0 )
If we let Ti (t, t0 ), i = 1, 2, · · · , n denote the total time that the i-th subsystem
is active from t0 to t, so that Σ_{i=1}^{n} Ti (t, t0 ) = t − t0 , we can
rewrite the above expression as
x(t) = e^{An Tn (t,t0 )} · · · e^{A1 T1 (t,t0 )} x0
Since the Ai commute, we can reorder the factors:
x(t) = e^{A1 T1 (t,t0 )} · · · e^{An Tn (t,t0 )} x0    (2.49)
Applying (2.48) yields
||x(t)|| ≤ ( Π_{i=1}^{n} αi ) e^{−β(T1 (t,t0 )+···+Tn (t,t0 ))} ||x0 || = ( Π_{i=1}^{n} αi ) e^{−β(t−t0 )} ||x0 ||
which implies that the switched system (2.47) is globally exponentially stable
under arbitrary switching laws.
Corollary 2.7.1.
If the matrices Ai do not commute, then the switched system (2.47) need not be
globally exponentially stable under arbitrary switching laws.
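The key step in the proof, reordering the exponential factors in (2.49), rests on the identity e^{A} e^{B} = e^{A+B} for commuting A, B. A Python sketch with assumed 2×2 matrices (the matrices and the truncated-Taylor matrix exponential are illustrative choices, not from the thesis):

```python
# Sketch: for commuting matrices the flows compose order-independently,
# which is the reordering step leading to (2.49). A1, A2 are assumed
# commuting Hurwitz matrices; expm is a truncated Taylor series, which
# converges well for these small arguments.

def mat_mul(A, B):
    return [[sum(A[i][k]*B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mat_add(A, B):
    return [[A[i][j] + B[i][j] for j in range(2)] for i in range(2)]

def scale(A, c):
    return [[c*A[i][j] for j in range(2)] for i in range(2)]

def expm(A, terms=40):
    E = [[1.0, 0.0], [0.0, 1.0]]             # running sum, starts at I
    T = [[1.0, 0.0], [0.0, 1.0]]             # current Taylor term A^n / n!
    for n in range(1, terms):
        T = [[sum(T[i][k]*A[k][j] for k in range(2))/n for j in range(2)]
             for i in range(2)]
        E = mat_add(E, T)
    return E

A1 = [[-1.0, 1.0], [0.0, -1.0]]
A2 = [[-2.0, 3.0], [0.0, -2.0]]
assert mat_mul(A1, A2) == mat_mul(A2, A1)    # Assumption 2 holds

t1, t2 = 0.7, 0.4
lhs = mat_mul(expm(scale(A1, t1)), expm(scale(A2, t2)))
rhs = expm(mat_add(scale(A1, t1), scale(A2, t2)))
assert all(abs(lhs[i][j] - rhs[i][j]) < 1e-9
           for i in range(2) for j in range(2))
```

For non-commuting matrices this identity fails, which is exactly why Corollary 2.7.1 withdraws the guarantee.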
Theorem 2.7.2. [2] Consider the switched nonlinear systems
ẋ(t) = Ai x(t) + fi (t, x(t)),  i = 1, 2, · · · , n    (2.50)
where the perturbation term fi is vanishing in the sense that
||fi (t, x(t))|| ≤ γ||x(t)||,  i = 1, 2, · · · , n    (2.51)
Assume that αi , β, γ > 0 are such that condition (2.48) holds. If Assumptions
1 and 2 of Theorem (2.7.1) are true, then for the switched system (2.50) with
initial condition (t0 , x0 ), under an arbitrary switching law satisfying
Assumption 3, it is true that
||x(t)|| ≤ K0 ||x0 || e^{−(β−K0 γ)(t−t0 )}
where K0 = Π_{i=1}^{n} αi . Therefore, under the condition
γ < β / K0
the equilibrium point is globally exponentially stable.
Proof.
The general solution of (2.50) on an interval where subsystem i is active
takes the form
x(t) = e^{Ai (t−t0 )} x0 + ∫_{t0}^{t} e^{Ai (t−τ )} fi (τ, x(τ )) dτ
Hence,
x(t1 ) = e^{Ai0 (t1 −t0 )} x(t0 ) + ∫_{t0}^{t1} e^{Ai0 (t1 −τ )} fi0 (τ, x(τ )) dτ
x(t2 ) = e^{Ai1 (t2 −t1 )} x(t1 ) + ∫_{t1}^{t2} e^{Ai1 (t2 −τ )} fi1 (τ, x(τ )) dτ
= e^{Ai1 (t2 −t1 )+Ai0 (t1 −t0 )} x(t0 ) + ∫_{t0}^{t1} e^{Ai1 (t2 −t1 )+Ai0 (t1 −τ )} fi0 (τ, x(τ )) dτ
+ ∫_{t1}^{t2} e^{Ai1 (t2 −τ )} fi1 (τ, x(τ )) dτ
By induction, for t ∈ [tk , tk+1 ] we have
x(t) = e^{Aik (t−tk )+Aik−1 (tk −tk−1 )+···+Ai0 (t1 −t0 )} x(t0 )
+ ∫_{t0}^{t1} e^{Aik (t−tk )+Aik−1 (tk −tk−1 )+···+Ai0 (t1 −τ )} fi0 (τ, x(τ )) dτ
+ ∫_{t1}^{t2} e^{Aik (t−tk )+Aik−1 (tk −tk−1 )+···+Ai1 (t2 −τ )} fi1 (τ, x(τ )) dτ
+ · · · + ∫_{tk−1}^{tk} e^{Aik (t−tk )+Aik−1 (tk −τ )} fik−1 (τ, x(τ )) dτ
+ ∫_{tk}^{t} e^{Aik (t−τ )} fik (τ, x(τ )) dτ    (2.52)
Grouping the elapsed time intervals of each subsystem as in Theorem (2.7.1),
we obtain
||x(t)|| ≤ K0 [ ||x0 || e^{−β(t−t0 )} + ∫_{t0}^{t1} e^{−β(t−τ )} γ||x(τ )|| dτ + · · ·
+ ∫_{tk−1}^{tk} e^{−β(t−τ )} γ||x(τ )|| dτ + ∫_{tk}^{t} e^{−β(t−τ )} γ||x(τ )|| dτ ]
Therefore,
||x(t)|| e^{βt} ≤ K0 ||x0 || e^{βt0 } + K0 ∫_{t0}^{t} e^{βτ } γ||x(τ )|| dτ
Using the Gronwall inequality (refer to the Appendix for more details) yields
||x(t)|| ≤ K0 ||x0 || e^{βt0 } e^{γK0 (t−t0 )} e^{−βt}
Thus,
||x(t)|| ≤ K0 ||x0 || e^{−(β−K0 γ)(t−t0 )}
2.7.1 State-Dependent Switching
Switching of a state-dependent switched system occurs when the trajectory
hits a switching surface as described in section (2.2.1).
Stability Analysis Based on a Single Lyapunov Function
Proposition 2.7.1. Consider a state-dependent switched linear system
ẋ = Ap x,  p ∈ P
and suppose this switched system has a common Lyapunov function V (x) = xT P x
satisfying V̇ < 0, so that the Lyapunov function decreases along all nonzero
solutions of each subsystem. Then the switched system is globally
asymptotically stable.
Example 2.7.1. Assume we have two matrices defined as
A1 = [0 −1; 2 0],  A2 = [0 −2; 1 0]
Now, define a state-dependent switched linear system in the plane by
ẋ = A1 x, if x1 x2 ≤ 0
ẋ = A2 x, if x1 x2 > 0
Let V (x) = xT x be a Lyapunov function candidate. It is easy to see that
V̇ ≤ 0 in the region where each subsystem is active; thus V (x) decreases
along solutions of the switched system. Hence we have global asymptotic
stability even though the individual subsystems are unstable.
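The region-wise sign of V̇ in Example 2.7.1 can be checked directly, since along ẋ = Ax one has V̇ = xT (A + AT )x. A Python spot check on a grid (a numerical sanity check of the example, not a proof):

```python
# Sketch: verifying region-wise decrease of V(x) = x^T x for the matrices
# of Example 2.7.1 on sample points.

A1 = [[0.0, -1.0], [2.0, 0.0]]
A2 = [[0.0, -2.0], [1.0, 0.0]]

def vdot(A, x):
    # d/dt (x^T x) = x^T (A + A^T) x along xdot = A x
    ax = [A[0][0]*x[0] + A[0][1]*x[1], A[1][0]*x[0] + A[1][1]*x[1]]
    return 2.0 * (x[0]*ax[0] + x[1]*ax[1])

grid = [(i/4.0, j/4.0) for i in range(-8, 9) for j in range(-8, 9)]
for x in grid:
    if x[0]*x[1] <= 0:            # region where subsystem 1 is active
        assert vdot(A1, x) <= 1e-12
    else:                         # region where subsystem 2 is active
        assert vdot(A2, x) < 0
```

Working it out symbolically: V̇ = 2x1 x2 along A1 (nonpositive where x1 x2 ≤ 0) and V̇ = −2x1 x2 along A2 (negative where x1 x2 > 0), which is what the grid check confirms.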
Stability Analysis Based on Multiple Lyapunov Functions:
In fact, in the previous example there is no global common Lyapunov
function for the two matrices, since they do not commute; i.e., [A1 , A2 ] ≠ 0.
Consider the two positive definite symmetric matrices
P1 = [2 0; 0 1],  P2 = [1 0; 0 2]
and let Vσ (x) = xT Pσ x be Lyapunov functions for the subsystems
ẋ = Ai x. Let us define the switching surface S = {x : x1 = 0} such that
the function σ takes the value σ = 1 for x1 ≥ 0 and σ = 2 for x1 ≤ 0.
It is then required that each function Vp decreases along solutions of the
p-th subsystem in its active region; i.e.,
xT (ATp Pp + Pp Ap )x < 0,  ∀ x ≠ 0 in that region
Proposition 2.7.2. Consider two linear systems ẋ = A1 x and
ẋ = A2 x associated with V1 (x) = xT P1 x and V2 (x) = xT P2 x, respectively.
Assume that V1 decreases along its solutions in a conic region
Ω1 = { x : V̇1 < 0 } and, similarly, that V2 decreases along its solutions
in a conic region Ω2 = { x : V̇2 < 0 }. Then:
(1) In case no sliding motion occurs on the switching surface S :=
{ x : xT P1 x = xT P2 x }:
if
xT (AT1 P1 + P1 A1 )x < 0 whenever xT P1 x ≤ xT P2 x and x ≠ 0
and
xT (AT2 P2 + P2 A2 )x < 0 whenever xT P1 x ≥ xT P2 x and x ≠ 0
then the function Vσ is continuous and decreases along solutions of the
switched system. Thus, the system is globally asymptotically stable.
(2) In case a sliding motion occurs on the switching surface
S := { x : xT P1 x = xT P2 x }:
The existence of a sliding mode is characterized by the inequalities
xT [AT1 (P1 − P2 ) + (P1 − P2 )A1 ]x ≥ 0,  ∀ x ∈ S
and
xT [AT2 (P1 − P2 ) + (P1 − P2 )A2 ]x ≤ 0,  ∀ x ∈ S
Since a sliding motion occurs, σ is not defined on S. So, without loss
of generality, let σ = 1 on S and suppose that V1 decreases along the
corresponding Filippov solution. For α ∈ [0, 1] we obtain
xT [(αA1 + (1 − α)A2 )T P1 + P1 (αA1 + (1 − α)A2 )]x
= α xT (AT1 P1 + P1 A1 )x + (1 − α) xT (AT2 P1 + P1 A2 )x
≤ α xT (AT1 P1 + P1 A1 )x + (1 − α) xT (AT2 P2 + P2 A2 )x
where the last inequality follows from the second sliding-mode condition on S.
Therefore, the switched system is globally asymptotically stable.
Chapter 3
Stability of switched systems under switching signals when subsystems are Hurwitz and non-Hurwitz
3.1 Introduction
Studying the stability of switched systems when the system consists of both
stable and unstable subsystems requires applying the concept of dwell time in
a new guise, the so-called average dwell time (ADT).
Handling the case when all subsystems are unstable is a much harder challenge.
Stabilization of switched systems with unstable subsystems under slow switching
dissipates the transition effect at each switch: the decrease of the Lyapunov
function during the average dwell time compensates for the possible increase of
the Lyapunov function at the switching instants. Stabilization with unstable
subsystems is then based on computing a minimal dwell time.
3.2 Stability of Switched Systems When All Subsystems are Unstable but the Negated Subsystem Matrices (−Ai ) are Hurwitz
Theorem 3.2.1. Consider the switched subsystems
ẋ = Ai x(t),  i = 1, 2, · · · ,  where Ai ∈ Rn×n , n ≥ 2    (3.1)
under the following assumptions:
• Assumption 1: The matrices Ai are all unstable.
• Assumption 2: All the matrices Ai commute pairwise; that is,
Ai Aj = Aj Ai ,  ∀ i ≠ j
• Assumption 3: Let {ik } ⊂ {1, 2, · · · , n} for k ≥ 0 denote the switching
sequence and let [tk , tk+1 ] denote the time period during which the ik -th
subsystem is active; assume that
lim_{k→∞} tk = ∞  (i.e., t0 < t1 < · · · < tk < · · ·)
• Assumption 4: −Ai is Hurwitz stable for i = 1, 2, · · · , n.
In addition, assume there exist constants αi , β, γ > 0 such that the conditions
||fi (t, x(t))|| ≤ γ||x(t)||,  i = 1, 2, · · · , n
and
||e−Ai t || ≤ αi e−βt ,  i = 1, 2, · · · , n
hold. Then for the switched system
ẋ(t) = Ai x(t) + fi (t, x(t)),  i = 1, 2, · · · , n    (3.2)
with initial condition (t0 , x0 ), and under an arbitrary switching law satisfying
Assumption 3, it is true that
||x(t)|| ≥ (1/K0 ) ||x0 || e^{(β − γ/K0 )(t−t0 )}
where K0 = Π_{i=1}^{n} αi as before. Therefore, if β > γ/K0 , the switched
system is exponentially unbounded.
3.2.1 Average Dwell Time (ADT)
The purpose of the average dwell time (ADT), denoted by τa , is to allow the
possibility of switching fast when necessary and then to compensate by switching
sufficiently slowly later [2]. Thus, the ADT defines a restricted class of
switching signals that guarantees stability of the whole system. For instance,
subsystems that produce similar trajectories can be switched more rapidly without
leading to instability [19], giving rise to a smaller dwell time τ ∗ .
Consider the autonomous linear switched system
ẋ(t) = Aσ(t) x(t),  x(t0 ) = x0    (3.3)
where x(t) : [0, ∞) → Rn , σ(t) : [t0 , ∞) → p ∈ P = {1, 2, · · · , N } is
a piecewise constant function of time, called the switching signal, and Aσ(t) :
[t0 , ∞) → {A1 , A2 , · · · , AN } ranges over the subsystem matrices, which
include both Hurwitz stable and unstable matrices.
Assume that {A1 , A2 , · · · , Ai } for i ≤ r are the unstable subsystem matrices
in (3.3) and that Ai for r < i ≤ N are Hurwitz stable. Then there exist sets
of positive scalars λi and αi such that the following inequalities hold:
||eAi t || ≤ e^{αi +λi t} ,  i ≤ r    (3.4)
||eAi t || ≤ e^{αi −λi t} ,  r < i ≤ N    (3.5)
Now we aim to derive a switching law incorporating the ADT such that the
switched system (3.3) is exponentially stable.
Now,at this time, we aim to derive a switching law that incorporates the
ADT such that the switched system (3.3) is exponentially stable.
Definition 3.2.1. [20]
For a switching signal σ(t) and for any t &gt; t0 ≥ 0, let Nσ (t, t0 ) denotes the
number of switching over the interval (t0 , t). We say that σ(t) has the ADT
property, if
t − t0
(3.6)
Nσ (t, t0 ) ≤ N0 +
τa
holds for N0 ≥ 1 and τa &gt; 0.
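The ADT property (3.6) is a simple counting condition on the switching instants. A Python sketch (the switching-time list and the parameters N0 , τa are assumed for illustration):

```python
# Sketch: checking the ADT property (3.6) for a given list of switching
# instants; N0 and tau_a are assumed design parameters.

def has_adt_property(switch_times, t0, t, N0, tau_a):
    """True if the number of switches on (t0, t) satisfies
    N_sigma(t, t0) <= N0 + (t - t0) / tau_a."""
    N_sigma = sum(1 for s in switch_times if t0 < s < t)
    return N_sigma <= N0 + (t - t0) / tau_a

# two fast switches early, compensated by slow switching later
times = [0.1, 0.2, 2.0, 4.0, 6.0]
assert has_adt_property(times, 0.0, 8.0, N0=2, tau_a=2.0)      # 5 <= 2 + 4
assert not has_adt_property(times, 0.0, 0.5, N0=1, tau_a=2.0)  # 2 > 1 + 0.25
```

This illustrates how the chatter bound N0 absorbs a burst of fast switches, while τa constrains the average rate over long horizons.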
Theorem 3.2.2. [21] Consider the switched system (3.3) under the switching law
Ts (t)/Tu (t) ≥ (λu + λ∗ )/(λs − λ∗ )    (3.7)
where Ts , Tu denote the total activation times of the Hurwitz stable and
unstable subsystems, respectively, during [t0 , t), λu = max_{i≤r} λi ,
λs = min_{i>r} λi , λ ∈ (0, λs ) and λ∗ ∈ (λ, λs ).
Then there is a finite constant τa∗ such that the switched system (3.3) is
globally exponentially stable with stability degree λ over the set of all
switching signals satisfying condition (3.6) for any ADT τa ≥ τa∗
and any chatter bound N0 ≥ 0.
Proof. Let t1 , t2 , · · · , tNσ (t0 ,t) be the switching times over the interval
(t0 , t). For any t ∈ [ti , ti+1 ), we obtain
x(t) = e^{Aσ(ti ) (t−ti )} e^{Aσ(ti−1 ) (ti −ti−1 )} e^{Aσ(ti−2 ) (ti−1 −ti−2 )} · · · e^{Aσ(t0 ) (t1 −t0 )} x0
Using inequalities (3.4) and (3.5) and collecting the terms of the Hurwitz
stable and unstable subsystems, we obtain:
||x(t)|| ≤ ( Π_{n=1}^{Nσ (t0 ,t)+1} e^{αin } ) e^{λu Tu (t0 ,t)−λs Ts (t0 ,t)} ||x0 ||
≤ e^{α+αNσ (t0 ,t)+λu Tu (t0 ,t)−λs Ts (t0 ,t)} ||x0 ||
= C e^{αNσ (t0 ,t)+λu Tu (t0 ,t)−λs Ts (t0 ,t)} ||x0 ||    (3.8)
where α = max_n αin and C = e^α . Now, the switching law (3.7) can be written as
λu Tu (t0 , t) − λs Ts (t0 , t) ≤ −λ∗ (Tu + Ts ) = −λ∗ (t − t0 )    (3.9)
Thus, we obtain from (3.8) that
||x(t)|| ≤ C e^{αNσ (t0 ,t)−λ∗ (t−t0 )} ||x0 ||    (3.10)
Since λ∗ > λ (i.e., λ∗ ∈ (λ, λs )), we have two cases:
• If α ≤ 0, we obtain from (3.10)
||x(t)|| ≤ e^{−λ∗ (t−t0 )} ||x0 || ≤ e^{−λ(t−t0 )} ||x0 ||    (3.11)
which holds since
−λ∗ (t − t0 ) ≤ −λ(t − t0 )    (3.12)
Hence, according to Theorem (2.6.2), (3.11) implies that the switched
system is exponentially stable with degree λ for any average dwell time
and any chatter bound.
• If α > 0, the requirement becomes, from (3.10),
αNσ (t0 , t) − λ∗ (t − t0 ) ≤ β − λ(t − t0 )    (3.13)
This is equivalent to
Nσ (t0 , t) ≤ N0 + (t − t0 )/τ ∗
where τ ∗ = α/(λ∗ − λ) and N0 = β/α; here β (and hence N0 ) can be
specified arbitrarily.
Note that, since C = e^α , we have α = ln C.
The scalars αi , λi are computed from (3.4), (3.5) using algebraic matrix
theory based on the eigenvalue decomposition A = P J P −1 and the triangle
inequality for norms, where J is the Jordan canonical form whose diagonal
entries are the eigenvalues of A, and the columns of P are linearly
independent eigenvectors of A. Since e^{At} = P e^{Jt} P −1 , we have
||e^{At} || ≤ ||P || ||P −1 || ||e^{Jt} ||
where α = ln(||P || ||P −1 ||) and ||e^{Jt} || ≤ e^{λmax (J) t} .
Finally, we conclude that
||x(t)|| ≤ C e^{β−λ(t−t0 )} ||x0 ||
Thus, the switched system is globally exponentially stable with stability
degree λ for any average dwell time τa ≥ τa∗ and any chatter bound N0 .
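The resulting ADT bound τ ∗ = α/(λ∗ − λ) is a one-line computation once α, λ∗ and λ are fixed. A Python sketch (the default numbers reproduce the constants α = 0.6, λ∗ = 0.5, λ = 0.25 that appear in Example 3.2.2 below; everything else is an assumed wrapper):

```python
# Sketch: the ADT bound tau* = alpha / (lambda* - lambda) from the
# alpha > 0 case of the proof; alpha <= 0 places no ADT restriction.

def adt_bound(alpha, lam_star, lam):
    if lam_star <= lam:
        raise ValueError("need lambda* > lambda")
    if alpha <= 0:
        return 0.0        # any average dwell time works when alpha <= 0
    return alpha / (lam_star - lam)

tau = adt_bound(alpha=0.6, lam_star=0.5, lam=0.25)
assert abs(tau - 2.4) < 1e-12       # matches tau_a* = 2.4 in Example 3.2.2
assert adt_bound(-1.0, 0.5, 0.25) == 0.0
```

Note the trade-off: pushing the achieved decay rate λ closer to λ∗ inflates the required average dwell time.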
3.2.2 Determining the ADT Based on a Positive Symmetric Matrix Pi [21]
Assume that {A1 , · · · , Ak } are unstable and {Ak+1 , · · · , AN } are Hurwitz
stable matrices. Then there exists a set of scalars λ1 , · · · , λN such that
both Ai − λi I (i ≤ k) and Ai + λi I (i > k) are Hurwitz stable. Therefore,
determining λi for i > k requires the eigenvalues of Ai + λi I to lie in the
open left-half plane; similarly, determining λi for i ≤ k requires the
eigenvalues of Ai − λi I to lie in the open left-half plane.
Also, there exist positive definite matrices Pi such that
(Ai − λi I)T Pi + Pi (Ai − λi I) < 0  (i ≤ k)
(Ai + λi I)T Pi + Pi (Ai + λi I) < 0  (i > k)    (3.14)
hold. Based on the solution of (3.14), we propose a quadratic Lyapunov
function candidate of the form
V (t) = xT Pi x    (3.15)
with the following properties:
(A) Each Vi and its derivative satisfy:
V̇i ≤ +2λi Vi  (i ≤ k),  V̇i ≤ −2λi Vi  (i > k)    (3.16)
(B) There exist constant scalars α2 ≥ α1 > 0 such that
α1 ||x||2 ≤ Vi (x) ≤ α2 ||x||2 ,  ∀ x ∈ Rn , ∀ i ∈ P    (3.17)
(C) There exists a constant scalar µ ≥ 1 such that
Vi ≤ µVj ,  ∀ i, j ∈ P    (3.18)
where α1 = min {λm (P1 ), λm (P2 )}, α2 = max {λM (P1 ), λM (P2 )} and
µ = α2 /α1 .
Under the switching law (3.7), there is a τa such that the switched system is
globally exponentially stable.
Proof. The first property (A) can be derived from (3.14). For i > k, where
Ai + λi I is Hurwitz and satisfies (3.14), we have along ẋ = Ai x:
V̇i = ẋT Pi x + xT Pi ẋ = xT (ATi Pi + Pi Ai )x
= xT [(Ai + λi I)T Pi + Pi (Ai + λi I)]x − 2λi xT Pi x
≤ −2λi Vi
and, analogously, V̇i ≤ +2λi Vi for i ≤ k.
For any t > t0 , let t1 < t2 < · · · < tNσ (t0 ,t) denote the switching points
of σ(t) over the interval (t0 , t). Then, for any t ∈ [ti , ti+1 ], with
λ+ = max_{i≤k} λi and λ− = min_{i>k} λi , we have
V (t) ≤ e^{−2λ− (t−ti )} V (ti )  (i > k),  V (t) ≤ e^{+2λ+ (t−ti )} V (ti )  (i ≤ k)    (3.19)
By induction, with the help of (3.18) and by expanding inequalities (3.19)
backwards from t to t0 across the stable and unstable activation intervals,
we obtain
V (t) ≤ µ^{Nσ (t,t0 )} e^{−2λ− T − (t,t0 )} e^{+2λ+ T + (t,t0 )} V (t0 )
= µ^{Nσ (t,t0 )} e^{−2λ− T − (t,t0 )+2λ+ T + (t,t0 )} V (t0 )    (3.20)
where T − and T + represent the total activation times of the Hurwitz stable
and unstable subsystems during [t0 , t], respectively.
(3.20)
where T − and T + represent the total activation time of Hurwitz stable and
unstable subsystems during [t0 , t], respectively.
By using (3.17)
r
α1 ln &micro; Nσ (t,t0 )−λ− T − (t,t0 )+λ+ T + (t,t0 )
e 2
||x|| ≤
||x0 ||
α2
using (3.9), we obtain
r
||x|| ≤
α1 ln &micro; Nσ (t,t0 )−λ∗ (t−t0 )
||x0 ||
e 2
α2
120
and since the switching law (3.7) holds for some λ∗ ∈ (λ, λ− ), we have
r
α1 ln &micro; Nσ (t,t0 )−λ∗ (t−t0 )
||x|| ≤
e 2
||x0 ||
α2
r
α1 β−λ(t−t0 )
≤
e
||x0 ||
(3.21)
α2
let
ln &micro;
2
= a, then
aNσ (t, t0 ) − λ∗ (t − t0 ) ≤ β − λ(t − t0 )
which is equivalent to
Nσ (t0 , t) ≤ N0 +
t − t0
τ∗
&micro;
where N0 = ln2β&micro; and τ ∗ = 2(λln∗ −λ)
.
Hence, (3.21) implies that the switched system (3.3) is globally exponentially
stable with degree λ under the switching law (3.7).
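This µ-based bound τ ∗ = ln µ/(2(λ∗ − λ)) is again a direct computation; the sketch below evaluates it with the constants µ = 1.5, λ∗ = 6, λ = 1 used in Example 3.2.1 (the wrapper function itself is an assumed convenience):

```python
import math

# Sketch: evaluating tau* = ln(mu) / (2 (lambda* - lambda)), the ADT bound
# obtained from the multiple-Lyapunov argument with ratio constant mu >= 1.

def adt_bound_mu(mu, lam_star, lam):
    if mu < 1 or lam_star <= lam:
        raise ValueError("need mu >= 1 and lambda* > lambda")
    return math.log(mu) / (2.0 * (lam_star - lam))

tau = adt_bound_mu(mu=1.5, lam_star=6.0, lam=1.0)
assert abs(tau - math.log(1.5) / 10.0) < 1e-15
```

With these constants τ ∗ ≈ 0.04, consistent with the "activate the stable system for more than 0.04" rule of thumb derived in Example 3.2.1; µ = 1 (identical Pi ) removes the ADT restriction entirely, since ln 1 = 0.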
Example 3.2.1. Consider
A1 = [−9 10; −10 11],  A2 = [−20 10; 10 −20]
where A1 is unstable while A2 is Hurwitz stable: λ(A1 ) = {1, 1} and
λ(A2 ) = {−10, −30}. We choose λ+ = 10 such that A1 − λ+ I is Hurwitz
stable and λ− = 9 such that A2 + λ− I is Hurwitz stable. Solving LMI (3.14)
with Q = [1 0; 0 1], we obtain
P1 = [−55.5 −50; −50 −45],  P2 = [0.0333 0.0167; 0.0167 0.0333]
thus α1 = 0.4, α2 = 0.6 and µ = 1.5. We choose λ = 1 and λ∗ = 6 arbitrarily.
Then, according to the switching law (3.7), we obtain
T − /T + ≥ (λ+ + λ∗ )/(λ− − λ∗ ) = 16/3
and the dwell time must satisfy τd ≥ τ ∗ = ln µ/(2(λ∗ − λ)) ≈ 0.04.
Hence, we need to activate the stable system for more than 0.04 (say
T − = 0.08) and then activate the unstable system for at most
T + ≤ 3T − /16 = 3 × 0.08/16 = 0.015
The results are shown in Tables 3.1 and 3.2.
MATLAB.m 7 (Example 3.2.1).

clear
clc
A1 = [-9 10; -10 11];              % unstable
A2 = [-20 10; 10 -20];             % Hurwitz stable
Q = [1 0; 0 1];
P1 = lyap(A1, Q)
P2 = lyap(A2, Q)
I = [1 0; 0 1];
syms x(t) y(t)
Z1 = [x(t); y(t)];
SW1 = (A1 - 10*I)*Z1;              % shifted subsystem A1 - lambda^+ I
SW2 = (A2 + 9*I)*Z1;               % shifted subsystem A2 + lambda^- I
[x1, x2] = dsolve(diff(x) == SW1(1), diff(y) == SW1(2), x(0) == -10, y(0) == -3);
[x3, x4] = dsolve(diff(x) == SW2(1), diff(y) == SW2(2), x(0) == -10, y(0) == -3);
% trajectories of the shifted A1 system
t = 0:0.001:1;
dimB = zeros(length(t), 9);
dimB(:,7) = 1;
dimB(:,1) = t;
dimB(:,2) = eval(x1);
dimB(:,3) = eval(x2);
figure
hold on
xlabel('t')
ylabel('x1 & x2')
title('trajectories solution of A1: t V.S (x1-x2)')
plot(dimB(:,1), dimB(:,2), 'r')
plot(dimB(:,1), dimB(:,3), 'g')
figure
xlabel('x1')
ylabel('x2')
title('Plot of x1-x2 of system A1')
plot(dimB(:,2), dimB(:,3), 'b')
hold on
% Lyapunov function V1 and its derivative along A1 - 10 I
syms s1 s2
X = [s1; s2];
V1 = transpose(X)*P1*X;
V1dot = transpose(X)*(transpose(A1 - 10*I)*P1 + P1*(A1 - 10*I))*X;
s1 = dimB(:,2);
s2 = dimB(:,3);
dimB(:,4) = eval(V1);
dimB(:,5) = eval(V1);
dimB(:,6) = eval(V1dot);
figure
xlabel('t')
ylabel('V1')
title('Plot of t, V1 system A1')
hold on
plot(dimB(:,1), dimB(:,4), 'g')
figure
hold on
xdot12 = (A1 - 10*I)*X;
plotx1 = eval(xdot12(1));
ploty1 = eval(xdot12(2));
plot(plotx1, ploty1, 'r')
%----------------------------------------------
% trajectories of the shifted A2 system
t = 0:0.01:1;
dimC = zeros(length(t), 6);
dimC(:,1) = t;
dimC(:,2) = eval(x3);
dimC(:,3) = eval(x4);
figure
hold on
xlabel('t')
ylabel('x1 & x2')
title('Plot of t, x1-x2 system A2')
plot(dimC(:,1), dimC(:,2), ':g')
plot(dimC(:,1), dimC(:,3), ':b')
figure
xlabel('x1')
ylabel('x2')
title('Plot of x1-x2 system A2')
plot(dimC(:,2), dimC(:,3), '--y')
hold on
syms s3 s4
X = [s3; s4];
V2 = transpose(X)*P2*X;
V2dot = transpose(X)*(transpose(A2 + 9*I)*P2 + P2*(A2 + 9*I))*X;
s3 = dimC(:,2);
s4 = dimC(:,3);
dimC(:,4) = eval(V2);
dimC(:,5) = eval(V2);
dimC(:,6) = eval(V2dot);
figure
xlabel('t')
ylabel('V2')
title('Plot of t, V2 system A2')
hold on
plot(dimC(:,1), dimC(:,4), '-r')
figure
xdot34 = (A2 + 9*I)*X;
plotx2 = eval(xdot34(1));
ploty2 = eval(xdot34(2));
plot(plotx2, ploty2, ':b')
Figure 3.1: Results show the Lyapunov function of A1 decreasing, since V̇i (x) < 0
is satisfied
Figure 3.2: Results show the Lyapunov function of A2 decreasing, since V̇i (x) < 0
is satisfied
Example 3.2.2.
Consider the switched system (3.3) with
A1 = [2 2; 1 3],  A2 = [−2 1; 1 −2]
The eigenvalues are λ(A1 ) = {1, 4} and λ(A2 ) = {−1, −3}, which indicates
that A1 is unstable whereas A2 is Hurwitz stable. Thus we have λu = 4 and
λs = 1. Using algebraic matrix theory, we can compute the scalars λi , αi in
(3.4) and (3.5) (based on the technique described in the Appendix) from
P1−1 A1 P1 = [1 0; 0 4],  with P1 = [2 1; −1 1]
P2−1 A2 P2 = [−1 0; 0 −3],  with P2 = [1 −1; 1 1]
Thus, we choose α1 = 0 and α2 = 0.6 according to
P2 e^{Jt} P2−1 = [1 −1; 1 1] [e−t 0; 0 e−3t ] (1/2)[1 1; −1 1]
= (1/2) [e−t +e−3t  e−t −e−3t ; e−t −e−3t  e−t +e−3t ]
so that ||e^{A2 t} || ≤ ||P2 || ||P2−1 || e^{λM (J) t} ≤ e^{α2 −λ2 t} , and
P1 e^{Jt} P1−1 = [2 1; −1 1] [et 0; 0 e4t ] (1/3)[1 −1; 1 2]
= (1/3) [2et +e4t  −2et +2e4t ; −et +e4t  et +2e4t ]
so that ||e^{A1 t} || ≤ ||P1 || ||P1−1 || e^{λM (J) t} ≤ e^{α1 +λ1 t} .
If we choose t = 0.0001 s, λ = 0.25 < λs and λ∗ = 0.5 ∈ (λ, λs ), we get
0.33 ≤ α; we choose α1 = 0 and α2 = 0.6. Then the switching law requires
Ts (t, t0 )/Tu (t, t0 ) ≥ (λu + λ∗ )/(λs − λ∗ ) = 9
and the ADT is computed as τa ≥ τa∗ = α/(λ∗ − λ) = 0.6/0.25 = 2.4.
If we choose τa ≈ 4.5 and activate the Hurwitz stable system for a period
T s = 4.5, then we need to activate the other subsystem for T u = 0.5. The
trajectory of the switched system is shown in Figure 3.3, where the initial
state is x0 = [10 20]T .
Figure 3.3: Trajectory resulting from switching between A1 and A2 (Example
3.2.2)
3.3 Stability of Switched Systems Consisting of Both Stable and Unstable Subsystems
Consider the switched system

ẋ = Ai x(t),  for i = 1, 2        (3.22)

Assume that T > 0 is a constant, where T is the time for which each subsystem is activated; then:
• If subsystems A1, A2 are activated one by one with the same duration T, then the entire system is exponentially stable.
• If subsystems A1, A2 are activated one by one with durations T and 2T, respectively, then the entire system is uniformly stable, but not asymptotically stable.
• If subsystems A1, A2 are activated one by one with durations T and 3T, respectively, then the entire system is unstable.

So, stability of this kind of switched system requires the switching law to be specified.
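This trichotomy can already be seen in a scalar caricature. Suppose (hypothetically — these rates are not from the thesis) the stable subsystem contracts like e^{−2t} and the unstable one grows like e^{+t}; over one activation cycle the state norm is multiplied by e^{−2T}·e^{kT} when the durations are T and kT:

```python
import math

# Hypothetical rates chosen so the per-cycle factor reproduces the trichotomy above:
# stable mode ~ e^{-2t}, unstable mode ~ e^{+t}.
BETA_STABLE = 2.0    # decay rate of the stable subsystem (assumed)
BETA_UNSTABLE = 1.0  # growth rate of the unstable subsystem (assumed)

def cycle_factor(T_stable, T_unstable):
    """Factor multiplying ||x|| over one activation cycle."""
    return math.exp(-BETA_STABLE * T_stable) * math.exp(BETA_UNSTABLE * T_unstable)

T = 1.0
print(cycle_factor(T, 1 * T))  # < 1: exponentially stable
print(cycle_factor(T, 2 * T))  # = 1: uniformly but not asymptotically stable
print(cycle_factor(T, 3 * T))  # > 1: unstable
```

The per-cycle factor below, equal to, or above 1 corresponds exactly to the three bullet cases.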
3.3.1 Stability When One Subsystem is Stable and the Other One is Unstable

Theorem 3.3.1. [21] Consider two matrices A1^u and A2^s that are non-Hurwitz and Hurwitz, respectively. Assume that there exist β1, β2 > 0 and α > 0 such that

||e^{A1^u t}|| ≤ α e^{+β1 t}   (unstable subsystem)
||e^{A2^s t}|| ≤ α e^{−β2 t}   (stable subsystem)

Let T^u denote the total time period during which the unstable subsystem {A1^u} is activated. Similarly, let T^s denote the total time period during which the stable subsystem {A2^s} is activated. Then, under the hypotheses of Theorem (3.2.1), if

T^s / T^u > β1 / β2

then the switched system is asymptotically stable under any such switching signal.
Proof. Let t0, t1, ⋯, tn be the switching times over the interval (t0, t). For the solution of (3.22), we obtain

x(t) = e^{A2(tn − tn−1)} e^{A1(tn−1 − tn−2)} ⋯ e^{A2(t2 − t1)} e^{A1(t1 − t0)} x0

||x(t)|| ≤ e^{−β2(tn − tn−1)} e^{β1(tn−1 − tn−2)} ⋯ e^{−β2(t2 − t1)} e^{β1(t1 − t0)} ||x0||
         = e^{−β2(tn − tn−1 + ⋯ + t2 − t1)} e^{β1(tn−1 − tn−2 + ⋯ + t1 − t0)} ||x0||
         = e^{−β2 n T^s} e^{β1 n T^u} ||x0||

To obtain asymptotic stability, we have to set −β2 n T^s + β1 n T^u < 0. Thus, as n goes to infinity, ||x(t)|| goes to the origin if and only if

T^s / T^u > β1 / β2
3.3.2 Stability When a Set of Subsystems are Stable and the Other Sets are Unstable

Definition 3.3.1. Since the switched system consists of both stable and unstable subsystems, the matrices Ai can be separated into Ai⁻ and Ai⁺. Thus,

{Ai : i = 1, 2, ⋯, n} = {Ai⁻ : i = 1, 2, ⋯, r} ∪ {Ai⁺ : i = r + 1, ⋯, n}

where 1 ≤ r ≤ n − 1, the Ai⁻ are Hurwitz stable, while the Ai⁺ are unstable. Moreover, assume that there exist αi, β1, β2 > 0 such that

||e^{Ai⁻ t}|| ≤ αi e^{−β1 t},  i = 1, 2, ⋯, r        (3.23)
||e^{Ai⁺ t}|| ≤ αi e^{β2 t},   i = r + 1, ⋯, n       (3.24)

Let T⁻(t, t0) denote the total time period during which stable subsystems from the Ai⁻ are activated. Similarly, let T⁺(t, t0) denote the total time period during which unstable subsystems from the Ai⁺ are activated. Now, assume the switching law guarantees that the inequality

inf_{t ≥ t0} T⁻(t, t0) / T⁺(t, t0) = q > β2 / β1

holds for any given initial time t0.
Theorem 3.3.2. Assume that there exist αi, β1, β2 > 0 such that the conditions

||e^{Ai⁻ t}|| ≤ αi e^{−β1 t},  i = 1, 2, ⋯, r
||e^{Ai⁺ t}|| ≤ αi e^{β2 t},   i = r + 1, r + 2, ⋯, n

hold. If the hypothesis (i.e., assumption 2 of Theorem (3.2.1)) is true, then for the switched system (3.1) with initial condition (t0, x0), under any switching law satisfying the hypothesis (assumption three) and such that

inf_{t ≥ t0} T⁻(t, t0) / T⁺(t, t0) = q > β2 / β1        (3.25)

we have

||x(t)|| ≤ K0 e^{−β(t − t0)}        (3.26)

where

β = (β1 q − β2) / (1 + q)
Proof.
Notice that under assumption (3.25), it is true that

−β1 T⁻(t, t0) + β2 T⁺(t, t0) ≤ −(β1 − β2/q) T⁻(t, t0)
                              ≤ −(β1 − β2/q) (q/(q + 1)) (t − t0)
                              = −β(t − t0)

Thus, inequality (3.26) can be derived directly from (2.52):

||x(t)|| ≤ K0 [ e^{−β1(t−t0)} e^{β2(t−t0)} ||x0|| + ∫_{t0}^{t1} e^{−β1(t−τ)} e^{β2(t−τ)} γ ||x(τ)|| dτ + ⋯
          + ∫_{tk−1}^{tk} e^{−β1(t−τ)} e^{β2(t−τ)} γ ||x(τ)|| dτ + ∫_{tk}^{t} e^{−β1(t−τ)} e^{β2(t−τ)} γ ||x(τ)|| dτ ]

Using the T⁻, T⁺ notation:

||x(t)|| ≤ K0 [ ||x0|| e^{−β1 T⁻(t,t0)} e^{β2 T⁺(t,t0)} + ∫_{t0}^{t1} e^{−β1 T⁻(t,τ)} e^{β2 T⁺(t,τ)} γ ||x(τ)|| dτ + ⋯
          + ∫_{tk−1}^{tk} e^{−β1 T⁻(t,τ)} e^{β2 T⁺(t,τ)} γ ||x(τ)|| dτ + ∫_{tk}^{t} e^{−β1 T⁻(t,τ)} e^{β2 T⁺(t,τ)} γ ||x(τ)|| dτ ]

Grouping the elapsed time intervals for each subsystem,

||x(t)|| ≤ K0 [ ||x0|| e^{−β1 T⁻(t,t0)} e^{β2 T⁺(t,t0)} + ∫_{t0}^{t} e^{−β1 T⁻(t,τ)} e^{β2 T⁺(t,τ)} γ ||x(τ)|| dτ ]

Replacing T(t, t0) with (t − t0) and then applying the Gronwall inequality yields

||x(t)|| ≤ K0 ||x0|| e^{−β1 T⁻(t,t0) + β2 T⁺(t,t0) + γ K0 (t − t0)}

Thus,

||x(t)|| ≤ K0 ||x0|| e^{−β1 T⁻(t,t0) + β2 T⁺(t,t0)}

Hence, for stability it is required that

−β1 T⁻(t, t0) + β2 T⁺(t, t0) < 0  ⇒  T⁻(t, t0) / T⁺(t, t0) > β2 / β1
3.4 Stability of Switched Systems When All Subsystems Are Unstable, via Dwell-Time Switching

When all subsystems are unstable, i.e., V̇i(t) < κ Vi(t), κ > 0, i ∈ P, the increase of the Lyapunov function is compensated by a switching law that forces the Lyapunov function to decrease at each switching instant t.
Theorem 3.4.1. [22] Consider the switched system

ẋ = f_{σ(t)}(x(t))        (3.27)

Assume there is a switching sequence T = {t0, t1, ⋯, tn} generated by σ(t) : [0, ∞) ↦ {1, 2, ⋯, N} = P, where N is the number of modes. If there exists a set of positive definite functions Vi(t, x) : R × Rⁿ ↦ R satisfying the following properties:

α(||x||) ≤ Vi(t, x) ≤ β(||x||),  where α, β ∈ K∞        (3.28)
V̇i ≤ κ Vi(t),  κ > 0        (3.29)

and there exists a constant 0 < μ < 1 such that

Vj(tn⁺) ≤ μ Vi(tn⁻),  tn⁺ ≠ tn⁻,  ∀ i ≠ j,  i, j ∈ P        (3.30)
ln μ + κ τn < 0,  n = 0, 1, 2, ⋯        (3.31)

where Vi(tn⁻) = lim_{t→tn⁻} Vi(t), Vi(tn⁺) = lim_{t→tn⁺} Vi(t), and τn = tn+1 − tn is the dwell time for n = 0, 1, 2, ⋯, which denotes the length between successive switching instants, then the switched system (3.27) is GUAS with respect to the switching law σ(t).
Proof. When t ∈ [tn, tn+1), from inequality (3.29) we obtain

V(t) ≤ e^{κ(t − tn)} V(tn)  ⇒  V(tn+1⁻) ≤ e^{κ(tn+1 − tn)} V(tn)        (3.32)

Suppose the switched system switches from i ↦ j at the switching instant tn+1; then by (3.30),

Vj(tn+1) ≤ μ Vi(tn+1⁻),  i ≠ j,  ∀ i, j ∈ P

Thus, we can rewrite inequality (3.32) as

V(tn+1) ≤ μ e^{κ(tn+1 − tn)} V(tn)        (3.33)

Thus,

V(tn+1) ≤ ρ V(tn)        (3.34)

where ρ = μ e^{κ τn}. By inequality (3.31), which can be written as

μ < e^{−κ τn}

we obtain ρ = μ e^{κ τn} < 1, and there exists τmax = sup_{n=0,1,2,⋯} τn. So, to conclude that the switched system (3.27) is GUAS with respect to the switching law σ(t), we can choose

δ(ε) = β⁻¹(e^{−κ τmax} α(ε))

such that

||x(t0)|| < δ(ε)  ⇒  ||x(t0)|| < β⁻¹(e^{−κ τmax} α(ε))        (3.35)

Thus, we can write the above inequality as

β(||x(t0)||) < e^{−κ τmax} α(ε)        (3.36)

From inequality (3.28), we obtain

V(t0) ≤ β(||x0||)        (3.37)

Combining inequalities (3.36) and (3.37) yields

V(t0) ≤ β(||x0||) < e^{−κ τmax} α(ε)        (3.38)

Since V(tn) is strictly decreasing based on (3.34), i.e.,

V(tn+1) < ⋯ < V(t1) < V(t0)

we have

V(tn) < e^{−κ τmax} α(ε)        (3.39)

Since V(t) ≤ e^{κ τmax} V(tn) over t ∈ [tn, tn+1), we obtain

V(t) < α(ε)  ⇒  α⁻¹(V(t)) < ε

Furthermore,

α(||x(t)||) ≤ V(t)  ⇒  ||x(t)|| ≤ α⁻¹(V(t))  ⇒  ||x(t)|| ≤ ε

We conclude that

∀ ε > 0, ∃ δ(ε) > 0 such that ||x(t)|| < ε whenever ||x(t0)|| < δ

which means the system (3.27) is GUS. Also, since the sequence V(tn), n = 0, 1, 2, ⋯, is strictly decreasing, we have lim_{t→∞} ||x(t)|| = 0.
Hence, the switched system (3.27) is GUAS under the switching law σ(t).
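Condition (3.31) gives an explicit upper bound on every dwell time: ln μ + κ τn < 0 is equivalent to τn < −ln μ / κ. A small sketch with hypothetical values of μ and κ (not the ones from the thesis examples):

```python
import math

mu = 0.4     # required drop of the Lyapunov function at each switch (hypothetical)
kappa = 0.5  # bound on the growth rate of V_i between switches (hypothetical)

tau_bound = -math.log(mu) / kappa   # dwell times must satisfy tau_n < tau_bound
print(round(tau_bound, 4))

# check condition (3.31) for a candidate dwell time
tau_n = 1.5
print(math.log(mu) + kappa * tau_n < 0)  # True: this dwell time is admissible
```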
3.4.1 Stabilization for Switched Linear Systems

Consider the switched system

ẋ(t) = A_{σ(t)} x(t)        (3.40)

with all subsystems unstable. The stability analysis of the switched system (3.40) focuses on the computation of the minimal and maximal admissible dwell times {τmin, τmax}. If the switching law stabilizes the switched system (3.40), then the dwell time τ should be confined by a pair of lower and upper bounds, τd ∈ [τmin, τmax]; that is, the activation time of each mode must be neither too long nor too short, and confined to [τmin, τmax], to ensure that the switched system (3.40) is GUAS. When unstable subsystems are involved, the Lyapunov functions Vi(t) are allowed to increase with a bounded growth rate described by V̇i(t) < κ Vi(t).
Since no stable subsystem exists whose activation could compensate the increase of Vi(t), the idea in stabilizing the switched system is to compensate the increase of the Lyapunov function by the switching behavior, so that the Lyapunov function decreases at each switching instant and finally converges to zero.
Note that if we construct a Lyapunov function of the form

Vi(t) = xᵀ(t) Pi x(t)

then inequality (3.30) turns into

Pj ≤ μ Pi,  i ≠ j

which will never be satisfied (cycling between two modes would force Pi ≤ μ² Pi with μ < 1, which is impossible for Pi > 0). Hence, we propose a Lyapunov function of the form

Vi(t) = xᵀ(t) Pi(t) x(t)

where Pi(t) is a time-dependent positive definite matrix. Then, inequality (3.30) can be expressed as

Pj(tn⁺) ≤ μ Pi(tn⁻)        (3.41)

In practice, it is not an easy task to check for the existence of such a matrix function Pi(t) satisfying this inequality. Therefore, we resort to a technique called the discretized Lyapunov function to solve this kind of problem.
Definition 3.4.1. [23] [Lagrange Linear Interpolation] Lagrange linear interpolation is interpolation by the straight line through two arbitrary points (x1, f1) and (x2, f2), as depicted in Figure 3.4. Thus, the linear polynomial p1(x) over [x1, x2] is given by

p1(x) = ((x − x2)/(x1 − x2)) f1 + ((x − x1)/(x2 − x1)) f2        (3.42)
Figure 3.4: Linear Interpolation
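Equation (3.42) transcribes directly into code; the sketch below assumes scalar data points:

```python
def lagrange_linear(x, x1, f1, x2, f2):
    """Linear Lagrange interpolation p1(x) through (x1, f1) and (x2, f2), eq. (3.42)."""
    return (x - x2) / (x1 - x2) * f1 + (x - x1) / (x2 - x1) * f2

# The interpolant reproduces the endpoints and is linear in between.
print(lagrange_linear(0.0, 0.0, 1.0, 2.0, 5.0))  # → 1.0  (left endpoint)
print(lagrange_linear(2.0, 0.0, 1.0, 2.0, 5.0))  # → 5.0  (right endpoint)
print(lagrange_linear(1.0, 0.0, 1.0, 2.0, 5.0))  # → 3.0  (midpoint average)
```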
3.4.2 Discretized Lyapunov Function Technique

The technique is based on dividing the domain of the matrix function Pi(t) into finite discrete points over the interval [tn, tn+1], i.e., dividing the interval into L segments. Each segment is described by the sub-interval Tn,q = [t̄n, t̄n+1), q = {0, 1, 2, ⋯, (L − 1)}, n = 0, 1, ⋯, of equal length h, where h = τmin/L, t̄n = tn + qh and t̄n+1 = tn + (q + 1)h.
Now, assume that Pi(t), t ∈ [t̄n, t̄n+1), is chosen to be a continuous matrix function that is linear between the two points t̄n and t̄n+1 over the segment Tn,q. Let Pi,q(t) describe the straight line between the two points Pi(t̄n) and Pi(t̄n+1), obtained by a linear interpolation formula such as Lagrange interpolation (refer to Definition (3.4.1)). Thus, we can define Pi,q as

Pi,q(t) = ((t − t̄n+1)/(t̄n − t̄n+1)) Pi(t̄n) + ((t − t̄n)/(t̄n+1 − t̄n)) Pi(t̄n+1)        (3.43)

where t̄n = tn + qh, t̄n+1 = tn + (q + 1)h, q = 0, 1, ⋯, (L − 1), and L is the number of segments. The corresponding discretized Lyapunov function is

Vi(t) = xᵀ(t) Pi,q(t) x(t),  t ∈ Tn,q = [t̄n, t̄n+1]        (3.44)

Moreover, on the interval [tn + τmin, tn+1), we set Pi(t) = Pi,L, where q = L. Thus, the corresponding discretized Lyapunov function is

Vi(t) = xᵀ(t) Pi,L x(t),  t ∈ [tn + τmin, tn+1)        (3.45)
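The discretization (3.43)-(3.45) can be sketched for scalar "matrices" with hypothetical grid values: divide [tn, tn + τmin] into L segments of length h = τmin/L, interpolate P linearly on each segment, and hold the final value Pi,L on [tn + τmin, tn+1):

```python
def P_discretized(t, tn, tau_min, L, P_vals):
    """Evaluate the discretized (scalar) P(t) per (3.43)-(3.45).

    P_vals[q] is the value at the grid point tn + q*h, for q = 0..L.
    """
    h = tau_min / L
    if t >= tn + tau_min:          # hold P_{i,L} on [tn + tau_min, t_{n+1})
        return P_vals[L]
    q = int((t - tn) // h)         # segment index: t in [tn + q*h, tn + (q+1)*h)
    tbar, tbar_next = tn + q * h, tn + (q + 1) * h
    # Lagrange linear interpolation, eq. (3.43)
    return ((t - tbar_next) / (tbar - tbar_next) * P_vals[q]
            + (t - tbar) / (tbar_next - tbar) * P_vals[q + 1])

# hypothetical grid values for one mode, tau_min = 2, L = 2 (so h = 1)
P_vals = [1.0, 4.0, 2.0]
print(P_discretized(0.0, 0.0, 2.0, 2, P_vals))  # → 1.0  (grid point q = 0)
print(P_discretized(0.5, 0.0, 2.0, 2, P_vals))  # → 2.5  (halfway between 1 and 4)
print(P_discretized(3.0, 0.0, 2.0, 2, P_vals))  # → 2.0  (held at P_{i,L})
```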
Theorem 3.4.2. Consider the switched linear system

ẋ = A_{σ(t)} x(t)        (3.46)

Since the Ai are unstable matrices, there exists a sufficiently large scalar κ* > 0 such that the matrices Ai − (κ*/2) I are Hurwitz stable. For given scalars κ > κ* > 0, 1 > μ > 0 and 0 < τmin ≤ τmax, if there exists a set of matrices Pi,q > 0, q = {0, 1, ⋯, L − 1, L}, i ∈ P, such that

Aiᵀ Pi,q + Pi,q Ai + (1/h)(Pi,q+1 − Pi,q) − κ Pi,q < 0        (3.47)
Aiᵀ Pi,q+1 + Pi,q+1 Ai + (1/h)(Pi,q+1 − Pi,q) − κ Pi,q+1 < 0        (3.48)
Aiᵀ Pi,L + Pi,L Ai − κ Pi,L < 0        (3.49)
Pj,0 − μ Pi,L ≤ 0,  i ≠ j        (3.50)
ln μ + κ τmax < 0        (3.51)

where h = τmin/L, then the switched system (3.46) is GUAS under any switching law with dwell time in [τmin, τmax]. Moreover, τmin and τmax are computed as

τmax < τmax* = max_{τmax > τmin} {τmax : inequality (3.51) holds}        (3.52)
τmin > τmin* = min_{τmin < τmax} {τmin : inequalities (3.47) to (3.50) hold}        (3.53)

Thus, the admissible dwell time is bounded in the interval [τmin, τmax].
Proof. Since the Ai − (κ*/2) I are Hurwitz stable, there must exist a scalar κ ≥ κ* > 0 satisfying inequality (3.49). Hence, there exists a discretized Lyapunov function candidate for the i-th subsystem:

Vi(t) = xᵀ(t) Pi(t) x(t)  ⇒  Vi(t) = xᵀ Pi,q(t) x

By differentiating and using (3.44), we obtain

V̇i = xᵀ Ṗi(t) x + 2ẋᵀ Pi(t) x

and over each discretized segment Tn,q = [t̄n, t̄n+1], we obtain:

• ∀ t ∈ Tn,q,

Ṗi(t) = Ṗi,q = (Pi,q+1 − Pi,q)/(t̄n+1 − t̄n)
             = (Pi,q+1 − Pi,q)/((tn + (q + 1)h) − (tn + qh))
             = (Pi,q+1 − Pi,q)/h        (3.54)

where h = τmin/L, and

2ẋᵀ Pi(t) x = 2ẋᵀ Pi,q(t) x = xᵀ (Aiᵀ Pi,q + Pi,q Ai) x

Since V̇i < κ Vi, we require

xᵀ (Aiᵀ Pi,q + Pi,q Ai) x − κ xᵀ Pi,q x + xᵀ ((Pi,q+1 − Pi,q)/h) x < 0        (3.60)

• Since Pi(t) = Pi,L when q = L, ∀ t ∈ [tn + τmin, tn+1), we have

Vi(t) = xᵀ Pi,L x

Hence,

V̇i(t) = xᵀ (Aiᵀ Pi,L + Pi,L Ai) x

Since V̇i − κ Vi < 0, we obtain

Aiᵀ Pi,L + Pi,L Ai − κ Pi,L < 0        (3.61)

Thus, inequalities (3.60) and (3.61) imply that inequality (3.29) is satisfied. Furthermore, from (3.41) we get Pj < μ Pi; letting Pj = Pj,0 (q = 0) and Pi = Pi,L (q = L),

⇒ Pj,0 − μ Pi,L < 0

that is to say, inequality (3.30) is satisfied. Also, since τmax ≥ τn, ∀ n ∈ {0, 1, 2, ⋯}, inequality (3.31) is satisfied.
Finally, according to Theorem (3.4.1), the switched system (3.46) is GUAS under any switching law σ(t) with dwell time τn ∈ [τmin, τmax].
Example 3.4.1. Consider the switched system (3.40) composed of two subsystems:

A1 = [ −1.9   0.6 ] ,   A2 = [ 0.1  −0.9 ]
     [  0.6  −0.1 ]          [ 0.1  −1.4 ]

The eigenvalues of A1 are λ1(A1) = 0.0817, λ2(A1) = −2.0817, and the eigenvalues of A2 are λ1(A2) = 0.0374 and λ2(A2) = −1.3374.
Since both subsystems have an eigenvalue lying in the right half-plane, each subsystem is unstable.
Given x0 = [3 5]ᵀ and a periodic switching sequence σ satisfying tn+1 − tn = 2, n = 0, 1, 2, ⋯, we see that τmin = τmax = τn = 2, and if we fix L = 1, μ = 0.5 and κ = 0.4, then through inequalities (3.47)-(3.51) the following feasible solution can be obtained:

P1,0 = [ 7.3707  2.2116 ] ,   P1,1 = [  45.2459  −12.3623 ]
       [ 2.2116  4.3990 ]            [ −12.3623   28.8193 ]

P2,0 = [ 18.5432  −6.2729 ] ,   P2,1 = [ 17.0657   0.9100 ]
       [ −6.2729  13.0147 ]            [  0.9100  59.5486 ]

As shown in Figure 3.5, the switched system can be stabilized by a switching signal σ(t) contained in the set of all switching signals with dwell time τn ∈ [τmin = 2, τmax = 2], ∀ n = 0, 1, 2, ⋯. As illustrated in Figure 3.6, the corresponding Lyapunov function may increase during the evolution time t ∈ [0, 2), but the increments are compensated by the switching behavior: the Lyapunov function decreases at each switching instant and finally converges to zero.
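The instability of both modes can be checked from the characteristic polynomial λ² − tr(A)λ + det(A) of each 2 × 2 matrix:

```python
import math

def eig2(A):
    """Eigenvalues of a 2x2 matrix with real eigenvalues, via the quadratic formula."""
    tr = A[0][0] + A[1][1]
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    disc = math.sqrt(tr * tr - 4 * det)
    return (tr + disc) / 2, (tr - disc) / 2

A1 = [[-1.9, 0.6], [0.6, -0.1]]
A2 = [[0.1, -0.9], [0.1, -1.4]]

l1 = eig2(A1)
l2 = eig2(A2)
print([round(v, 4) for v in l1])  # → [0.0817, -2.0817]
print([round(v, 4) for v in l2])  # → [0.0374, -1.3374]
# each matrix has an eigenvalue with positive real part, so each mode is unstable
```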
Figure 3.5: State trajectories of x(t)
Figure 3.6: Corresponding Lyapunov function
Chapter 4
Appendix

4.1 Matrices and Matrix Calculus

Every vector x can be expressed as

x = x1 i1 + x2 i2 + ⋯ + xn in

Define the n × n square matrix I as a basis for the vector x:

I = [ i1  i2  ⋯  in ]

Hence,

x = I [ x1  x2  ⋯  xn ]ᵀ = I x̄

where x̄ = [ x1  x2  ⋯  xn ]ᵀ is called the representation of the vector x w.r.t. the basis I = [ i1  i2  ⋯  in ], with the orthonormal basis vectors defined as

i1 = [ 1  0  ⋯  0 ]ᵀ ,  i2 = [ 0  1  ⋯  0 ]ᵀ ,  ⋯ ,  in = [ 0  0  ⋯  1 ]ᵀ
Associating the representation of the vector x with respect to this basis, we have

x = [ x1  x2  ⋯  xn ]ᵀ = x1 i1 + x2 i2 + ⋯ + xn in = In [ x1  x2  ⋯  xn ]ᵀ

where In is the n × n unit matrix. So, the representation of any vector x with respect to the orthonormal basis equals x itself.
Example 4.1.1. Consider the vector x = [1 3]ᵀ in R². The two vectors q1 = [3 1]ᵀ and q2 = [2 2]ᵀ are linearly independent and can be used as a basis for the vector x. Thus, we have

x = Q x̄

[ 1 ] = [ 3  2 ] [ x̄1 ]
[ 3 ]   [ 1  2 ] [ x̄2 ]

⇒ x̄ = [ −1  2 ]ᵀ

MATLAB.m 8.
Q = [3 2 ; 1 2]
x = [1 ; 3]
xBAR = inv(Q)*x
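The same computation in Python (standard library only), solving Q x̄ = x for the 2 × 2 case via the explicit inverse:

```python
# Solve Q * xbar = x for the basis representation xbar (Example 4.1.1).
Q = [[3.0, 2.0], [1.0, 2.0]]
x = [1.0, 3.0]

det = Q[0][0] * Q[1][1] - Q[0][1] * Q[1][0]          # det(Q) = 4
Qinv = [[ Q[1][1] / det, -Q[0][1] / det],
        [-Q[1][0] / det,  Q[0][0] / det]]
xbar = [Qinv[0][0] * x[0] + Qinv[0][1] * x[1],
        Qinv[1][0] * x[0] + Qinv[1][1] * x[1]]
print(xbar)  # → [-1.0, 2.0]
```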
Some Vector Properties in Rⁿ
Vectors in Rⁿ can be added or multiplied by a scalar. The inner product of two vectors x and y is xᵀy = Σ_{i=1}^{n} xi yi.
4.2 Vector and Matrix Norms [4]

The norm ||x|| of a vector x is a real-valued function with the following properties:
• ||x|| ≥ 0 for all x ∈ Rⁿ, with ||x|| = 0 if and only if x = 0.
• ||x + y|| ≤ ||x|| + ||y||, ∀ x, y ∈ Rⁿ.
• ||αx|| = |α| ||x||, ∀ α ∈ R, x ∈ Rⁿ.

The second property is called the triangle inequality. We shall consider the class of p-norms, defined by

||x||p = (|x1|^p + |x2|^p + ⋯ + |xn|^p)^{1/p} ,  1 ≤ p < ∞

and

||x||∞ = max_i |xi|

The three most commonly used norms are ||x||1, ||x||∞, and the Euclidean norm

||x||2 = (|x1|² + |x2|² + ⋯ + |xn|²)^{1/2} = (xᵀx)^{1/2}

All p-norms are equivalent in the sense that if ||·||α and ||·||β are two different p-norms, then there exist positive constants c1 and c2 such that

c1 ||x||α ≤ ||x||β ≤ c2 ||x||α
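The three common norms, computed directly from the definitions above:

```python
import math

def norm_p(x, p):
    """p-norm of a vector, for 1 <= p < infinity."""
    return sum(abs(v) ** p for v in x) ** (1.0 / p)

def norm_inf(x):
    """Infinity norm: maximum absolute entry."""
    return max(abs(v) for v in x)

x = [3.0, -4.0]
print(norm_p(x, 1))   # → 7.0
print(norm_p(x, 2))   # → 5.0  (Euclidean norm of a 3-4-5 vector)
print(norm_inf(x))    # → 4.0
```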
4.3 Matrix p-norms

Definition 4.3.1. [4] A matrix norm ||·|| on R^{n×m} is a function ||·|| : R^{n×m} ↦ R that satisfies the following axioms:
• ||A|| ≥ 0, A ∈ R^{n×m}.
• ||A|| = 0 if A = 0.
• ||αA|| = |α| ||A||, where α ∈ R.
• ||A + B|| ≤ ||A|| + ||B||, where A, B ∈ R^{n×m}.

For A ∈ R^{n×m}, the induced p-norm of A is defined by

||A||p = sup_{x ≠ 0} ||Ax||p / ||x||p ,  1 ≤ p ≤ ∞

which for p = 1, 2, ∞ is given by

||A||1 = max_j Σ_{i=1}^{n} |aij| ;   ||A||2 = (λmax(AᵀA))^{1/2} ;   ||A||∞ = max_i Σ_{j=1}^{m} |aij|

where λmax(AᵀA) is the maximum eigenvalue of AᵀA.
Example 4.3.1. Consider the matrix

A = [ a  1 ]
    [ 0  b ]

The norm ||A||2 is given by computing the eigenvalues of AᵀA:

det(λI − AᵀA) = det [ λ − a²       −a      ]
                    [   −a    λ − b² − 1   ]

and the roots of the quadratic λ² − (1 + a² + b²)λ + a²b² are

λ = ( 1 + a² + b² ± √((1 + a² + b²)² − 4a²b²) ) / 2

Thus, taking the largest root, we obtain

||A||2 = ( √((a + b)² + 1) + √((a − b)² + 1) ) / 2
Definition 4.3.2. Let A be a square n × n matrix; the trace is the sum of the diagonal elements:

tr A = Σ_{i=1}^{n} aii

Moreover, tr A ≡ tr Aᵀ.
Definition 4.3.3. [8] For any given square matrix A of order n (i.e., n × n), the determinant of A is denoted by det(A) or |A| and defined as

det A = Σ_{j=1}^{n} (−1)^{i+j} aij Mij

where i is chosen to be fixed, i ≤ n, and Mij is the minor determinant of order (n − 1).
Definition 4.3.4. Let A be an n × n matrix and x a nonzero vector of order n × 1. The equation

|A − λI| = 0

is called the characteristic equation of the matrix A, and the eigenvalues of the matrix A are the roots of the characteristic equation. Thus, after determining the eigenvalues λi, we solve the homogeneous system

(A − λi I) x = 0

for each λi, i = 1, 2, ⋯, n, to obtain the corresponding eigenvectors.
Theorem 4.3.1 (Cayley-Hamilton). [4]
If we have a square matrix A of order n × n, then A satisfies its own characteristic polynomial

det(A − λI) = pn λⁿ + pn−1 λ^{n−1} + ⋯ + p0 ,  where pn = (−1)ⁿ

that is,

pn Aⁿ + pn−1 A^{n−1} + ⋯ + p1 A + p0 I = 0
Example 4.3.2. Consider the square 2 × 2 matrix

A = [  −1  2 ]
    [ −10  8 ]

The eigenvalues λ and the corresponding eigenvectors of the matrix A are obtained by solving the characteristic equation and the homogeneous system (A − λI)x = 0, respectively:

|A − λI| = | −1 − λ     2    | = 0
           |  −10    8 − λ   |

Thus, λ² − 7λ + 12 = 0, and hence λ1 = 3, λ2 = 4.
The eigenvector corresponding to the eigenvalue λ1 is obtained from

(A − 3I)x = [  −4  2 ] [ x1 ] = 0  →  2x1 − x2 = 0, or x1 = x2/2
            [ −10  5 ] [ x2 ]

Hence, the eigenvector x is given by

x = [ x1 ] = [ x2/2 ] = x2 [ 1/2 ]
    [ x2 ]   [  x2  ]      [  1  ]

Setting x2 = 2, we have the eigenvector [1 2]ᵀ.
Corresponding to the eigenvalue λ2, we have

(A − 4I)x = [  −5  2 ] [ x1 ] = 0  →  5x1 − 2x2 = 0, or x1 = (2/5) x2
            [ −10  4 ] [ x2 ]

Setting x2 = 5, the eigenvector is x = [x1 x2]ᵀ = [2 5]ᵀ.
Since

λ² − 7λ + 12 = 0

by the Cayley-Hamilton Theorem we get

A² − 7A + 12I = 0

Hence, A² = 7A − 12I:

A² = 7 [  −1  2 ] − 12 [ 1  0 ] = [  −7  14 ] + [ −12    0 ] = [ −19  14 ]
       [ −10  8 ]      [ 0  1 ]   [ −70  56 ]   [   0  −12 ]   [ −70  44 ]
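The identity A² = 7A − 12I can be verified mechanically:

```python
def matmul2(A, B):
    """Multiply two 2x2 integer matrices."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[-1, 2], [-10, 8]]
A_sq = matmul2(A, A)
rhs = [[7 * A[i][j] - (12 if i == j else 0) for j in range(2)] for i in range(2)]

print(A_sq)          # → [[-19, 14], [-70, 44]]
print(A_sq == rhs)   # → True  (Cayley-Hamilton: A^2 - 7A + 12I = 0)
```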
Definition 4.3.5. Assume that A has distinct eigenvalues and that P is a matrix whose columns are the corresponding eigenvectors of the matrix A; then P⁻¹AP is a diagonal matrix with the eigenvalues of A as the diagonal entries.

Example 4.3.3. Based on Example (4.3.2), we have

[  5  −2 ] [  −1  2 ] [ 1  2 ] = [ 3  0 ]
[ −2   1 ] [ −10  8 ] [ 2  5 ]   [ 0  4 ]

where P = [ 1  2 ; 2  5 ] and P⁻¹ = [ 5  −2 ; −2  1 ].
4.4 Quadratic Forms

Consider an n × n matrix P and any n × 1 vector x, both with real entries; then the product xᵀPx is called a quadratic form in x.
Definition 4.4.1 (Rayleigh-Ritz Inequality [9]). For any symmetric n × n matrix P and any n × 1 vector x, the inequality

λmin(P) xᵀx ≤ xᵀPx ≤ λmax(P) xᵀx

is satisfied, where λmin, λmax denote the smallest and largest eigenvalues of P.

Definition 4.4.2. We say that the matrix P is positive definite if and only if all of its leading principal minors are positive.

Example 4.4.1. The symmetric matrix

P = [ a  b ]
    [ b  d ]

is positive definite if a > 0 and ad − b² > 0, and positive semidefinite if a ≥ 0, d ≥ 0 and ad − b² ≥ 0.
4.5 Matrix Calculus

Definition 4.5.1. [9]
• Differentiation (d/dt) A(t) and integration ∫₀ᵗ A(δ) dδ of matrices are defined entry-by-entry:

∫₀ᵗ A(δ) dδ = [ ∫₀ᵗ aij(δ) dδ ] ,   (d/dt) A(t) = [ (d/dt) aij(t) ]

• The product rule holds for differentiation of matrices:

(d/dt)[A(t)B(t)] = Ȧ(t)B(t) + A(t)Ḃ(t)

and

(d/dt) A²(t) = Ȧ(t)A(t) + A(t)Ȧ(t)

• By the fundamental theorem of calculus,

(d/dt) ∫₀ᵗ A(δ) dδ = A(t) ,   and   ∫₀ᵗ Ȧ(δ) dδ = A(t) − A(0)

• Moreover, from the triangle inequality,

|| ∫_{t0}^{t} x(τ) dτ || ≤ ∫_{t0}^{t} ||x(τ)|| dτ
Definition 4.5.2 (Leibniz rule).
Suppose a matrix A depends on two variables t and δ; then

(d/dt) ∫_{f(t)}^{g(t)} A(t, δ) dδ = A(t, g(t)) ġ(t) − A(t, f(t)) ḟ(t) + ∫_{f(t)}^{g(t)} (∂/∂t) A(t, δ) dδ
4.6 Topological Concepts in Rⁿ

4.6.1 Convergence of Sequences

Definition 4.6.1. A sequence of n × 1 vectors x0, x1, x2, ⋯ in Rⁿ, denoted by {xn}_{n=0}^{∞}, is said to converge to a vector x̄ (called the limit vector of the sequence) if

||xn − x̄|| → 0 as n → ∞

which is equivalent to saying that, for any ε > 0, there is N(ε) such that

||x̄ − xn|| < ε ,  ∀ n ≥ N(ε)

and we write

lim_{n→∞} xn = x̄
Definition 4.6.2 ([9]). For an infinite series of vector functions, written as

Σ_{i=0}^{∞} xi(t)

with each xi(t) defined on the interval [t0, t1], convergence is defined in terms of the sequence of partial sums

sk(t) = Σ_{i=0}^{k} xi(t)

The series converges to the function x(t) if, for each tq ∈ [t0, t1], we have

lim_{k→∞} ||x(tq) − sk(tq)|| = 0

and it is said to converge uniformly to x(t) on the interval [t0, t1] if, for a given ε > 0, there exists a positive integer K(ε) such that, for every t ∈ [t0, t1],

||x(t) − Σ_{i=0}^{k} xi(t)|| < ε ,  k > K(ε)
4.6.2 Sets

• Open Subsets: A subset S ⊂ Rⁿ is said to be open if, for every vector x ∈ S, one can find an ε-neighborhood of x such that

N(x, ε) = {z ∈ Rⁿ : ||z − x|| < ε} ⊂ S

• Closed Subsets: A subset S is closed if every convergent sequence {xn} with elements in S converges to a point in S.
• Bounded Sets: A set S is bounded if there is r > 0 such that ||x|| ≤ r, ∀ x ∈ S.
• Compact Sets: A set S is compact if it is closed and bounded.
• The Interior of a Set S: The interior of a set S is S − ∂S, where ∂S, called the boundary of S, is the set of all boundary points of S.
• Connected Sets: An open set S is connected if every pair of points in S can be joined by an arc lying in S.
• Convex Sets: A set S is convex if, for every x, y ∈ S and every real number θ with 0 < θ < 1, the line segment joining x and y,

L(x, y) = {θx + (1 − θ) y} ,

is included in S.
4.7 Mean Value Theorem

Assume that f : Rⁿ ↦ R is continuously differentiable at each point x. Let x, y ∈ Rⁿ be such that the line segment L(x, y) ⊂ Rⁿ. Then there exists a point z on the line segment L(x, y) such that

f(y) − f(x) = (∂f/∂x)|_{x=z} (y − x)
4.8 Supremum and Infimum Bounds

Definition 4.8.1. [9] The supremum (sup) and infimum (inf) are defined as follows:
• The supremum of a nonempty set S ⊂ R of scalars, denoted by sup S, is defined to be the smallest scalar x such that x ≥ y for all y ∈ S.
• The infimum of a nonempty set S ⊂ R of scalars, denoted by inf S, is defined to be the largest scalar x such that x ≤ y for all y ∈ S.

Definition 4.8.2. [9] Let D ⊂ Rⁿ and let f : D ↦ Rⁿ. We say that f is bounded if there exists α > 0 such that ||f(x)|| ≤ α for all x ∈ D.
Definition 4.8.3. In the case where
• f : Rⁿ ↦ R is continuously differentiable,

∂f/∂x = f′(x) = [ ∂f/∂x1, ⋯, ∂f/∂xn ] ∈ R^{1×n}

is called the gradient of f at x.
• For a continuously differentiable function f : R^m ↦ Rⁿ,

∂f/∂x = f′(x) = [ ∂f1/∂x1 (x)   ∂f1/∂x2 (x)   ⋯   ∂f1/∂xm (x) ]
                [ ∂f2/∂x1 (x)   ∂f2/∂x2 (x)   ⋯   ∂f2/∂xm (x) ]   ∈ R^{n×m}
                [      ⋮              ⋮        ⋱        ⋮      ]
                [ ∂fn/∂x1 (x)   ∂fn/∂x2 (x)   ⋯   ∂fn/∂xm (x) ]

is called the Jacobian of f at x.
4.9 Jordan Form

Consider the linear dynamical system

ẋ = Ax(t) ,  t ≥ 0        (4.1)

whose general solution has the form

x(t) = e^{At} x(0)

where A is a real n × n matrix.

Definition 4.9.1. [25] We can transform the matrix A into Jordan canonical form J by using the transformation x = Py, to obtain the equivalent system

ẏ = P⁻¹APy = Jy

Thus,

J = P⁻¹AP  →  A = PJP⁻¹

Hence, the solution of the dynamical system involves

e^{At} = P e^{Jt} P⁻¹

where P is a real, nonsingular (invertible) constant n × n matrix.

Definition 4.9.2. [25] Without loss of generality, the Jordan canonical form J of the matrix A is given by

J = diag( J0, J1, ⋯, Jm )

where

J0 = diag( λ1, λ2, ⋯, λk )

and

Ji = λ_{k+i} I + Ni ,  ∀ i ∈ {1, 2, ⋯, m}

The structure of Ni takes the form

Ni = [ 0  1  0  ⋯  0 ]
     [ 0  0  1  ⋯  0 ]
     [ ⋮  ⋮     ⋱  ⋮ ]
     [ 0  0  0  ⋯  1 ]
     [ 0  0  0  ⋯  0 ]  (ki × ki)

where Ni has all zero entries except for 1's above the diagonal, and Ni denotes a ki × ki nilpotent matrix.
So, we have

e^{Jt} = diag( e^{J0 t}, e^{J1 t}, ⋯, e^{Jm t} )

and

e^{Ji t} = e^{λ_{k+i} t} [ 1  t  t²/2  ⋯  t^{ki−1}/(ki−1)! ]
                         [ 0  1   t    ⋯  t^{ki−2}/(ki−2)! ]
                         [ ⋮          ⋱          ⋮          ]
                         [ 0  0   0   ⋯          1          ]

or, in equivalent form,

e^{λk I t} e^{Ni t} = e^{λk t} ( I + Ni t + (1/2!) Ni² t² + ⋯ + (1/(ki − 1)!) Ni^{ki−1} t^{ki−1} )

We are concerned with determining e^{At} = P e^{J0 t} P⁻¹ when all eigenvalues of A occur in the Jordan form only in J0 and not in any of the Jordan blocks Ji for i = 1, 2, ⋯, m.
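Because Ni is nilpotent (Ni^{ki} = 0), the series for e^{Ni t} terminates, so e^{Ji t} can be computed exactly from finitely many terms. A sketch for a single 3 × 3 nilpotent block (i.e., a Jordan block with eigenvalue λ = 0, so the e^{λt} prefactor is 1):

```python
import math

def matmul(A, B, n):
    """Multiply two n x n matrices given as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def exp_nilpotent(N, t, n):
    """e^{N t} = I + N t + ... + N^{n-1} t^{n-1}/(n-1)!  (exact, since N^n = 0)."""
    E = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    P = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    for k in range(1, n):
        P = matmul(P, N, n)   # P = N^k
        for i in range(n):
            for j in range(n):
                E[i][j] += P[i][j] * t ** k / math.factorial(k)
    return E

N = [[0, 1, 0], [0, 0, 1], [0, 0, 0]]   # 3x3 nilpotent super-diagonal block
print(exp_nilpotent(N, 2.0, 3))
# → [[1.0, 2.0, 2.0], [0.0, 1.0, 2.0], [0.0, 0.0, 1.0]]  (entries 1, t, t^2/2 at t = 2)
```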
Definition 4.9.3. [3] There exists a nonsingular n × n matrix P such that P⁻¹AP = J is the so-called Jordan form, and if the eigenvalues of the matrix A are distinct, then J is in diagonal form with the eigenvalues as the diagonal elements; that is,

J = J0 = diag( λ1, λ2, ⋯, λn )

Definition 4.9.4. If A is an n × n matrix and v is a nonzero n × 1 vector such that, for some scalar λ,

Av = λv

then v is an eigenvector corresponding to the eigenvalue λ. Also, there exists an invertible n × n matrix P whose columns are a set of linearly independent eigenvectors of A, such that P⁻¹AP is a diagonal matrix similar to A with the eigenvalues of A as the diagonal entries.

Definition 4.9.5. [9] For a square matrix A, there exists an invertible n × n matrix P such that J = P⁻¹AP has the n eigenvalues of A on its diagonal.
Moreover, if A has distinct eigenvalues, then P can be constructed from the eigenvectors of A. Finally, since P is a nonsingular (invertible) constant n × n matrix, for every t we have

e^{P⁻¹APt} = P⁻¹ e^{At} P
Example 4.9.1. Consider the dynamical system

ẋ = A x(t) ,

where A is the square matrix

A = [ 0  −2 ]
    [ 2  −2 ]

Since

det(λI − A) = det [  λ     2   ] = λ² + 2λ + 4 = 0
                  [ −2   λ + 2 ]

A has eigenvalues λ = −1 ± i√3.
The eigenvector of −1 + i√3 is [2  1 − i√3]ᵀ.
Similarly, the eigenvector of −1 − i√3 is [2  1 + i√3]ᵀ.
Then, the invertible matrix

P = [    2          2     ]
    [ 1 − i√3    1 + i√3  ]

yields the diagonal form J:

J = P⁻¹AP = [ −1 + i√3       0      ]
            [     0      −1 − i√3   ]

Hence,

e^{At} = e^{PJP⁻¹ t} = P e^{Jt} P⁻¹
4.10 The Weighted Logarithmic Matrix Norm and Bounds of the Matrix Exponential [27]

Definition 4.10.1. For any real matrix A, the induced matrix norm, denoted by ||·||, and the logarithmic matrix norm, denoted by μ(A), are defined by

||A||2 = √( λMAX(AᵀA) )        (4.2)
μ2(A) = λMAX( (A + Aᵀ)/2 )        (4.3)

where A = PJP⁻¹ and λMAX stands for the maximal eigenvalue of a symmetric matrix.

Theorem 4.10.1. [27] For a real matrix A, we have

||e^{At}||2 ≤ β e^{μP[A] t}        (4.4)

where P is given by the Lyapunov equation, β = √( λmax(P)/λmin(P) ), and μP[A] = −1/λmax(P).
Example 4.10.1. Consider the Hurwitz stable matrix

A = [ −0.8   0.4   0.2 ]
    [  1    −3     2   ]
    [  0     1    −1   ]

On solving the Lyapunov equation, we obtain

P = [ 5.0333   3.0266   6.9401  ]
    [ 3.0266   2.5477   5.4324  ]
    [ 6.9401   5.3424  13.2528  ]

We have μ2(A) = 0.0359, the maximum real part of the eigenvalues of A is −0.0566, μP[A] = −0.0514, and β = 8.1966. According to Theorem (4.10.1), we have

||e^{At}||2 ≤ 8.1966 e^{−0.0514 t} ,  ∀ t ≥ 0
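The quantity μ2(A) from (4.3) is easy to compute for a small matrix; the sketch below uses a hypothetical 2 × 2 matrix (not the one from the example) and the quadratic formula for the eigenvalues of its symmetric part:

```python
import math

A = [[-1.0, 3.0], [0.0, -2.0]]   # hypothetical matrix, for illustration only
S = [[(A[i][j] + A[j][i]) / 2 for j in range(2)] for i in range(2)]  # (A + A^T)/2

tr = S[0][0] + S[1][1]
det = S[0][0] * S[1][1] - S[0][1] * S[1][0]
mu2 = (tr + math.sqrt(tr * tr - 4 * det)) / 2   # largest eigenvalue of the symmetric part
print(round(mu2, 4))  # → 0.0811
```

Note that μ2(A) can be positive even though both eigenvalues of A are negative: the logarithmic norm bounds the transient growth of ||e^{At}||, not the asymptotic decay rate.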
Definition 4.10.2. [28] Consider a matrix A that can be diagonalized such that

P⁻¹AP = J = diag( λ1, ⋯, λm )

where the λi are the eigenvalues of the matrix A; then

||e^{At}|| = ||P e^{Jt} P⁻¹|| ≤ ||P|| ||P⁻¹|| e^{λMax(J) t}
4.11 Continuous and Differentiable Functions

Definition 4.11.1 (Continuous Functions). [1] A function f mapping a set S1 into a set S2 is denoted by f : S1 → S2. A function f : Rⁿ → R^m is said to be continuous at a point x if, ∀ ε > 0, ∃ δ > 0 such that

|x − y| < δ  ⇒  |f(x) − f(y)| < ε

Definition 4.11.2 (Piecewise Continuous Functions). [1] A function f : R → Rⁿ is said to be piecewise continuous on an interval J ⊂ R if, for every bounded sub-interval J0 ⊂ J, f is continuous for all x ∈ J0 except at a finite number of points where f may have discontinuities. In addition, at each point of discontinuity x0, the right-hand limit lim_{h→0⁺} f(x0 + h) and the left-hand limit lim_{h→0⁺} f(x0 − h) exist.
Definition 4.11.3 (Differentiable Functions). [1] A function f : R → R is said to be differentiable at x if the limit

f′(x) = lim_{h→0} ( f(x + h) − f(x) ) / h

exists.

Definition 4.11.4. A function f : Rⁿ → R^m is said to be continuously differentiable at a point x0 if the partial derivatives ∂fi/∂xj exist and are continuous at x0, ∀ 1 ≤ i ≤ m, 1 ≤ j ≤ n.
For a continuously differentiable function f : Rⁿ → R, the row vector ∂f/∂x is defined by

∂f/∂x = [ ∂f/∂x1, ⋯, ∂f/∂xn ]

and the gradient vector, denoted by ∇f(x), is

∇f(x) = (∂f/∂x)ᵀ

For a continuously differentiable function f : Rⁿ → R^m, the Jacobian matrix [∂f/∂x] is an m × n matrix whose element in the i-th row and j-th column is ∂fi/∂xj.
4.12 Gronwall–Bellman Inequality
Lemma 4.12.1. [5] Let λ : [t₀, t₁] → R be continuous and μ : [t₀, t₁] → R be continuous and non-negative. If a continuous function y : [t₀, t₁] → R satisfies

y(t) ≤ λ(t) + ∫_{t₀}^{t} μ(s) y(s) ds,  ∀ t₀ ≤ t ≤ t₁   (4.5)

then, on the same interval,

y(t) ≤ λ(t) + ∫_{t₀}^{t} μ(s) λ(s) e^{∫_s^t μ(τ)dτ} ds

In particular, if λ(t) ≡ λ is a constant, then

y(t) ≤ λ e^{∫_{t₀}^{t} μ(τ)dτ}

Also, if in addition μ(t) ≡ μ ≥ 0 is a constant, then

y(t) ≤ λ e^{μ(t−t₀)}
Proof. The proof is based on defining a new variable and transforming the integral inequality into a differential equation, which can be solved easily. Motivated by (4.5), let

v(t) = ∫_{t₀}^{t} μ(s) y(s) ds

Thus,

v̇(t) = μ(t) y(t)   (4.6)

Multiplying both sides of (4.5) by μ(t) ≥ 0 yields

μ(t) y(t) ≤ μ(t) λ(t) + μ(t) ∫_{t₀}^{t} μ(s) y(s) ds = μ(t) λ(t) + μ(t) v(t)   (4.7)

Then, (4.6) and (4.7) lead to

v̇(t) ≤ μ(t) λ(t) + μ(t) v(t)   (4.8)

Let us now define

ζ(t) = v̇(t) − [μ(t) λ(t) + μ(t) v(t)]

which, in view of (4.8), is a non-positive function. Thus,

v̇(t) − μ(t) v(t) = ζ(t) + μ(t) λ(t)   (4.9)

Solving the linear equation (4.9) with the initial condition v(t₀) = 0 yields

v(t) = ∫_{t₀}^{t} [μ(τ) λ(τ) + ζ(τ)] e^{∫_τ^t μ(r)dr} dτ

Since ζ(t) is non-positive,

v(t) ≤ ∫_{t₀}^{t} μ(τ) λ(τ) e^{∫_τ^t μ(r)dr} dτ

Using the definition v(t) = ∫_{t₀}^{t} μ(s) y(s) ds,

∫_{t₀}^{t} μ(s) y(s) ds ≤ ∫_{t₀}^{t} μ(τ) λ(τ) e^{∫_τ^t μ(r)dr} dτ   (4.10)

Rearranging (4.5) gives

y(t) − λ(t) ≤ ∫_{t₀}^{t} μ(s) y(s) ds   (4.11)

Thus, combining (4.10) and (4.11) yields

y(t) ≤ λ(t) + ∫_{t₀}^{t} μ(τ) λ(τ) e^{∫_τ^t μ(r)dr} dτ
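The constant-coefficient case of the lemma can be checked numerically. In the sketch below (with arbitrary illustrative constants λ, μ, t₀) we take y(t) = λ e^{μ(t−t₀)}, which satisfies the integral relation (4.5) with equality, and confirm that it meets the Gronwall bound exactly:

```python
import math

# Sketch check of the constant-coefficient Gronwall bound: for
# y(t) = lam * exp(mu * (t - t0)), the integral inequality (4.5) holds
# with equality, and y attains the bound lam * exp(mu * (t - t0)).
# lam, mu, t0 are illustrative constants, not values from the text.

lam, mu, t0 = 2.0, 0.7, 0.0
y = lambda t: lam * math.exp(mu * (t - t0))

def integral(f, a, b, n=200000):
    h = (b - a) / n                       # midpoint rule
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

t = 3.0
rhs = lam + integral(lambda s: mu * y(s), t0, t)     # lam + int mu*y ds
assert abs(y(t) - rhs) < 1e-6                        # equality case of (4.5)
assert y(t) <= lam * math.exp(mu * (t - t0)) + 1e-9  # Gronwall bound
```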
4.13 Solutions of the State Equations
4.13.1 Solutions of Linear Time-Invariant (LTI) State Equations
Theorem 4.13.1. [10] Consider the LTI state equations

ẋ(t) = A x(t) + B u(t)   (4.12)
y(t) = C x(t) + D u(t)   (4.13)

where A ∈ R^{n×n}, B ∈ R^{n×p}, C ∈ R^{q×n}, and D ∈ R^{q×p} are constant matrices. Then the solutions

x(t) = e^{A(t−t₀)} x(t₀) + ∫_{t₀}^{t} e^{A(t−τ)} B u(τ) dτ

y(t) = C e^{A(t−t₀)} x(t₀) + C ∫_{t₀}^{t} e^{A(t−τ)} B u(τ) dτ + D u(t)

satisfy (4.12) and (4.13).
Proof. To find the solution of equations (4.12), (4.13) in terms of the initial state x(t₀) and the input u(t), we use the property

d/dt e^{At} = A e^{At} ≡ e^{At} A

together with the method of variation of parameters. Multiplying both sides of (4.12) by e^{−At} yields

e^{−At} ẋ(t) − e^{−At} A x(t) = e^{−At} B u(t)

which can be written as

d/dt [ e^{−At} x(t) ] = e^{−At} B u(t)

Integrating from t₀ to t, we obtain

e^{−Aτ} x(τ) |_{τ=t₀}^{τ=t} = ∫_{t₀}^{t} e^{−Aτ} B u(τ) dτ

Thus,

e^{−At} x(t) − e^{−At₀} x(t₀) = ∫_{t₀}^{t} e^{−Aτ} B u(τ) dτ

Hence,

x(t) = e^{A(t−t₀)} x(t₀) + ∫_{t₀}^{t} e^{A(t−τ)} B u(τ) dτ   (4.14)

Substituting (4.14) into (4.13) yields the solution of (4.13) as

y(t) = C e^{A(t−t₀)} x(t₀) + C ∫_{t₀}^{t} e^{A(t−τ)} B u(τ) dτ + D u(t)   (4.15)
Corollary 4.13.1. [10] For a continuously differentiable function f(t, τ),

∂/∂t ∫_{t₀}^{t} f(t, τ) dτ = ∫_{t₀}^{t} ∂f(t, τ)/∂t dτ + f(t, τ)|_{τ=t}   (4.16)

Using (4.16), we can verify that (4.14) satisfies (4.12). Differentiating (4.14) should recover the state equation (4.12):

ẋ(t) = d/dt [ e^{A(t−t₀)} x(t₀) + ∫_{t₀}^{t} e^{A(t−τ)} B u(τ) dτ ]   (4.17)

Applying (4.16) to the above equation yields

ẋ(t) = A e^{A(t−t₀)} x(t₀) + ∫_{t₀}^{t} A e^{A(t−τ)} B u(τ) dτ + e^{A(t−τ)} B u(τ)|_{τ=t}
     = A [ e^{A(t−t₀)} x(t₀) + ∫_{t₀}^{t} e^{A(t−τ)} B u(τ) dτ ] + e^{A·0} B u(t)

Substituting (4.14) yields

ẋ(t) = A x(t) + B u(t)
Definition 4.13.1. Let P ∈ R^{n×n} be a real non-singular matrix and let z = Px be an equivalence transformation. Then the state equations

ż(t) = Ā z(t) + B̄ u(t)
y(t) = C̄ z(t) + D̄ u(t)

are said to be equivalent to the state equations (4.12), (4.13), where

Ā = P A P⁻¹,  B̄ = P B,  C̄ = C P⁻¹,  D̄ = D

These are obtained from (4.12), (4.13) by substituting x(t) = P⁻¹ z(t) and ẋ(t) = P⁻¹ ż(t).
Lemma 4.13.1. Consider the LTI system

ẋ(t) = A x(t)

where x ∈ Rⁿ, A ∈ R^{n×n}. Then

x(t) = e^{A(t−t₀)} x₀

is the solution of the system with the initial condition x(t₀) = x₀, where e^{At} is an n × n matrix defined by its Taylor series.
Example 4.13.1. Consider the linear system

ẋ₁ = −x₁
ẋ₂ = 2x₂

This system can be written in the form ẋ = Ax where

A = [ −1 0 ; 0 2 ]   (4.18)

The general solution of the system (4.18) is given by

x₁(t) = c₁ e^{−t},  x₂(t) = c₂ e^{2t}

Alternatively,

x(t) = [ e^{−t} 0 ; 0 e^{2t} ] x₀ ≡ X̄(t) x₀
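For this decoupled example the matrix exponential is diagonal, so the solution can be coded directly. The sketch below (with an arbitrary illustrative initial state) evaluates x(t) = e^{At} x₀ componentwise and checks by finite differences that it satisfies ẋ = Ax:

```python
import math

# Example 4.13.1 in code: A = diag(-1, 2), so e^{At} = diag(e^{-t}, e^{2t})
# and x(t) = e^{At} x0 gives x1 = c1 e^{-t}, x2 = c2 e^{2t}.
# The initial state below is an arbitrary illustration.

def solve(x0, t):
    return [x0[0] * math.exp(-t), x0[1] * math.exp(2.0 * t)]

x0 = [3.0, -1.5]
xt = solve(x0, 1.0)

# sanity: central finite differences recover xdot = A x at t = 1
h = 1e-6
for i, a_ii in enumerate([-1.0, 2.0]):
    deriv = (solve(x0, 1.0 + h)[i] - solve(x0, 1.0 - h)[i]) / (2 * h)
    assert abs(deriv - a_ii * xt[i]) < 1e-4
```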
4.13.2 Solutions of Linear Time-Varying (LTV) State Equations
(A) LTV Homogeneous Equations:
Lemma 4.13.2. [11] Consider the LTV scalar equation

ẋ(t) = a(t) x(t)   (4.19)

where a(t) is a continuous function on [t₀, t]. Then

x(t) = e^{∫_{t₀}^{t} a(τ)dτ} x₀

is the solution of the homogeneous equation (4.19) satisfying the initial condition x(t₀) = x₀.
Lemma 4.13.3. [11] Consider the LTV equation

ẋ(t) = A(t) x(t)   (4.20)

where A(t) = (Aᵢⱼ(t)) is an n × n matrix with continuous entries on [t₀, t], and suppose that A(t) commutes with its integral ∫_{t₀}^{t} A(τ)dτ (as in the scalar case or when A is constant; without this assumption the exponential formula does not hold in general). Then

x(t) = e^{∫_{t₀}^{t} A(τ)dτ} x₀

is the solution of the homogeneous equation (4.20). If we set ρ(t) = e^{∫_{t₀}^{t} A(τ)dτ}, then the solution can be expressed as x(t) = ρ(t) x₀, the n-dimensional analogue of the scalar case, where x = [x₁ ··· xₙ]ᵀ and ρ(t) is a fundamental matrix satisfying the matrix ODE ρ̇(t) = A(t) ρ(t).
Corollary 4.13.2. In the case where A(t) = A is constant, we have the Taylor series representation of the matrix exponential:

ρ(t) = e^{A(t−t₀)} = I + A(t−t₀) + A² (t−t₀)²/2! + ···

which reduces to e^{At} when t₀ = 0.
(B) LTV Non-homogeneous Equations:
Theorem 4.13.2. [11] Consider the LTV state equation

ẋ(t) = a(t) x(t) + b(t)   (4.21)

where a(t), b(t) are continuous functions on [t₀, t]. Then

x(t) = e^{∫_{t₀}^{t} a(τ)dτ} [ x₀ + ∫_{t₀}^{t} e^{−∫_{t₀}^{τ} a(s)ds} b(τ) dτ ]

or equivalently,

x(t) = ρ(t) ρ(t₀)⁻¹ x₀ + ρ(t) ∫_{t₀}^{t} ρ⁻¹(τ) b(τ) dτ

or equivalently,

x(t) = ρ(t) x₀ + ρ(t) ∫_{t₀}^{t} ρ⁻¹(τ) b(τ) dτ

is the solution of (4.21) satisfying the initial condition x(t₀) = x₀, where ρ(t) = e^{∫_{t₀}^{t} a(τ)dτ} and ρ(t₀) = ρ(t₀)⁻¹ = 1.
Proof. Using the variation of parameters technique, we look for a solution of the form

x(t) = ρ(t) u(t)   (4.22)

Differentiating,

ẋ(t) = ρ(t) u̇(t) + ρ̇(t) u(t)   (4.23)

Equating (4.23) with (4.21) and using (4.22), we obtain

ρ(t) u̇(t) + ρ̇(t) u(t) = a(t) ρ(t) u(t) + b(t)

Choosing ρ so that

ρ̇(t) = a(t) ρ(t)  ⇒  ρ(t) = e^{∫_{t₀}^{t} a(τ)dτ}   (4.24)

and comparing term by term, we obtain

ρ(t) u̇(t) = b(t)  ⇒  u̇(t) = ρ⁻¹(t) b(t)

If the initial condition is x(t₀) = x₀, then

x(t₀) = ρ(t₀) u(t₀) = x₀

and since ρ(t₀) = e⁰ = 1 by (4.24), we get u(t₀) = x₀. Then

u(t) = x₀ + ∫_{t₀}^{t} ρ⁻¹(τ) b(τ) dτ   (4.25)

where ρ⁻¹(τ) = e^{−∫_{t₀}^{τ} a(s)ds}. Substituting (4.25) and (4.24) into (4.22) completes the proof.
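The closed form of Theorem 4.13.2 can be verified against a direct numerical integration. The sketch below uses the illustrative choice a(t) = 2t, b(t) = 1, x(0) = 1, for which ρ(t) = e^{t²}, and compares the formula with an RK4 solution of the ODE:

```python
import math

# Hedged check of Theorem 4.13.2 for a(t) = 2t, b(t) = 1, x(0) = 1:
# rho(t) = exp(t^2), so x(t) = exp(t^2) * (1 + int_0^t exp(-tau^2) dtau).
# The closed form is compared with an RK4 integration of the ODE.

a = lambda t: 2.0 * t
b = lambda t: 1.0

def quad(f, lo, hi, n=20000):
    h = (hi - lo) / n                     # midpoint rule
    return h * sum(f(lo + (i + 0.5) * h) for i in range(n))

def closed_form(t, x0=1.0):
    return math.exp(t * t) * (x0 + quad(lambda s: math.exp(-s * s), 0.0, t))

def rk4(t_end, x0=1.0, n=20000):
    f = lambda t, x: a(t) * x + b(t)
    h, t, x = t_end / n, 0.0, x0
    for _ in range(n):
        k1 = f(t, x); k2 = f(t + h/2, x + h*k1/2)
        k3 = f(t + h/2, x + h*k2/2); k4 = f(t + h, x + h*k3)
        x += h * (k1 + 2*k2 + 2*k3 + k4) / 6
        t += h
    return x

assert abs(closed_form(1.0) - rk4(1.0)) < 1e-6
```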
(C) LTV Non-homogeneous Equations with a control u(t):
Consider the LTV state equation

ẋ(t) = A(t) x(t),  A(t) ∈ R^{n×n}   (4.26)

Assume there is a unique solution xᵢ(t) for every initial state xᵢ(t₀), i = 1, ···, n. Thus, there are n solutions corresponding to these initial conditions, which can be formed as the columns of the square matrix ρ(t) = [x₁(t) x₂(t) ··· xₙ(t)] of order n. Since every solution xᵢ(t) satisfies (4.26), we can write

ρ̇(t) = A(t) ρ(t)   (4.27)

The matrix ρ(t) is called a fundamental solution matrix if the n initial states are linearly independent (equivalently, if ρ(t₀) is non-singular). Since we can choose arbitrary linearly independent initial states, the fundamental matrix is not unique.
Definition 4.13.2. [12] Let ρ(t) be any fundamental matrix of the homogeneous system ẋ = A(t)x, satisfying ρ̇ = A(t)ρ(t). Then

Φ(t, t₀) ≝ ρ(t) ρ⁻¹(t₀)

is called the state transition matrix of ẋ(t) = A(t)x(t). The state transition matrix is also the unique solution of

∂/∂t Φ(t, t₀) = A(t) Φ(t, t₀)   (4.28)

with the initial condition Φ(t₀, t₀) = I. Moreover,
• Φ(t, t) = I
• Φ⁻¹(t, t₀) = [ρ(t) ρ⁻¹(t₀)]⁻¹ = ρ(t₀) ρ⁻¹(t) = Φ(t₀, t)
• Φ(t, t₀) = Φ(t, t₁) Φ(t₁, t₀)
Theorem 4.13.3. [12] Consider the LTV state equation with a control u(t),

ẋ(t) = A(t) x(t) + B(t) u(t)   (4.29)

where A(t) is continuous (continuity of all entries aᵢⱼ(t) of A(t) is a sufficient condition for the existence of unique solutions) and B(t) is a continuous n × p matrix, ∀ t ∈ [t₀, t]. Then

x(t) = ρ(t) ρ⁻¹(t₀) x₀ + ∫_{t₀}^{t} ρ(t) ρ⁻¹(τ) B(τ) u(τ) dτ

or equivalently,

x(t) = Φ(t, t₀) x₀ + ∫_{t₀}^{t} Φ(t, τ) B(τ) u(τ) dτ   (4.30)

In the case A(t) = A [9],

x(t) = Φ(t, t₀) x₀ + ∫_{t₀}^{t} e^{A(t−τ)} B(τ) u(τ) dτ

or equivalently [10],

x(t) = Φ(t, t₀) x₀ + ∫_{t₀}^{t} Φ(t − τ) B(τ) u(τ) dτ

is the solution of (4.29) satisfying the initial condition x(t₀) = x₀, where Φ is the state transition matrix satisfying the initial condition Φ(t₀, t₀) = I. If A(t) = A, then Φ(t, τ) = e^{A(t−τ)}.
Proof. [11] Applying the variation of parameters method, we look for a solution of the form

x(t) = ρ(t) M(t)

Differentiating,

ẋ = ρ Ṁ + ρ̇ M

Since ẋ(t) = A(t) x(t) + B(t) u(t), we have

ρ Ṁ + ρ̇ M = A(t) ρ(t) M(t) + B(t) u(t)

Comparing term by term yields

ρ̇(t) = A(t) ρ(t)  ⇒  ρ(t) = e^{∫_{t₀}^{t} A(τ) dτ}   (4.31)
ρ Ṁ = B(t) u(t)  ⇒  Ṁ = ρ⁻¹ B(t) u(t)   (4.32)

(the exponential expression in (4.31) is valid when A(t) commutes with its integral; in general ρ is the fundamental matrix defined by (4.27)). Also,

x(t₀) = ρ(t₀) M(t₀) = x₀

which means that M(t₀) = x₀ and ρ(t₀) = e^{∫_{t₀}^{t₀} A(τ)dτ} = e⁰ = I. Now, the solution of (4.32) is

M(t) = M(t₀) + ∫_{t₀}^{t} ρ⁻¹(τ) B(τ) u(τ) dτ

Hence,

x(t) = ρ(t) M(t)
     = ρ(t) x₀ + ρ(t) ∫_{t₀}^{t} ρ⁻¹(τ) B(τ) u(τ) dτ
     = ρ(t) x₀ + ∫_{t₀}^{t} ρ(t) ρ⁻¹(τ) B(τ) u(τ) dτ
     = Φ(t, t₀) x₀ + ∫_{t₀}^{t} Φ(t, τ) B(τ) u(τ) dτ   (4.33)

In the constant case A(t) = A this becomes

x(t) = e^{A(t−t₀)} x₀ + ∫_{t₀}^{t} e^{A(t−t₀)} e^{−A(τ−t₀)} B(τ) u(τ) dτ
     = e^{A(t−t₀)} x₀ + ∫_{t₀}^{t} e^{A(t−τ)} B(τ) u(τ) dτ
     = Φ(t, t₀) x₀ + ∫_{t₀}^{t} Φ(t − τ) B(τ) u(τ) dτ

Moreover, in the commuting case,

ρ(t) ρ⁻¹(τ) = e^{∫_{t₀}^{t} A(s) ds} e^{−∫_{t₀}^{τ} A(s) ds} = e^{∫_{τ}^{t} A(s) ds} = Φ(t, τ)

Also, if A(t) = A is constant [10], we have

Φ(t, τ) = e^{A(t−τ)} = Φ(t − τ)

where ρ(t) is the fundamental matrix and Φ(t, τ) is the state transition matrix, i.e. the solution of

∂/∂t Φ(t, t₀) = A(t) Φ(t, t₀)

satisfying the initial condition Φ(t₀, t₀) = I.
Corollary 4.13.3. [10] Consider the homogeneous state equation

ẋ(t) = A(t) x(t),  x(t₀) = x₀   (4.34)

If the input is identically zero, then according to (4.30) of Theorem 4.13.3,

x(t) = Φ(t, t₀) x₀

is the solution of (4.34).
Proof. [12] A fundamental solution matrix of the LTV state equation ẋ = A(t)x satisfies

ρ̇(t) = A(t) ρ(t)

so the candidate solution of (4.34) with an arbitrary initial condition x(t₀) = x₀ is

x(t) = ρ(t) ρ(t₀)⁻¹ x₀

Indeed, differentiating gives

ẋ(t) = ρ̇(t) ρ(t₀)⁻¹ x₀ = A(t) ρ(t) ρ(t₀)⁻¹ x₀ = A(t) x(t)

and x(t₀) = ρ(t₀) ρ(t₀)⁻¹ x₀ = x₀. Thus we conclude that

x(t) = ρ(t) ρ(t₀)⁻¹ x₀ = Φ(t, t₀) x₀
4.13.3 Computing the Transition Matrix via the Peano–Baker Series
Since computing the state transition matrix Φ(t, t₀) is generally difficult, it is convenient to define the transition matrix Φ(t, τ) by the Peano–Baker series [9]:

Φ(t, τ) = I + ∫_τ^t A(σ₁) dσ₁ + ∫_τ^t A(σ₁) ∫_τ^{σ₁} A(σ₂) dσ₂ dσ₁
          + ∫_τ^t A(σ₁) ∫_τ^{σ₁} A(σ₂) ∫_τ^{σ₂} A(σ₃) dσ₃ dσ₂ dσ₁ + ···   (4.35)

To see how this result is obtained, consider equation (4.34) with arbitrary initial time t₀, initial state x₀, and arbitrary T > 0. We construct a sequence of n × 1 vector functions {x_k}_{k=0}^∞, defined on the interval [t₀, t₀ + T], as a sequence of approximate solutions of (4.34).
The sequence of approximations is defined in an iterative manner as

x₀(t) = x₀
x₁(t) = x₀ + ∫_{t₀}^{t} A(σ₁) x₀(σ₁) dσ₁
x₂(t) = x₀ + ∫_{t₀}^{t} A(σ₁) x₁(σ₁) dσ₁
⋮
x_k(t) = x₀ + ∫_{t₀}^{t} A(σ₁) x_{k−1}(σ₁) dσ₁   (4.36)

This iteration can be unrolled as

x_k(t) = x₀ + ∫_{t₀}^{t} A(σ₁) x₀ dσ₁ + ∫_{t₀}^{t} A(σ₁) ∫_{t₀}^{σ₁} A(σ₂) x₀ dσ₂ dσ₁
         + ··· + ∫_{t₀}^{t} A(σ₁) ∫_{t₀}^{σ₁} A(σ₂) ··· ∫_{t₀}^{σ_{k−1}} A(σ_k) x₀ dσ_k ··· dσ₁   (4.37)

Taking the limit of the sequence as k → ∞,

x(t) = x₀ + ∫_{t₀}^{t} A(σ₁) x₀ dσ₁ + ∫_{t₀}^{t} A(σ₁) ∫_{t₀}^{σ₁} A(σ₂) x₀ dσ₂ dσ₁
       + ··· + ∫_{t₀}^{t} A(σ₁) ∫_{t₀}^{σ₁} A(σ₂) ··· ∫_{t₀}^{σ_{k−1}} A(σ_k) x₀ dσ_k ··· dσ₁ + ···   (4.38)

Factoring out x₀, we obtain

x(t) = [ I + ∫_{t₀}^{t} A(σ₁) dσ₁ + ∫_{t₀}^{t} A(σ₁) ∫_{t₀}^{σ₁} A(σ₂) dσ₂ dσ₁
        + ··· + ∫_{t₀}^{t} A(σ₁) ∫_{t₀}^{σ₁} A(σ₂) ··· ∫_{t₀}^{σ_{k−1}} A(σ_k) dσ_k ··· dσ₁ + ··· ] x₀

and denoting the n × n matrix series on the right-hand side by Φ(t, t₀), the solution can be written as

x(t) = Φ(t, t₀) x₀
Theorem 4.13.4. [9] For any t₀ and x₀, the linear state equation ẋ(t) = A(t)x with A(t) continuous has the unique solution

x(t) = Φ(t, t₀) x₀

where the transition matrix Φ(t, t₀) is given by the Peano–Baker series, which converges uniformly for t, τ ∈ [−T, T], where T > 0 is arbitrary.
Example 4.13.2. Consider the LTI state equation ẋ = Ax for a scalar A = a, using the approximation sequence in (4.36). The generated iterates are:

x₀(t) = x₀
x₁(t) = x₀ + a x₀ (t − t₀)/1!
x₂(t) = x₀ + a x₀ (t − t₀)/1! + a² x₀ (t − t₀)²/2!
⋮
x_k(t) = [ 1 + a (t − t₀)/1! + ··· + a^k (t − t₀)^k/k! ] x₀

Taking the limit of the sequence, we get the solution

x(t) = e^{a(t−t₀)} x₀
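The partial sums above are just truncated Taylor series of e^{a(t−t₀)}, so the iteration's convergence is easy to observe in code. The sketch below (with illustrative values of a and x₀) evaluates the k-th iterate and checks that the error to the exact solution shrinks as k grows:

```python
import math

# The iterates of Example 4.13.2 are truncated Taylor series of
# exp(a (t - t0)). a, t0, x0 below are illustrative values.

a, t0, x0 = -1.5, 0.0, 2.0

def x_k(k, t):
    s, term = 1.0, 1.0
    for j in range(1, k + 1):             # 1 + sum a^j (t - t0)^j / j!
        term *= a * (t - t0) / j
        s += term
    return s * x0

exact = x0 * math.exp(a * 1.0)
errs = [abs(x_k(k, 1.0) - exact) for k in (2, 5, 10, 20)]
assert errs == sorted(errs, reverse=True)     # errors shrink with k here
assert errs[-1] < 1e-12                       # 20 terms: essentially exact
```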
Example 4.13.3. Consider

A(t) = [ 0 t ; 0 0 ]

The Peano–Baker series is

Φ(t, τ) = [ 1 0 ; 0 1 ] + ∫_τ^t [ 0 σ₁ ; 0 0 ] dσ₁ + ∫_τ^t [ 0 σ₁ ; 0 0 ] ∫_τ^{σ₁} [ 0 σ₂ ; 0 0 ] dσ₂ dσ₁ + ···

Since A(σ₁)A(σ₂) = 0, the third term and beyond are zero. Thus,

Φ(t, τ) = [ 1  ½(t² − τ²) ; 0  1 ]
Example 4.13.4. Consider the following scalar equations:

ẋ₁(t) = x₁(t),  x₁(t₀) = x₁₀   (4.39)
ẋ₂(t) = a(t) x₂(t) + x₁(t),  x₂(t₀) = x₂₀   (4.40)

which can be written as

ẋ = A(t) x,  with A(t) = [ 1 0 ; 1 a(t) ]

The solution of (4.39) is

x₁(t) = e^{t−t₀} x₁₀

The second scalar equation (4.40) can then be written as

ẋ₂(t) = a(t) x₂(t) + e^{t−t₀} x₁₀

which is identical in form to the forced scalar state equation (4.29) with B(t)u(t) = e^{t−t₀} x₁₀. The transition matrix for the scalar a(t) is computed from the solution of the homogeneous equation (4.19):

Φ(t, t₀) = e^{∫_{t₀}^{t} a(τ)dτ}

Applying the general solution (4.30) gives

x(t) = [ e^{t−t₀}  0 ; ∫_{t₀}^{t} e^{∫_τ^t a(σ)dσ} e^{τ−t₀} dτ  e^{∫_{t₀}^{t} a(τ)dτ} ] x₀
Property 4.13.1. [9] If A(t) = A is a constant matrix, then the k-th term of the Peano–Baker series becomes

∫_{t₀}^{t} A(σ₁) ∫_{t₀}^{σ₁} A(σ₂) ··· ∫_{t₀}^{σ_{k−1}} A(σ_k) x₀ dσ_k ··· dσ₁
  = A^k ∫_{t₀}^{t} ∫_{t₀}^{σ₁} ··· ∫_{t₀}^{σ_{k−1}} 1 dσ_k ··· dσ₁ x₀
  = A^k (t − t₀)^k / k! x₀   (4.41)
Property 4.13.2. [9] If A(t) = A, an n × n constant matrix, then the transition matrix is

Φ(t, τ) = e^{A(t−τ)}

where the matrix exponential is defined by the power series

e^{At} = Σ_{k=0}^{∞} (1/k!) A^k t^k

which converges uniformly on [−T, T], where T > 0 is arbitrary.
Example 4.13.5. Consider the matrix

A(t) = [ a(t) a(t) ; 0 0 ]

where a(t) is a continuous scalar function. Since

∫_τ^t A(σ) dσ = [ ∫_τ^t a(σ)dσ  ∫_τ^t a(σ)dσ ; 0  0 ]

and A(t) commutes with its integral, the transition matrix is the exponential of this integral:

Φ(t, τ) = [ e^{∫_τ^t a(σ)dσ}  e^{∫_τ^t a(σ)dσ} − 1 ; 0  1 ]
4.14 Solutions of Discrete Dynamical Systems
4.14.1 Discretization of Continuous-Time Equations
(A) Weak Discretization Method
Theorem 4.14.1. [9] Consider the continuous-time state equations

ẋ(t) = A x(t) + B u(t)   (4.42)
y(t) = C x(t) + D u(t)   (4.43)

and let {t₀, t₁, ···, t_k, t_{k+1}, ···} be a set of discrete time points with uniform step T = t_{k+1} − t_k. Then the discrete-time state-space equations obtained from (4.42) and (4.43) are

x[k+1] = (I + TA) x[k] + TB u[k]
y[k] = C x[k] + D u[k]

where x(t) and y(t) are computed only at t = kT, k = 0, 1, ···, and

x(t) ≝ x(kT) = x[k]  for all kT ≤ t < (k+1)T.

Proof. Since

ẋ(t) = lim_{T→0} [x(t + T) − x(t)] / T

we can approximate

x(t + T) ≈ x(t) + T ẋ(t)

Using this idea to approximate (4.42),

x(t + T) ≈ x(t) + T A x(t) + T B u(t)   (4.44)

If we compute x(t) only at t = kT, k = 0, 1, ···, then (4.42) and (4.43) become

x((k+1)T) = (I + TA) x(kT) + TB u(kT)
y(kT) = C x(kT) + D u(kT)

Defining x(kT) = x[k] for all kT ≤ t < (k+1)T, the above equations can be written as

x[k+1] = (I + TA) x[k] + TB u[k]   (4.45)
y[k] = C x[k] + D u[k]   (4.46)
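The Euler recursion (4.45) can be sketched directly. The example below (with illustrative scalar values of a, b, a constant input u, and x₀ = 0) compares x[k] at a fixed time against the exact continuous-time solution and shows the error shrinking roughly linearly with T, as expected of a first-order scheme:

```python
import math

# Weak (Euler) discretization sketch for the scalar system
# xdot = a x + b u with constant u: x[k+1] = (1 + T a) x[k] + T b u.
# a, b, u, x0, t_end are illustrative constants.

a, b, u, x0, t_end = -2.0, 1.0, 1.0, 0.0, 1.0

def euler_state(T):
    k_steps = round(t_end / T)
    x = x0
    for _ in range(k_steps):
        x = (1 + T * a) * x + T * b * u
    return x

# exact solution of xdot = a x + b u at t_end: here 0.5 * (1 - e^{-2})
exact = (x0 + b * u / a) * math.exp(a * t_end) - b * u / a
errs = [abs(euler_state(T) - exact) for T in (0.1, 0.01, 0.001)]
assert errs[0] > errs[1] > errs[2]        # error decreases with T
assert errs[2] < 1e-3
```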
(B) Accurate Discretization Method
Theorem 4.14.2. [9] Consider the continuous-time state equations

ẋ(t) = A x(t) + B u(t)   (4.47)
y(t) = C x(t) + D u(t)   (4.48)

and let {t₀, t₁, ···, t_k, t_{k+1}, ···} be a set of discrete time points with uniform step T = t_{k+1} − t_k. Then the discrete-time state-space equations obtained from (4.47) and (4.48) are

x[k+1] = A_d x[k] + B_d u[k]   (4.49)
y[k] = C_d x[k] + D_d u[k]   (4.50)

where A_d = e^{AT}, B_d = ∫₀^T e^{Aτ} dτ B, C_d = C, and D_d = D. Moreover, x(t) and y(t) are computed only at t = kT, k = 0, 1, ···, and x(t) ≝ x(kT) = x[k] for all kT ≤ t < (k+1)T.
Proof. The solution of (4.47) is

x(t) = Φ(t) x₀ + ∫₀^t Φ(t − τ) B u(τ) dτ

with Φ(t) = e^{At} and t₀ = 0. Computing this solution at t = kT yields

x[k] = x(kT) = e^{AkT} x₀ + ∫₀^{kT} e^{A(kT−τ)} B u(τ) dτ   (4.51)

Again, computing at t = (k+1)T yields

x[k+1] = x((k+1)T) = e^{A(k+1)T} x₀ + ∫₀^{(k+1)T} e^{A((k+1)T−τ)} B u(τ) dτ   (4.52)

Equation (4.52) can be written as

x[k+1] = e^{AT} [ e^{AkT} x₀ + ∫₀^{kT} e^{A(kT−τ)} B u(τ) dτ ] + ∫_{kT}^{(k+1)T} e^{A((k+1)T−τ)} B u(τ) dτ   (4.53)

Substituting (4.51) into (4.53), using u(τ) = u[k] on [kT, (k+1)T), and changing variables with s = (k+1)T − τ yields

x[k+1] = e^{AT} x[k] + [ ∫₀^T e^{As} ds ] B u[k]   (4.54)

Alternatively, we can write (4.54) as

x[k+1] = A_d x[k] + B_d u[k]
y[k] = C_d x[k] + D_d u[k]

where A_d = e^{AT}, B_d = ∫₀^T e^{Aτ} dτ B, C_d = C, and D_d = D.
Lemma 4.14.1. If A is a non-singular matrix, then B_d = ∫₀^T e^{Aτ} dτ B can be equivalently replaced by

B_d = A⁻¹ (A_d − I) B

Proof. We have

∫₀^T e^{Aτ} dτ = ∫₀^T [ I + Aτ + A² τ²/2! + ··· ] dτ = TI + (T²/2!) A + (T³/3!) A² + ···

If A is non-singular, then

TI + (T²/2!) A + (T³/3!) A² + ··· = A⁻¹ [ TA + (T²/2!) A² + (T³/3!) A³ + ··· + I − I ] = A⁻¹ (e^{AT} − I)

Thus,

B_d = A⁻¹ (A_d − I) B
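Lemma 4.14.1 is straightforward to verify numerically. The sketch below (with an illustrative non-singular 2×2 matrix A, input vector B, and step T) builds truncated series for e^{AT} and ∫₀^T e^{Aτ}dτ and checks that B_d from the series matches the closed form A⁻¹(A_d − I)B:

```python
# Numerical check of Lemma 4.14.1 on an illustrative nonsingular matrix:
# Bd from the truncated series int_0^T e^{A tau} dtau B must agree with
# the closed form A^{-1}(e^{AT} - I)B.

A = [[0.0, 1.0], [-2.0, -3.0]]       # det = 2, so A is invertible
B = [1.0, 0.0]
T = 0.1

def mat_mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mat_vec(X, v):
    return [sum(X[i][k] * v[k] for k in range(2)) for i in range(2)]

def mat_add(X, Y, s=1.0):            # X + s*Y
    return [[X[i][j] + s * Y[i][j] for j in range(2)] for i in range(2)]

I = [[1.0, 0.0], [0.0, 1.0]]

# truncated series for e^{AT} and int_0^T e^{A tau} dtau
expAT = I
intAT = [[T, 0.0], [0.0, T]]
Ak, coeff_e = I, 1.0
for k in range(1, 25):
    Ak = mat_mul(Ak, A)
    coeff_e *= T / k                 # T^k / k!
    coeff_i = coeff_e * T / (k + 1)  # T^{k+1} / (k+1)!
    expAT = mat_add(expAT, Ak, coeff_e)
    intAT = mat_add(intAT, Ak, coeff_i)

Bd_series = mat_vec(intAT, B)

det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
A_inv = [[A[1][1] / det, -A[0][1] / det], [-A[1][0] / det, A[0][0] / det]]
Bd_closed = mat_vec(mat_mul(A_inv, mat_add(expAT, I, -1.0)), B)

assert all(abs(x - y) < 1e-12 for x, y in zip(Bd_series, Bd_closed))
```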
4.14.2 Solution of Discrete-Time Equations
Theorem 4.14.3. [9] Consider the discrete-time state equations

x[k+1] = A_d x[k] + B_d u[k]
y[k] = C_d x[k] + D_d u[k]

Then the general solutions of the two equations are (writing A = A_d, B = B_d, C = C_d, D = D_d):

x[k] = A^k x[0] + Σ_{i=0}^{k−1} A^{k−1−i} B u[i]
y[k] = C A^k x[0] + Σ_{i=0}^{k−1} C A^{k−1−i} B u[i] + D u[k]

Alternatively,

x[k] = Φ[k, 0] x[0] + Σ_{i=1}^{k} Φ(k, i) B u(i−1)

Proof. Since x[k+1] = A x[k] + B u[k], computing for k = 0, 1, 2, ··· yields

x[1] = A x[0] + B u[0]
x[2] = A x[1] + B u[1] = A² x[0] + A B u[0] + B u[1]
⋮
x[k] = A^k x[0] + Σ_{i=0}^{k−1} A^{k−1−i} B u[i]

A change in the summation index allows us to rewrite the solution as

x[k] = A^k x[0] + Σ_{i=1}^{k} A^{k−i} B u(i−1)

Whenever A is constant, the discrete transition matrix is given by

Φ[k, i] = A^{k−i}

Thus,

x[k] = Φ[k, 0] x[0] + Σ_{i=1}^{k} Φ(k, i) B u(i−1)
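The closed-form sum of Theorem 4.14.3 can be checked against the recursion itself. The sketch below uses illustrative scalar values A_d = a, B_d = b and an arbitrary input sequence u:

```python
# Sanity check of Theorem 4.14.3 for scalar Ad = a, Bd = b: the recursion
# x[k+1] = a x[k] + b u[k] must match the closed form
# x[k] = a^k x[0] + sum_{i=0}^{k-1} a^{k-1-i} b u[i].
# a, b, x0 and the input sequence u are illustrative values.

a, b, x0 = 0.9, 0.5, 1.0
u = [1.0, -2.0, 0.5, 3.0, 0.0, 1.0]

x = x0
for k in range(len(u)):
    x = a * x + b * u[k]                 # step the recursion

k = len(u)
closed = a**k * x0 + sum(a**(k - 1 - i) * b * u[i] for i in range(k))
assert abs(x - closed) < 1e-12
```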
Corollary 4.14.1. Consider the time-varying discrete-time state equations

x[k+1] = A_d[k] x[k] + B_d[k] u[k]
y[k] = C_d[k] x[k] + D_d[k] u[k]

Then the general solutions with initial state x[k₀] = x₀ and input u[k] for all k ≥ k₀ are given by

x[k] = Φ[k, k₀] x₀ + Σ_{i=k₀}^{k−1} Φ[k, i+1] B[i] u[i]
y[k] = C[k] Φ[k, k₀] x₀ + C[k] Σ_{i=k₀}^{k−1} Φ[k, i+1] B[i] u[i] + D[k] u[k]

where

Φ[k, k₀] = A[k−1] A[k−2] ··· A[k₀]

for k > k₀, and Φ[k₀, k₀] = I.
4.15 Solutions of Van der Pol's Equation
4.15.1 Parameter Perturbation Theory
Consider the system

ẋ(t) = f(t, x, ε),  x(t₀) = μ(ε)   (4.55)

where f : [t₀, t₁] × D × [−ε₀, ε₀] → Rⁿ, and the initial state is allowed to depend on ε. The solution of (4.55) depends on the parameter ε, which we emphasize by writing the solution as x(t, ε). Our goal is to construct approximate solutions that are valid for sufficiently small perturbations |ε|. The simplest approximation is produced by setting ε = 0, to obtain the unperturbed equation

ẋ(t) = f(t, x, 0),  x(t₀) = μ(0)   (4.56)

Equation (4.56) can now be solved more readily. One then seeks an approximate solution for small ε in powers of ε; that is,

x(t, ε) = x₀(t) + ε x₁(t) + ε² x₂(t) + ···   (4.57)

where x₀(t) is the solution of the unperturbed equation (4.56) for ε = 0, and x₁, x₂, ···, xₙ are ε-independent functions that are determined by substituting the expansion (4.57) into (4.55) and collecting the coefficients of like powers of ε [24].
4.15.2 Solution of the Van der Pol Equation via the Parameter Perturbation Method [24]
Consider Van der Pol's equation

ẍ − ε(1 − x²) ẋ + x = 0   (4.58)

i.e., ẍ + x = ε(1 − x²)ẋ. For ε = 0, we obtain the unperturbed equation

ẍ + x = 0   (4.59)

with the general solution

x = a cos(t + θ)   (4.60)

where a, θ are constants. To determine an approximate solution of (4.58) for small ε, we seek a perturbation expansion of the form

x(t, ε) = x₀(t) + ε x₁(t) + ε² x₂(t) + ···   (4.61)

Substituting (4.61) into (4.58) yields

d²x₀/dt² + x₀ + ε (d²x₁/dt² + x₁) + ε² (d²x₂/dt² + x₂) + ···
  = ε [ 1 − (x₀ + ε x₁ + ε² x₂ + ···)² ] d/dt (x₀ + ε x₁ + ε² x₂ + ···)   (4.62)

Expanding the right-hand side, we obtain

d²x₀/dt² + x₀ + ε (d²x₁/dt² + x₁) + ε² (d²x₂/dt² + x₂) + ···
  = ε (1 − x₀²) dx₀/dt + ε² [ (1 − x₀²) dx₁/dt − 2 x₀ x₁ dx₀/dt ] + ···   (4.63)

Collecting the coefficients of like powers of ε on both sides and equating yields:

• Coefficient of ε⁰:

d²x₀/dt² + x₀ = 0   (4.64)

• Coefficient of ε¹:

d²x₁/dt² + x₁ = (1 − x₀²) dx₀/dt   (4.65)

• Coefficient of ε²:

d²x₂/dt² + x₂ = (1 − x₀²) dx₁/dt − 2 x₀ x₁ dx₀/dt   (4.66)

Substituting the unperturbed solution (4.60) of equation (4.64) into (4.65), we have

d²x₁/dt² + x₁ = −(1 − a² cos²(t + θ)) a sin(t + θ)   (4.67)

Since

cos²(t + θ) sin(t + θ) = [ sin(t + θ) + sin 3(t + θ) ] / 4

equation (4.67) can be written as

d²x₁/dt² + x₁ = [(a³ − 4a)/4] sin(t + θ) + (a³/4) sin 3(t + θ)   (4.68)

Its particular solution is

x₁ = −[(a³ − 4a)/8] t cos(t + θ) − (a³/32) sin 3(t + θ)   (4.69)

In a similar fashion, we can find x₂(t), x₃(t), ···, and substitute them into (4.61) to obtain the approximate solution of the Van der Pol equation.
Figure 4.1: (a) Basic oscillator circuit; (b) typical nonlinear driving-point characteristic.
Example 4.15.1. Figure 4.1 shows the basic circuit structure of an important class of electronic oscillators. The inductor and capacitor are assumed to be linear, time-invariant and passive, that is, L > 0 and C > 0. The resistive element is an active circuit characterized by the voltage-controlled i–v characteristic i = h(v). The function h(·) satisfies the conditions

h(0) = 0,  h′(0) < 0
h(v) → ∞ as v → ∞,  and  h(v) → −∞ as v → −∞

where h′(v) is the first derivative of h(v) with respect to v. Such an i–v characteristic can be realized by the tunnel-diode circuits of Figure 4.2.
Figure 4.2: (a) Basic oscillator circuit; (b) typical nonlinear driving-point characteristic.
Using Kirchhoff's current law, we can write the equation

i_C + i_L + i = 0

Hence,

C dv/dt + (1/L) ∫_{−∞}^{t} v(s) ds + h(v) = 0

where

i_L = (1/L) ∫_{−∞}^{t} v(s) ds

Differentiating once with respect to t and multiplying through by L, we obtain

CL d²v/dt² + v + L h′(v) dv/dt = 0

This equation can be written in a form that is well known in nonlinear systems theory. Let us change the time variable from t to τ = t/√(LC). The derivatives of v with respect to t and τ are related by

dv/dτ = √(LC) dv/dt,  d²v/dτ² = LC d²v/dt²

Setting v̇ ≡ dv/dτ, we can write the circuit equation as

v̈ + ε h′(v) v̇ + v = 0,  ε = √(L/C)

When

h(v) = −v + (1/3) v³

the circuit equation takes the form

v̈ − ε(1 − v²) v̇ + v = 0

which is known as the Van der Pol equation. This equation was used by Van der Pol to study oscillations in vacuum-tube circuits and is a fundamental example in nonlinear oscillation theory. Writing the equation in state-space form,

ẋ₁ = x₂
ẋ₂ = −x₁ − ε h′(x₁) x₂

The system has only one equilibrium point, at x₁ = x₂ = 0. The Jacobian matrix at the point (0, 0) is given by

A = ∂f/∂x |_{x=0} = [ 0 1 ; −1 −ε h′(0) ]

Since h′(0) < 0, the origin is either an unstable node or an unstable focus, depending on the value of ε h′(0).
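The instability of the origin together with the nonlinear damping produces a limit cycle. The sketch below (with an illustrative ε and initial state, and h(v) = −v + v³/3 as above) integrates the state-space form with RK4; for small ε the trajectory settles onto an oscillation whose amplitude is close to 2, consistent with the perturbation analysis:

```python
# Illustrative RK4 simulation of the Van der Pol system
# x1' = x2, x2' = -x1 + eps*(1 - x1^2)*x2  (i.e. h(v) = -v + v^3/3).
# eps and the initial state are arbitrary choices; for small eps the
# limit-cycle amplitude is close to 2.

eps = 0.2

def f(x):
    return [x[1], -x[0] + eps * (1.0 - x[0] ** 2) * x[1]]

def rk4_step(x, h):
    k1 = f(x)
    k2 = f([x[i] + h * k1[i] / 2 for i in range(2)])
    k3 = f([x[i] + h * k2[i] / 2 for i in range(2)])
    k4 = f([x[i] + h * k3[i] for i in range(2)])
    return [x[i] + h * (k1[i] + 2*k2[i] + 2*k3[i] + k4[i]) / 6
            for i in range(2)]

x, h = [0.1, 0.0], 0.01
peak = 0.0
for step in range(60000):                # integrate to t = 600
    x = rk4_step(x, h)
    if step > 40000:                     # after transients, track amplitude
        peak = max(peak, abs(x[0]))

assert 1.8 < peak < 2.2                  # limit-cycle amplitude near 2
```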
4.16 Limit Cycles
Oscillation is one of the most important phenomena that occur in dynamical systems. A system oscillates when it has a periodic solution

x(t + T) = x(t),  ∀ t ≥ 0

for some T > 0.
Definition 4.16.1. [4] Consider a dynamical system of the form

ẋ = f(x(t)),  x(0) = x₀,  t ∈ R   (4.70)

A solution s(t, x₀) of (4.70) is periodic if there exists a finite time T > 0 such that s(t, x₀) = s(t + T, x₀) for all t ∈ R.
As we have seen in Example 1.6.3 and in case 2 of Section 1.1.4, for a second-order linear system with eigenvalues ±jβ the origin is a center, and the trajectories are closed orbits. When the system is transformed into its real Jordan form, the solution is given by

z₁(t) = r₀ cos(βt + θ₀),  z₂(t) = r₀ sin(βt + θ₀)

where

r₀ = √(z₁²(0) + z₂²(0)),  θ₀ = tan⁻¹(z₂(0)/z₁(0))

Therefore, the system has a sustained oscillation of amplitude r₀. It is usually referred to as the harmonic oscillator.
Example 4.16.1. Consider the nonlinear dynamical system

ẋ₁(t) = −x₂(t) + x₁(t)(1 − x₁²(t) − x₂²(t)),  x₁(0) = x₁₀,  t ≥ 0   (4.71)
ẋ₂(t) = x₁(t) + x₂(t)(1 − x₁²(t) − x₂²(t)),  x₂(0) = x₂₀   (4.72)

Transforming the system into polar coordinates gives

ṙ(t) = r(t)(1 − r²(t)),  r(0) = r₀ = √(x₁₀² + x₂₀²),  t ≥ 0   (4.73)
θ̇(t) = 1,  θ(0) = θ₀ = arctan(x₂₀/x₁₀)   (4.74)

It can be shown that the system has an equilibrium point at the origin; that is, x = [x₁ x₂]ᵀ = 0. All solutions starting from initial conditions (x₁₀, x₂₀) that are not on the unit circle Ω_c = {x ∈ R² : x₁² + x₂² = 1} approach the unit circle. In particular,
• since ṙ < 0 when r > 1, all solutions starting outside the unit circle move inward toward the unit circle;
• since ṙ > 0 when 0 < r < 1, all solutions starting inside the unit circle move outward toward the unit circle (see Figure 4.3).
Using separation of variables on (4.73) and (4.74), the solution is given by

r(t) = [ 1 + (1/r₀² − 1) e^{−2t} ]^{−1/2}   (4.75)
θ(t) = t + θ₀   (4.76)

Equation (4.75) shows that the dynamical system has a periodic solution corresponding to r = 1; that is, x₁₀² + x₂₀² = 1. Hence, the dynamical system (4.71), (4.72) has a periodic orbit, or limit cycle.
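The closed-form radial solution (4.75) is easy to validate in code. The sketch below (with an illustrative initial radius r₀) checks by finite differences that r(t) satisfies ṙ = r(1 − r²), matches the initial condition, and tends to the unit circle:

```python
import math

# Check of the closed-form radial solution (4.75):
# r(t) = [1 + (1/r0^2 - 1) e^{-2t}]^{-1/2} solves rdot = r(1 - r^2)
# and tends to 1. r0 is an illustrative initial radius.

r0 = 0.3
r = lambda t: (1.0 + (1.0 / r0**2 - 1.0) * math.exp(-2.0 * t)) ** -0.5

# finite-difference check of the ODE at a sample time
t, hh = 0.7, 1e-6
deriv = (r(t + hh) - r(t - hh)) / (2 * hh)
assert abs(deriv - r(t) * (1.0 - r(t) ** 2)) < 1e-6

assert abs(r(0.0) - r0) < 1e-12          # initial condition
assert abs(r(20.0) - 1.0) < 1e-12        # convergence to the unit circle
```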
Figure 4.3: Limit cycle of Example 4.16.1.
Theorem 4.16.1 (Bendixson's Criterion [1]).
Consider the nonlinear dynamical system

ẋ₁ = f₁(x₁, x₂)
ẋ₂ = f₂(x₁, x₂)

No limit cycle can lie entirely in a simply connected domain Ω ⊂ R² of the phase plane in which ∂f₁/∂x₁ + ∂f₂/∂x₂ is not identically zero and does not change sign.
Proof. From the slope of the phase trajectories,

dx₂/dx₁ = f₂(x₁, x₂)/f₁(x₁, x₂)  ⇒  f₂ dx₁ = f₁ dx₂

which holds along any trajectory of the system, including any closed orbit such as a limit cycle. Integrating both sides along the closed curve C of a limit cycle, we obtain

∮_C (f₁ dx₂ − f₂ dx₁) = 0

By Green's theorem, which states: let C be any closed curve and let D be the region enclosed by the curve; if P and Q have continuous first-order partial derivatives on D, then

∮_C P dx₁ + Q dx₂ = ∬_D (∂Q/∂x₁ − ∂P/∂x₂) dA

Applying this with P = −f₂ and Q = f₁ yields

∮_C (f₁ dx₂ − f₂ dx₁) = ∬_D (∂f₁/∂x₁ + ∂f₂/∂x₂) dx₁ dx₂ = 0

If the divergence ∂f₁/∂x₁ + ∂f₂/∂x₂ were of one sign and not identically zero on D, the double integral could not vanish. Hence no limit cycle can lie entirely in Ω.
Example 4.16.2. Consider the system

ẋ₁ = g(x₂) + 4x₁x₂²   (4.77)
ẋ₂ = h(x₁) + 4x₁²x₂   (4.78)

Since

∂f₁/∂x₁ + ∂f₂/∂x₂ = 4(x₁² + x₂²)

which is strictly positive except at the origin (no sign change can occur), the system does not have a limit cycle.
Conclusion
This MSc thesis is dedicated to the stability of switched linear systems. The research focuses on continuous-time switched systems governed by a designated switching law. The stability problems are treated in the framework of Lyapunov theory.
In Chapter 1, we introduced the linearization of nonlinear systems, which simplifies the stabilization of dynamical systems. We also described the methods needed to construct the phase portrait, which gives a view of how the trajectory of a dynamical system's solution may evolve. One of the most useful tools in studying the stability of a dynamical system is the concept of a Lyapunov function. Solving dynamical systems explicitly is tedious, but building a Lyapunov function candidate is a more powerful method for identifying the parameters that force the system to be stable, without the need to search for a solution of its state equation.
In Chapter 2, we showed how to stabilize a switched system using slow switching, where the time between two successive switching instants is no less than a specified value, the so-called dwell time. The concept of dwell time guarantees the stability of the switched system. Dwell time evolved into a new tool called the average dwell time (ADT), which gives a method for constructing a switching law that forces the switched system to be asymptotically stable. The description of ADT is provided in Chapter 3.
A common Lyapunov function is not easy to obtain. If we have more than two Hurwitz subsystems, then it may be too hard to determine the quadratic common Lyapunov function (QCLF) needed to study the stability of the switched system. Therefore, the idea of multiple Lyapunov functions was introduced to solve many switched-system problems.
Finally, in Chapter 3, the challenge is to study the situation where all the subsystems are unstable. In this case, the stability analysis of switched systems focuses on the computation of the minimal and maximal admissible dwell times {τ_min, τ_max}. The switching law is therefore confined by the pair of lower and upper bounds {τ_min, τ_max}, which means that the activation time of each mode must be neither too long nor too short, and confined to the interval [τ_min, τ_max], in order to ensure GUAS (global uniform asymptotic stability) of the switched system. This concept is based on a discretization technique that divides the domain of the matrix function Pᵢ(t) into a finite set of discrete points over the interval [tₙ, tₙ₊₁].
Bibliography
[1] H.Khalil,Nonlinear Systems, 2nd edition, Michigan State University,1996
[2] D.Liberzon, switching in Systems and Control, 1973
[3] Verhulst, F.(Ferdinand), Nonlinear differential equations and dynamical
systems, 1985
[5] Slotine, J.-J.E.(Jean-Jacques E.), Applied nonlinear control, Weiping Li,
Massachusetts Institute of Technology, 1991
[6] Lawrence Perko, Differential equations and dynamical systems, Lawrence
Perko 3rd, 2000
Linear Systems, by Kai Wulff, Dr.Robert Shorten, Department of Computer Science, National University of Ireland, Maynooth, Maynooth, December 2004
[8] I.J.S.Sarna, Mathematics for Engineers with Essential Theory, Delhi College of Engineering, Delhi University, Delhi, 1st Edition 2011
[9] Wilson J.Rugh, Linear System Theory, 2nd Ed, Department of Electrical
and Computer Engineering The Johns Hopkins University, 1996
[10] Chi Tsong Chen, Linear System Theory and Design, 3rd Ed, Oxford
University Press, 1999
[11] David A.Sanchex, Ordinary Differential Equations, Texas A&amp; M University
189
[12] W.Borgan,Modern Control Theory ,3rd Ed, University of Nevada, 1991
[13] K. Narendra and Jeyendran Balakrishnan, A Common Lyapunov Function for Stable LTI Systems with Commuting A-Matrices, Department of
Electrical Engineering, Yale University, USA, 1993
[14] Bo Hu, Xuping Xu, Anthony N. Michel, Panos J. Antsaklis, Stability Analysis for a Class of Nonlinear Switched Systems, Department of Electrical
Engineering, University of Notre Dame, USA, 1999
[15] Guisheng Zhai, Bo Hu, Kazunori Yasuda, Piecewise Lyapunov Functions
for Switched Systems with Average Dwell Time, Faculty of Systems Engineering, Wakayama University, Japan; Anthony N. Michel, Department
of Electrical Engineering, University of Notre Dame, USA
[16] Philippos Peleties and Raymond DeCarlo, Asymptotic Stability of m-Switched Systems Using Lyapunov-Like Functions, School of Electrical
Engineering, Purdue University, IN 47907
[17] L. Hetel, Robust Stability and Control of Switched Linear Systems, Nancy
University, France, 2007
[18] Guisheng Zhai, Xinkai Chen, Shigemasa Takai, and Kazunori Yasuda, Stability and H∞ Disturbance Attenuation Analysis for LTI Control
Systems with Controller Failures, Asian Journal of Control, Vol. 6, No. 1,
pp. 104-111, March 2004
[19] Karabacak, Dwell Time and Average Dwell Time Methods Based on the
Cycle Ratio of the Switching Graph, Istanbul Technical University, Electronics and Communication Engineering Department, Maslak, Istanbul,
Turkey, 2013
[20] Jiqiang Wang, Two Characterizations of Switched Nonlinear Systems
with Average Dwell Time, Province Key Laboratory of Aerospace Power
Systems, College of Energy and Power Engineering, Nanjing University
of Aeronautics and Astronautics, P. R. China, 2016
[21] Guisheng Zhai, Bo Hu, Kazunori Yasuda, Anthony N. Michel, Stability Analysis of Switched Systems with Stable and Unstable Subsystems: An Average Dwell Time Approach, Faculty of Systems Engineering, Wakayama University, Japan; Department of Electrical Engineering,
University of Notre Dame, Notre Dame, USA
[22] Weiming Xiang, Jian Xiao, Stabilization of Switched Continuous-Time Systems with All Modes Unstable via Dwell-Time Switching, School
of Transportation and Logistics, Southwest Jiaotong University, China;
School of Electrical Engineering, Southwest Jiaotong University, China
[23] Erwin Kreyszig, Advanced Engineering Mathematics, 10th edition, Ohio State University, Columbus, Ohio, 2015
[24] A. Nayfeh, Perturbation Methods, WILEY-VCH Verlag GmbH and
Co. KGaA, Weinheim, 2004
[25] Anthony N. Michel, Stability of Dynamical Systems: Continuous, Discontinuous, and Discrete Systems, Department of Electrical Engineering,
University of Notre Dame, USA, 2008
[26] Jin Lu and Lyndon J. Brown, A Multiple Lyapunov Functions Approach
for Stability of Switched Systems, American Control Conference, Marriott
Waterfront, Baltimore, MD, USA, 2010
[27] Guang-Da Hu, Mingzhu Liu, The Weighted Logarithmic Matrix Norm and Bounds of the Matrix Exponential, Department of Control Science
and Engineering, Harbin Institute of Technology, China, 2003
[28] Ranga Narayanan, Dietrich Schwabe, Interfacial Fluid Dynamics and
Transport Processes, University of Florida, USA, 2003
```