Delay Identification and
Model Predictive Control
of Time Delayed Systems
Mu-Chiao Lu
Doctor of Philosophy
Department of Electrical and Computer Engineering
McGill University
Montreal, Quebec
September 2008
A Thesis Submitted to the Faculty of Graduate and Postdoctoral Studies in
Partial Fulfillment of the Requirements for the Degree of Doctor of Philosophy
© Mu-Chiao Lu, 2008
DEDICATION
This thesis is dedicated to my parents, Wen-Hu Lu and Ying Tsai, who have always
encouraged and supported me with their never-failing love in every situation I go
through.
ACKNOWLEDGEMENTS
Many people assisted me throughout my graduate studies at McGill University. I sincerely appreciate their priceless help and support.
First and foremost, I express my hearty thanks to Professor Hannah Michalska. Without her precious guidance and encouragement, the completion of this
thesis would have been impossible. She is not only a remarkable supervisor but also a
dynamic life coach. Her profound advice always motivates me to pursue a better
and more balanced life.
I wish to express special acknowledgment to Professor Boulet and Professor
Ooi for their kind assistance and recommendation during my qualification exam
and the thesis advisory committee meeting.
I am also very thankful to the staff of the CIM centre. In particular, many
thanks to Marlene, Cynthia, Jan and Patrick for their great help through various
administrative and operational hurdles.
I also wish to thank my colleagues Melita, Vladislav, Shahram, Razzk, and
Evgeni for their priceless technical support and their pleasant fellowship.
It would all have been lonely and unrelieved without the support of my
wife, Shu-Hua, who commits herself full-time to taking care of the household. My
daughter, Immanuelle, even at the age of three, knows how to encourage her
father just as her mother does. My parents and my brothers, Mu-Chen and Mu-Kun,
have always expressed their utmost faith in me and given me every kind of assistance
in reaching my goals in life.
During my doctoral studies, financial difficulties would have overwhelmed
my family and me. But, praise be to God the Almighty, He raised up His numerous
courageous men and women, Jerry, Hsin-Yi, Jessy, Mickie, Xiaoqin, Jane, Lynn,
Tien-Chi, HaiTao, Sun's family, and others, to provide for all our needs. Our trials were
very harrowing, but with the support and belief of all these fine women and men,
our faith in God and belief in the principles of truth and justice were reaffirmed.
Oh, the depth of the riches of the wisdom and knowledge of God!
How unsearchable His judgments, and His paths beyond tracing out!
”Who has known the mind of the Lord?”
Or who has been His counselor?
”Who has ever given to God,
that God should repay Him?”
For from Him and through Him and to Him are all things.
To Him be the glory forever! Amen.
Romans 11:33-36
ABSTRACT
Two research problems involving the class of linear and nonlinear time delayed
systems are addressed in this thesis. The first problem concerns delay identification
in time delayed systems. The second problem concerns the design of receding
horizon controllers for time delayed systems. Original solutions to both problems
are provided and their efficiency is assessed with examples and applications.
In this thesis, the delay identification problem is tackled first. Steepest descent
and generalized Newton type delay identifiers are proposed. The receding horizon
control problems for delayed systems are extensively investigated next. For both
linear and nonlinear time delayed systems, asymptotically stabilizing receding
horizon control laws are delivered. Finally, to reduce the conservativeness caused by delay uncertainties, an adaptive receding horizon strategy which combines feedback
control with on-line delay identification is also discussed.
The thesis demonstrates the following: (1) Development of delay identifiers
which are independent of system parameter identification and robust with respect
to errors in the measured trajectory and exogenous input function. (2) Development of practical delay identifiers for linear and nonlinear time delayed systems for
reducing conservativeness of existing robust control designs. (3) Development of
model predictive control techniques for linear and nonlinear time delayed systems.
(4) Rigorous proofs of the asymptotic stability of the proposed model predictive
controllers. (5) Application of on-line estimation schemes to the proposed model
predictive controllers.
ABRÉGÉ
Cette thèse aborde deux problématiques de recherche relatives à la classe des
systèmes linéaires et non-linéaires avec retard. Le premier problème a trait à
l’identification des retards dans les systèmes avec retard. Le second problème consiste à concevoir des commandes d’horizon fuyant pour les systèmes avec retard.
Des solutions originales sont proposées pour ces deux problèmes et leur efficacité
est évaluée à l’aide d’exemples et d’applications.
Dans cette thèse, le problème de l’identification du retard est abordé en premier.
La descente prononcée et les identificateurs du retard du type Newton généralisé
sont proposés. Les problèmes de commande d’horizon fuyant pour les systèmes
avec retard sont ensuite explorés. Tant pour les systèmes avec retard linéaires que non-linéaires, des règles de commande asymptotiquement stabilisatrices pour les horizons fuyants sont proposées. Finalement, pour réduire le conservatisme introduit
par l’incertitude du retard, une stratégie d’horizon fuyant adaptative, qui combine la
commande par rétroaction avec l’identification du retard en ligne, est aussi discutée.
La thèse démontre les points suivants: (1) Développement d’identificateurs
de retard qui sont indépendants de l’identification des paramètres du système et
robustes à l’égard des erreurs de trajectoire mesurées et de fonctions d’entrées externes. (2) Développement d’identificateurs de retard pratiques pour les systèmes
avec retard linéaires et non-linéaires pour réduire le conservatisme de conception
des commandes robustes existantes. (3) Développement de techniques de commande prédictive pour les systèmes avec retard linéaires et non-linéaires. (4)
Preuve rigoureuse de la stabilité asymptotique des commandes prédictives proposées. (5) Application du schéma d’estimation en ligne aux commandes prédictives
proposées.
ORIGINALITY AND CONTRIBUTIONS
The work presented in the thesis has been carried out almost entirely by the
doctoral student. This includes the following theoretical contributions and applications.
• Development of two delay identifiers which are independent of system matrix
identification and robust with respect to errors in the measured trajectory
and exogenous input function; see [89], [90], [92], and [94].
• Development of novel numerical techniques for constructing delay identifiers
and numerical approximation of the Fréchet derivative for linear and nonlinear time delayed systems; see [89], [90], [92] and [94].
• Development of a novel constructive procedure for the design of the receding
horizon control which is suitable for an arbitrary number of system delays.
The new condition involves fewer design variables; see [91], [93], and [79].
• Development of a clear association between the stabilizability of the system and the existence of a stabilizing receding horizon control law. It is also
proved rigorously that the receding horizon strategy guarantees global, uniform asymptotic stabilization of time delayed systems with an arbitrary
number of system delays; see [79], [91] and [93].
• Development of an adaptive receding horizon control scheme which possesses
a degree of robustness with respect to perturbations in the delay values; see
[79].
• The proposed delay identifiers are successfully applied to practical problems in
bioscience and engineering, such as the Glucose-Insulin regulatory systems,
the intravenous antibiotic treatment for AIDS patients, river pollution control systems, multi-compartment transport systems, and a pendulum with
delayed damping; see [94].
LIST OF PUBLICATIONS
1. [79] M.C. Lu and H. Michalska, Adaptive Receding Horizon Control of Time
Delayed Systems, to be submitted to Automatica.
2. [94] H. Michalska and M.C. Lu, Delay Identification in Nonlinear Time Delayed Systems, submitted to IEEE Transactions on Automatic Control.
3. [90] H. Michalska and M.C. Lu, Delay Identification in Nonlinear Differential
Difference Systems, IEEE 45th Conference on Decision and Control, pages
2553-2558, San Diego, U.S.A., December 2006.
4. [91] H. Michalska and M.C. Lu, Design of the Receding Horizon Stabilizing
Feedback Control for Systems with Multiple Time Delays, WSEAS Transactions on Systems, Issue 10, Vol. 5, pages 2277-2284, 2006.
5. [93] H. Michalska and M.C. Lu, Receding Horizon Control of Differential
Difference Systems with Multiple Delays Parameters, Proceedings of the 10th
WSEAS International Conference on Systems, Vouliagmeni, Athens, Greece,
pages 277-282, July 2006.
6. [92] H. Michalska and M.C. Lu, Gradient and Generalized Newton Algorithm
for Delay Identification in Linear Hereditary Systems, WSEAS Transactions
on Systems, Issue 5, Vol. 5, pages 905-912, 2006.
7. [89] H. Michalska and M.C. Lu, Delay Identification in Linear Differential
Difference Systems, Proceedings of the 8th WSEAS International Conference
on Automatic Control, Modeling, and Simulation, Prague, Czech Republic, pages 297-304, March 2006.
TABLE OF CONTENTS

DEDICATION
ACKNOWLEDGEMENTS
ABSTRACT
ABRÉGÉ
ORIGINALITY AND CONTRIBUTIONS
LIST OF PUBLICATIONS
LIST OF TABLES
LIST OF FIGURES
LIST OF SYMBOLS

1 Introduction
  1.1 Historical notes
  1.2 Motivation for the research reported in this thesis
      1.2.1 Delay identification in time delayed systems
      1.2.2 Model predictive control of time delayed systems
  1.3 Original research contributions of the thesis
  1.4 Outline of the thesis

2 Background
  2.1 Introduction
  2.2 Time delayed systems
      2.2.1 Existence and uniqueness of solution
  2.3 The case of linear and nonlinear time delayed systems
      2.3.1 Linear time delayed systems
      2.3.2 Nonlinear time delayed systems
  2.4 Delay identifiability
  2.5 Asymptotic stability

3 Literature Survey
  3.1 Delay identification in time delayed systems
      3.1.1 Approximation approach
      3.1.2 Spectral approach
      3.1.3 "Multi-delay" approach
      3.1.4 The approach of Kolmanovskii & Myshkis
      3.1.5 Variable structure approach
  3.2 Model predictive control of time delayed systems
      3.2.1 MPC of linear time delayed systems
      3.2.2 MPC of nonlinear time delayed systems

4 Delay Identification in Linear Delay Differential Systems [89, 92, 94]
  4.1 Problem statement and notation
  4.2 Identifier design
  4.3 Convergence analysis for the delay identifier
  4.4 Numerical examples
      4.4.1 Example 1: [8]
      4.4.2 Example 2: [132]

5 Delay Identification in Nonlinear Delay Differential Systems [90, 94]
  5.1 Problem statement and notation
  5.2 Identifier design
  5.3 Convergence analysis for the delay identifier
  5.4 Numerical techniques and examples
      5.4.1 Computational technique
  5.5 Numerical examples and discussion
      5.5.1 Numerical examples in Bioscience

6 Model Predictive Control of Linear Time Delayed Systems [93, 91]
  6.1 Introduction
  6.2 Problem statement and notation
  6.3 Sufficient conditions for successful control design
      6.3.1 Construction procedure for the receding horizon terminal cost penalties
  6.4 Stabilizing property of the receding horizon control law
  6.5 Computation of the receding horizon control law
  6.6 Sensitivity of the RHC law with respect to perturbations in the delay values
  6.7 Numerical example

7 Model Predictive Control of Nonlinear Time Delayed Systems with On-Line Delay Identification [79]
  7.1 Problem statement
  7.2 Monotonicity of the optimal value function
  7.3 Stability of the RHC
  7.4 Feasible solution to a particular type of nonlinear time delayed systems
  7.5 RHC with on-line delay identification
  7.6 Numerical example
      7.6.1 Examples of RHC for a special type of time delayed systems
      7.6.2 RHC of time delayed systems with on-line delay identification

8 Conclusions
  8.1 Summary of research
  8.2 Future research avenues
      8.2.1 State-dependent and time-varying delays and other parameters in time delayed systems
      8.2.2 Identification of measurement delays and input delays
      8.2.3 Receding horizon control of general nonlinear time delayed systems
      8.2.4 Adaptive receding horizon control for time delayed systems

References
LIST OF TABLES

4–1 Parametric values of Example 1
4–2 Parametric values of Example 2
5–1 Parameter values in Example 5.5.1
5–2 Parameter values in Example 5.5.2
5–3 Parameter values in Example 5.5.3
5–4 Parameters in Example 5.5.4
5–5 Parameter values of Case 1.1 in Example 5.5.4
5–6 Parameters in Example 5.5.5
5–7 Parameter values for Case 2.1 in Example 5.5.5
5–8 Parameter values for Case 2.3 in Example 5.5.5
LIST OF FIGURES

1–1 Difference between the MPC and PID control (Modified from [22])
1–2 MPC strategy
1–3 Basic structure of MPC (Modified from [22])
4–1 The generalized Newton type algorithm for delay identification
4–2 Delay identification in Example 1 with true values τ̂1 = 1, τ̂2 = 2, and initial values τ10 = 1.3, τ20 = 1.7
4–3 Delay identification with true values τ̂1 = 1, τ̂2 = 2, and initial values τ10 = 1.55, τ20 = 1.45
5–1 Delay identification for Example 5.5.1 with true value τ̂ = 2 and initial delay τ0 = 2.5
5–2 Delay identification for Example 5.5.2 with true values τ̂1 = 1, τ̂2 = 2 and initial delays τ10 = 0.5, τ20 = 2.5
5–3 Delay identification for Example 5.5.3 with true values τ̂1 = 1, τ̂2 = 2 and initial values τ10 = 1.4, τ20 = 2.2
5–4 Delay identification for the disease dynamics of Haemophilus influenzae with the true delay τ̂ = 48 and initial delays τ0 = 40 and τ0 = 56 with constant step size α = 0.75
5–5 Delay identification for the disease dynamics of Haemophilus influenzae with the true delay τ̂ = 48 and initial guess delays τ0 = 40 and τ0 = 56 with step size α = 0.25
5–6 Delay identification for the disease dynamics of Haemophilus influenzae with the true delay τ̂ = 48 and initial delay τ0 = 40 with α = 0.25 and 2.1
5–7 Delay identification for the Glucose-Insulin regulatory system with the true delays τ̂1 = 7, τ̂2 = 12, initial guess delays τ10 = 10, τ20 = 9, and step size α = 0.75
5–8 Delay identification for the Glucose-Insulin regulatory system with the true delays τ̂1 = 7 and τ̂2 = 12 and initial guess delays τ10 = 10 and τ20 = 9 with step size α = 0.25
5–9 Delay identification for the Glucose-Insulin regulatory system with the true delays τ̂1 = 7, τ̂2 = 36, initial guess delays τ10 = 8, τ20 = 26, and step size α = 0.75
5–10 Delay identification for the Glucose-Insulin regulatory system with true delays τ̂1 = 20, τ̂2 = 25, initial guess delays τ10 = 19, 17, 15.5, τ20 = 18, and step size α = 0.75
5–11 Delay identification for the Glucose-Insulin regulatory system with the true delays τ̂1 = 20, τ̂2 = 25, initial guess delays τ10 = 17, τ20 = 18, and step size α = 0.75, 0.5, and 0.25
6–1 State trajectories of the closed loop system in Example 6.1
6–2 State trajectories of the closed loop system in Example 6.2
7–1 Block diagram of the RHC with on-line delay identification
7–2 State trajectories (RHC: solid line, u = Kx: dotted line)
7–3 Control trajectories (RHC: solid line, u = Kx: dotted line)
7–4 State trajectories of RHC
7–5 Control trajectory of RHC
7–6 Comparison of actual and estimated models
7–7 RHC closed-loop state trajectories of actual and estimated models
LIST OF SYMBOLS

Rn                  n-dimensional Euclidean space
C([a, b], Rn)       Banach space of continuous vector functions f : [a, b] → Rn
C1([a, b], Rn)      the class of continuously differentiable vector functions on [a, b]
< ∙ | ∙ >           scalar product
‖ ∙ ‖               norm
‖ ∙ ‖C              norm ‖f‖C ≜ sup_{s∈[a,b]} ‖f(s)‖
L2([a, b], Rn)      Hilbert space of Lebesgue square integrable vector functions
< f1 | f2 >2        inner product < f1 | f2 >2 ≜ ∫_a^b f1ᵀ(s) f2(s) ds
‖ ∙ ‖2              2-norm
L1([a, b], Rn)      Banach space of absolutely integrable functions on [a, b]
‖f‖1                norm ‖f‖1 ≜ ∫_a^b ‖f(s)‖1 ds
x(t)                state vector at time t, x(t) ∈ Rn
x(t − τ)            delayed state vector at time t, x(t − τ) ∈ Rn
τ                   constant delay parameter vector, τ ≜ [τ1, ..., τk]
τ̂                   nominal system delay parameter vector, τ̂ ≜ [τ̂1, ..., τ̂k]
τk                  constant time delay in the system, τk > 0
Y(τ)*               Hilbert adjoint of the operator Y(τ)
gradτ Ψ(τ, x̂)       gradient of the cost functional Ψ(τ, x̂)
CHAPTER 1
Introduction
Model predictive control (MPC) is becoming increasingly popular in industrial process control, where delays are almost inherent in the systems. However,
an accurate model of the process is required to realize the benefits of
MPC. Furthermore, perturbations of the delay parameters may induce complex behaviours (oscillations and instabilities) in the closed loop system. Delay
identification is therefore highly desirable in MPC for time delayed systems (TDS), and the main issues addressed in this thesis are delay identification and MPC of
time delayed systems.
This chapter highlights the importance of delay identification and MPC in
time delayed systems and motivates the research approaches retained in this thesis.
The evolution of time delayed systems is discussed in Section 1.1. The motivation
for the research and the background relevant to delay identification and MPC in
TDS are then summarized in Section 1.2. The original contributions of this thesis
are listed in Section 1.3.
1.1 Historical notes
Time delayed systems (TDS), which are described by differential difference
equations (DDEs) or functional differential equations (FDEs), are also known as
delay differential systems, systems with aftereffects, hereditary systems, time-lag
systems, or systems with deviating arguments. Although the first FDEs were
studied by Euler, Bernoulli, Lagrange, Laplace, Poisson, and others in the 18th
century, in connection with the solution of various geometric problems [41], FDEs were investigated infrequently before the beginning of the 20th century. The situation changed
radically in the 1930s and 1940s [56]. At that time, a number of important scientific and technical problems were modelled by FDEs. The first problems of this
type were considered by Volterra (viscoelasticity in 1909 and predator-prey models
in 1928-1931), Kostyzin (mathematical biology problems in 1934), and Minorsky (the ship
stabilization problem in 1942). The appearance of such practically important
problems stimulated the interest in studying these not very well-known equations.
Stability of time delayed systems grew into a formal subject of study in the 1940s,
with contributions from such towering figures as Pontryagin and Bellman [40].
The rapid growth of the theory of FDEs began in the 1950s. In 1949, for the
first time, Myshkis correctly formulated the initial-value problem and introduced a
general class of linear retarded FDEs. Krasovskii extended Lyapunov's theory
to time delayed systems in 1956. Razumikhin proposed yet another method for
Lyapunov stability analysis [111]. Further historic information can be found in the
book of Kolmanovskii and Nosov [56]. There are many books covering different
aspects of time delayed systems, such as Kolmanovskii and Nosov [56], El’sgol’ts
and Norkin [34], Diekmann et al. [31], Malek-Zavarei and Jamshidi [85], Górecki
et al. [39], and Hale and Verduyn Lunel [45, 44]. Some additional related topics
are covered by a number of recent books. The book by Niculescu [98] has a much
wider scope in considering stability. In the more general setting of infinite delays,
Kolmanovskii and Myshkis [55] deal with control and estimation in hereditary
systems in their book. A detailed explanation of the main methods for the exact and approximate solution of problems of optimal control and estimation for deterministic and stochastic systems with aftereffect can be found in the work by
Kolmanovskii and Shaikhet [57]. The book by MacDonald [82] provides a guide for
the stability analysis of biological mathematical models with delays. Boukas and
Liu [18] tackle both deterministic and stochastic time delayed systems. The
dynamics of controlled mechanical systems with delayed feedback are investigated
in the book by Hu and Wang [47]. Time-optimal control algorithms of hereditary
systems are applied to the economic dynamics of the US by Chukwu [27]. The
book by Kuang [62] investigates the use of delay differential equations in modeling
population dynamics. Recently, with a focus on computability, robust stability, and
robust control, Gu et al. [40] examine the stability of TDS in both the frequency
domain and the time domain.
Since many practical systems, such as thermal, chemical, biological, and metallurgical processes, have inherent time delays, the problem of identifying the delays in such systems is of great importance. For example, the identification algorithms developed in this thesis can be very helpful in improving the
robust performance of model predictive control schemes for time delayed systems.
Applications of model predictive control include not only control problems in the process industry but also the control of a diverse range of processes, from robotic manipulators to biological control
(e.g. control of clinical anaesthesia) [22, 64]. Good performance of these applications shows the capacity of model predictive control to achieve highly efficient
control [83, 116]. Investigation of delay identification problems and model predictive control of time delayed systems is hence well motivated. As the identification
algorithms are predominantly developed to serve the needs of MPC, general notes
explaining the delay identification and MPC approach are presented next.
1.2 Motivation for the research reported in this thesis
The research efforts summarized in this thesis concern the development of new
results related to model predictive control of time delayed systems. Two major
avenues are considered. The first one concerns delay identification of time delayed
systems. It leads to the construction of a delay identifier that is robust with respect to
errors in the measured trajectory and the exogenous input function. Such an identifier is helpful in
reducing the conservativeness of existing robust control designs in model predictive
control of time delayed systems, such as the one in [70]. The second research avenue is to provide a simple constructive method for the design of the optimal cost
function in the receding horizon control for time delayed systems.
1.2.1 Delay identification in time delayed systems
Time delays occur in many important classes of system models; time delayed
systems have thus attracted considerable research interest. While controllability,
observability, and control design approaches for such systems have received considerable
attention, system identification remains less developed. Recent advances in
model reference adaptive control [17] and in model predictive control of linear time
delayed systems [70] are likely to change this trend.
Recent results in the area of identification of time delayed systems pertain
to identifiability conditions for linear and nonlinear systems; see [8], [13], [97], [127].
The concept of identifiability is based on the comparison of the original system
(system to be identified) and its associated reference model system (real system).
For illustration, the original system is here described by differential difference equations of the form:

    dx(t)/dt = f(x(t), x(t − τ1), ..., x(t − τk), u(t))                    (1.1)

    x(s) = φ(s),    −τk ≤ s ≤ 0                                            (1.2)
where 0 < τ1 < τ2 < ... < τk are time delays to be identified. With the system
model (1.1), let the real system be represented by

    dx̂(t)/dt = f(x̂(t), x̂(t − τ̂1), ..., x̂(t − τ̂k), u(t))                   (1.3)
with 0 < τ̂1 < τ̂2 < ... < τ̂k, and be equipped with the same initial condition

    x̂(s) = φ(s),    −τ̂k ≤ s ≤ 0                                           (1.4)
System (1.1) is therefore said to be identifiable if there exists a system input function u such that the identity x(t) = x̂(t) for all t ≥ 0 implies that τ1 = τ̂1, τ2 = τ̂2, ..., τk = τ̂k.
The delay effect on the stability of systems is a problem of recurring interest,
since delays may induce complex behaviours (oscillations, instability, poor performance) in the closed-loop schemes [59, 58]. For the purpose of stability
analysis, it is known that necessary and sufficient conditions can be derived in the
case of a known constant delay τ [45, 56]. If the value of τ is not available, then
estimating the delay (and its variation) probably constitutes the greatest challenge in
applications.
1.2.2 Model predictive control of time delayed systems
Model predictive control (MPC), also known as receding horizon control (RHC),
attracts considerable research attention because of its numerous advantages.
These include:
• Applicability to a broad class of systems and industrial applications.
• Computational feasibility.
• Systematic approach to obtain a closed loop control and guaranteed stability.
• Ability to handle hard constraints on the control as well as the system states.
• Good tracking performance.
• Robustness with respect to system modeling uncertainty as well as external
disturbances.
The MPC strategy performs the optimization of a performance index with
respect to some future control sequence, using predictions of the output signal
based on a process model, coping with amplitude constraints on inputs, outputs
and states. For a quick comparison of MPC and the traditional control scheme,
such as PID, Figure 1-1 shows the difference between the MPC and PID control
schemes in which ”anticipating the future” is desirable while a PID controller only
has the capacity of reacting to the past behaviors. The MPC algorithm is very
similar to the control strategy used in driving a car [22].
Figure 1–1: Difference between the MPC and PID control (Modified from [22])

At the current time k, the driver knows the desired reference trajectory over a finite control horizon, say [k, k + N], and, taking into account the car's characteristics, decides which control actions (accelerator, brakes, and steering) to take in order to follow the desired trajectory. Only the first control action is applied as the current control, and the procedure is then repeated over the next time horizon, say [k + 1, k + 1 + N]. The term "receding horizon" is used because the horizon recedes as time proceeds. The basic MPC strategy is shown in Figure 1–2.
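The receding horizon mechanism just described can be written down in a few lines. The sketch below is a minimal, hypothetical illustration for an unconstrained scalar discrete-time plant with a quadratic horizon cost (it is not one of the controllers developed later in this thesis): at every step the horizon cost is minimized, only the first control move is applied, and the horizon then shifts forward.

# Minimal receding horizon loop for x_{k+1} = a x_k + b u_k (toy example).
import numpy as np

a, b = 1.2, 1.0          # unstable scalar plant (hypothetical)
N = 5                    # prediction horizon
q, r = 1.0, 0.1          # state and input weights

# Prediction over the horizon: X = F x_k + G U, with X = [x_{k+1}, ..., x_{k+N}].
F = np.array([[a ** (i + 1)] for i in range(N)])
G = np.zeros((N, N))
for i in range(N):
    for j in range(i + 1):
        G[i, j] = a ** (i - j) * b

H = G.T @ (q * np.eye(N)) @ G + r * np.eye(N)    # Hessian of the horizon cost

x = 5.0
for k in range(20):
    # Minimize sum q*x^2 + r*u^2 over the horizon; unconstrained least-squares solution.
    U = -np.linalg.solve(H, G.T @ (q * np.eye(N)) @ (F.flatten() * x))
    u = U[0]                                     # apply only the first control action
    x = a * x + b * u                            # plant update; the horizon then recedes
    if k % 5 == 0:
        print(f"k = {k:2d}  x = {x:+.4f}  u = {u:+.4f}")

Constraints, delays, and terminal cost penalties, which are central to the later chapters, are deliberately omitted from this sketch.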
Figure 1-3 presents the basic structure of MPC. A model is used to predict the
future plant outputs, based on the past and present values and on the proposed
optimal future control actions. These actions are calculated by the optimizer while
taking into account the cost function as well as the constraints. The process model
must be capable of capturing the process dynamics; that is, the model must
predict the future outputs accurately. This gives rise to the identification problem for
the process model.
Figure 1–2: MPC strategy

While the body of work on MPC for delay-free systems is now extensive (see [87] for a comprehensive survey of previous contributions), far fewer results pertain to time delayed systems. As the MPC approach is becoming increasingly popular in the process industry, where delays are almost inherent, further
research is well motivated in this thesis.
1.3 Original research contributions of the thesis
The objective of this research is to improve robustness with respect to delay
perturbations for receding horizon control in linear and nonlinear time delayed
systems. To fulfill this objective, delay identification in time delayed systems,
receding horizon control of time delayed systems, and receding horizon control
with on-line delay identification in time delayed systems are investigated.
In this research, the delay identification problem is tackled first. A generalized
Newton type delay identifier is proposed for linear and nonlinear time delayed
systems.

Figure 1–3: Basic structure of MPC (Modified from [22])

Then we investigate the receding horizon control problems for linear
and nonlinear time delayed systems. A globally, uniformly, asymptotically
stabilizing receding horizon control law is delivered for linear systems with multiple delays. For
nonlinear time delayed systems, we present a sub-optimal approach to obtain the
control law. Finally, to reduce the conservativeness caused by delay uncertainties,
an adaptive receding horizon strategy which combines feedback control with online delay identification is also discussed.
The original contributions of this research are:
• Development of a delay identifier which is independent of system matrix
identification and robust with respect to errors in the measured trajectory
and exogenous input function.
• Development of practical delay identifiers for linear and nonlinear delay differential systems for reducing conservativeness of existing robust control designs such as the one of [70].
• Development of model predictive control techniques for linear and nonlinear
delay differential systems.
• Rigorous analysis of the MPC law to ensure global, uniform asymptotic
stabilization of the delay differential systems in closed loop.
• Application of the proposed approach to practical problems in bioscience,
such as the Glucose-Insulin regulatory systems[78, 77] and the intravenous
antibiotic treatment for AIDS patients[117].
1.4 Outline of the thesis
The thesis is organized as follows. The first chapter introduces the subjects
of this research, the notation, and the research goals. Chapter 2 contains the basic concepts used in time delayed systems and the fundamental theory for delay
identifiability and model predictive control for time delayed systems. Chapter 3
reviews the literature and recent results on delay identification and model predictive control for linear and nonlinear delay differential systems. Chapter 4 defines the first problem addressed in this thesis, delay identification for linear delay differential systems, and presents its solution. In Chapter
5, the approach proposed in the previous chapter is extended to nonlinear delay
differential systems. Chapter 6 discusses model predictive control for linear delay
differential systems. The MPC approach to nonlinear delay differential systems is
derived and an adaptive MPC scheme is also presented in Chapter 7. Concluding
remarks and a summary are found in Chapter 8.
CHAPTER 2
Background
2.1 Introduction
In this chapter we introduce the notion of time delayed systems. The properties and the concepts of delay identification and stability are rigorously discussed.
2.2 Time delayed systems
Time delayed systems are often described by functional differential equations
(FDEs). Many of the applications can be modeled by a special class of FDEs,
namely differential difference equations (DDEs) in which the change of the state
does not solely depend on the value of the state variable at the present time, but
also on the values of the state variable at times in the past. Such equations in
explicit form are, in the nonlinear case, described by:

    dx(t)/dt = f(t, x(t), x(t − τ))                                        (2.1)

with the initial condition

    x(s) = φ(s),    −τ ≤ s ≤ 0                                             (2.2)
where x(t) ∈ Rn is the state variable, φ(s) is continuous on [−τ, 0] and τ > 0
is a fixed, discrete delay. We let xt ∈ C denote the function defined
by xt(θ) = x(t + θ), −τ ≤ θ ≤ 0. Here, we adopt the standard notation, see
also List of Symbols, according to which, Rn denotes the n-dimensional Euclidean
space with scalar product < ∙ | ∙ > and norm ‖ ∙ ‖, and C([a, b], Rn) is the Banach
space of continuous vector functions f : [a, b] → Rn with the usual norm ‖ ∙ ‖C
defined by ‖f‖C ≜ sup_{s∈[a,b]} ‖f(s)‖. Without attempting to make a complete
classification, we now give an overview of other types of DDEs. A nonlinear system
with multiple delays in explicit form is described by

    dx(t)/dt = f(x(t), x(t − τ1), ..., x(t − τk))                          (2.3)

where 0 < τ1 < τ2 < ... < τk.
A system with distributed delay is described by

    dx(t)/dt = f(t, ∫_{−τ}^{0} g(x(t + s)) ds)                             (2.4)

or, respectively, by

    dx(t)/dt = f(t, ∫_{−∞}^{0} g(x(t + s)) ds)                             (2.5)
The delay can be state-dependent, yielding

    dx(t)/dt = f(t, x(t), x(t − τ(t, x(t))))                               (2.6)
The above equations are all of retarded type. If there is also a dependency of the
right hand side on the derivative of the state, e.g. if the system representation is

    dx(t)/dt = f(t, x(t), x(t − τ), ẋ(t − τ))                              (2.7)
then the equation is called neutral type. Several combinations of the above types
are also possible. All the above equations can be put in the following general form

    dx(t)/dt = f(t, xt)                                                    (2.8)
where the right hand side depends on the profile xt : R → Rn and f is a general
operator mapping into Rn . This class of so-called functional differential equations
[45, 55] is so broad that very little can be said about its solutions in general. Hence,
in this thesis, we will restrict attention to equations with one or multiple, fixed,
discrete delays as in (2.1) or (2.3).
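For equations with fixed, discrete delays, numerical integration only requires storing the solution over the last max_j τ_j units of time. A minimal fixed-step Euler sketch follows (illustrative only; the example system and coefficients are hypothetical, and this is not a solver used in the thesis):

# Fixed-step Euler integrator for dx/dt = f(t, x(t), x(t - tau_1), ..., x(t - tau_k)),
# with history x(s) = phi(s) on [-max(tau), 0].  Illustrative sketch only.
import numpy as np

def ddeint_euler(f, phi, taus, T, h=1e-3):
    taus = sorted(taus)
    lags = [int(round(tau / h)) for tau in taus]   # delays measured in grid steps
    n_hist, n = lags[-1], int(round(T / h))
    t_grid = np.arange(-n_hist, n + 1) * h
    x = np.array([phi(t) for t in t_grid[: n_hist + 1]] + [0.0] * n)
    for i in range(n_hist, n_hist + n):
        delayed = [x[i - lag] for lag in lags]     # x(t - tau_j) read from the stored past
        x[i + 1] = x[i] + h * f(t_grid[i], x[i], *delayed)
    return t_grid, x

# Example: a scalar equation with two delays (hypothetical coefficients).
f = lambda t, x, x1, x2: -x + 0.5 * np.sin(x1) - 0.25 * x2
t, x = ddeint_euler(f, phi=lambda s: 1.0, taus=[1.0, 2.0], T=10.0)
print("x(T) =", x[-1])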
2.2.1 Existence and uniqueness of solution
For completeness of the thesis, we cite a basic existence theorem for the initial-value problem (2.1)-(2.2), together with its proof, which gives insight into
further development of the thesis. More detailed information can be found in
[44, 45, 47].
We start with,
Definition 2.2.1 [47]:
The function x : R → Rn is a solution of the differential difference equation
(2.1) with the initial condition (2.2) if there exists δ > 0 such that x ∈ C([t0 −
τ, t0 + δ), Rn), (t, xt) ∈ R × C, and x(t) satisfies Equation (2.1) for all t ∈ [t0, t0 + δ).
Further, two sets need to be defined:

    J ≡ [t̂, +∞),                                                           (2.9)
    D ≡ {x ∈ Rn : ‖x‖ < d}.                                                (2.10)

The existence theorem quoted below is re-cited from [47], pp. 28-32.
Theorem 2.2.1. [47] Assume that
(a) f(t, x(t), x(t − τ)) is continuous in J × D²;
(b) f(t, x(t), x(t − τ)) is locally Lipschitz with respect to x(t) and x(t − τ), namely, there is a constant LG > 0 for G ⊆ J × D² such that for any (t, ξ1, ξ2) and (t, η1, η2) the following inequality holds:

    ‖f(t, ξ1, ξ2) − f(t, η1, η2)‖ ≤ LG Σ_{j=1}^{2} ‖ξj − ηj‖               (2.11)
Then, there exists a constant δ > 0 such that Equation (2.1) with condition (2.2)
has a unique continuous solution x(t, t0 , φ) for t ∈ [t0 − τ, t0 + δ].
Proof. For a given initial function φ, we denote D1 ≡ {ψ : ‖ψ − φ‖C < d1},
Ω ≡ [t0, t0 + δ] × D1², and choose δ > 0, d1 ∈ (0, d) such that Ω ⊆ J × D².
Furthermore, let x0(t) ∈ D be a continuous function defined by

    x0(t) ≡ φ(t)    for t ∈ [t0 − τ, t0],
    x0(t) ≡ φ(t0)   for t > t0                                             (2.12)

and then define xk(t) for k ≥ 1 recurrently by

    xk(t) ≡ φ(t)                                              for t ∈ [t0 − τ, t0],
    xk(t) ≡ φ(t0) + ∫_{t0}^{t} f(s, xk−1(s), xk−1(s − τ)) ds   for t > t0             (2.13)
If xk−1(t) ∈ D, then M ≡ sup_Ω ‖f‖ and d0 ≡ ‖φ(t0)‖ enable one to write

    ‖xk(t)‖ ≤ ‖φ(t0)‖ + ∫_{t0}^{t} ‖f(s, xk−1(s), xk−1(s − τ))‖ ds ≤ d0 + M|t − t0| ≤ d0 + Mδ.    (2.14)
Thus, xk (t) ∈ D holds if δ < (d − d0 )/2M . As a result, xk (t) ∈ D holds for all
k ≥ 1. Now, xk (t) converges uniformly on [t0 − τ, t0 + δ] as k → +∞. In fact, for
t ∈ [t0, t0 + δ], we have

    ‖xk+1(t) − xk(t)‖ ≤ L ∫_{t0}^{t} [‖xk(s) − xk−1(s)‖ + ‖xk(s − τ) − xk−1(s − τ)‖] ds
                      ≤ 2L ∫_{t0}^{t} ‖xk(s) − xk−1(s)‖ ds                 (2.15)
where L is the Lipschitz constant of f (t, x(t), x(t − τ )) over Ω. Because xk (t) −
xk−1 (t) ≡ 0 holds for all t ∈ [t0 − τ, t0 ], the above inequality is true for all t ∈
[t0 − τ, t0 + δ]. Using the inequality

    ‖x1(t) − x0(t)‖ ≤ M|t − t0|,    t ∈ [t0 − τ, t0 + δ]                   (2.16)
gives

    ‖xk(t) − xk−1(t)‖ ≤ M (2L)^(k−1) |t − t0|^k / k!,    t ∈ [t0 − τ, t0 + δ], k ≥ 1    (2.17)
This implies that xk (t) converges to a function x(t) ≡ x(t, t0 , φ) uniformly on
[t0 − τ, t0 + δ] as k → +∞. Letting k → ∞ on both sides of Equation (2.13) gives

    x(t) ≡ φ(t)                                          for t ∈ [t0 − τ, t0],
    x(t) ≡ φ(t0) + ∫_{t0}^{t} f(s, x(s), x(s − τ)) ds     for t > t0               (2.18)
To prove uniqueness, assume to the contrary that there is another solution
y(t) = y(t, t0, φ) of Equation (2.1) with condition (2.2) on the interval [t0 − τ, t0 + δ̃]
with δ̃ > 0. As in the argument above, we have

    ‖xk+1(t) − y(t)‖ ≤ 2L ∫_{t0}^{t} ‖xk(s) − y(s)‖ ds,    t ∈ [t0 − τ, t0 + min(δ, δ̃)]    (2.19)
and

    ‖x0(t) − y(t)‖ ≤ M|t − t0|,    t ∈ [t0 − τ, t0 + min(δ, δ̃)]            (2.20)
There follows

    ‖xk(t) − y(t)‖ ≤ M (2L)^(k−1) |t − t0|^(k+1) / (k + 1)!,    t ∈ [t0 − τ, t0 + min(δ, δ̃)], k ≥ 0    (2.21)
Equation (2.21) implies that ‖xk(t) − y(t)‖ → 0 as k → +∞. Hence, x(t) ≡ y(t)
holds for all t ∈ [t0 − τ, t0 + min(δ, δ̃)]. This completes the proof.
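The successive approximations (2.13) on which the proof rests can also be evaluated numerically. The sketch below (a hypothetical scalar linear example, with the integral in (2.13) replaced by a simple Riemann sum and delayed values obtained by interpolation) shows the iterates contracting rapidly, in line with the bound (2.17):

# Numerical illustration of the successive approximations (2.13) for
# dx/dt = f(t, x(t), x(t - tau)) = -x(t) + 0.5 x(t - tau), phi(s) = 1 on [-tau, 0].
# Toy data chosen for illustration only.
import numpy as np

tau, t0, delta, h = 1.0, 0.0, 0.8, 1e-3
grid = np.arange(t0 - tau, t0 + delta + h, h)        # grid on [t0 - tau, t0 + delta]
phi = np.ones_like(grid)                             # phi(t) = 1 (constant history)

def f(t, x, x_del):
    return -x + 0.5 * x_del

x_prev = phi.copy()                                  # x^0: phi on the past, phi(t0) afterwards
for k in range(1, 8):
    x_next = phi.copy()
    for i, t in enumerate(grid):
        if t <= t0:
            continue
        # x^k(t) = phi(t0) + integral_{t0}^{t} f(s, x^{k-1}(s), x^{k-1}(s - tau)) ds
        s = grid[(grid > t0) & (grid <= t)]
        xs = np.interp(s, grid, x_prev)
        xs_del = np.interp(s - tau, grid, x_prev)
        x_next[i] = 1.0 + h * np.sum(f(s, xs, xs_del))
    print(f"k = {k}:  max |x^k - x^(k-1)| = {np.max(np.abs(x_next - x_prev)):.2e}")
    x_prev = x_next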
2.3 The case of linear and nonlinear time delayed systems
In this section, we define the two classes of time delayed systems considered
in this thesis.
2.3.1 Linear time delayed systems
The class of linear time delayed systems considered in this thesis is restricted
to systems whose models are given in terms of differential difference equations of
the form

    dx(t)/dt = A x(t) + Σ_{i=1}^{k} Ai x(t − τi) + u(t)                    (2.22)
where x(t) ∈ Rn , u(t) ∈ Rn is a continuous function that represents an exogenous
input, 0 < τ1 < τ2 < ... < τk are time delays, and the system matrices A, Ai ∈
Rn × Rn , i = 1, ..., k are constant. Let any continuously differentiable function
φ ∈ C 1 ((−∞, 0), Rn ) which satisfies limt→0− φ̇(t) = φ̇(0−) serve as the initial
condition for system (2.22), so that

    x(s) = φ(s),    −τk ≤ s ≤ 0                                            (2.23)
2.3.2 Nonlinear time delayed systems
The class of nonlinear time delayed systems considered in this thesis is restricted to systems whose models are given in terms of differential difference equations of the form

    dx(t)/dt = f(x(t), x(t − τ1), ..., x(t − τk), u(t))                    (2.24)
where x(t) ∈ Rn , u(t) ∈ Rm is a continuous and uniformly bounded function that
represents an exogenous input, and 0 < τ1 < τ2 < ... < τk are time delays. The
following assumption is made about the function f on the right hand side of the
system equation (2.24):
[A1] The function f : Rn × ... × Rn × Rm → Rn is continuously differentiable
and its partial derivatives f′|1, ..., f′|k+2 with respect to all the k + 2 vector
arguments of f are uniformly bounded, i.e. there exists a constant M > 0
such that

    ‖f′|i(x0, x1, ..., xk, u)‖ ≤ M,    i = 1, ..., k + 2                    (2.25)

for all (x0, x1, ..., xk, u) ∈ Rn × ... × Rn × Rm.
Let a continuously differentiable function φ ∈ C1((−∞, 0), Rn), satisfying limt→0− φ̇(t) = φ̇(0−), serve as the initial condition for system (2.24), so that

    x(s) = φ(s),    −τk ≤ s ≤ 0                                            (2.26)
Under the above conditions, there exists a unique solution of (2.24) defined on
[−τk , +∞) that coincides with φ on the interval [−τk , 0].
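As a simple worked illustration of assumption [A1] (an example constructed here, not taken from later chapters), consider the scalar system with two delays dx(t)/dt = −x(t) + tanh(x(t − τ1)) + sin(x(t − τ2)) + u(t). Its k + 2 = 4 partial derivatives are −1, sech²(x(t − τ1)), cos(x(t − τ2)), and 1, each bounded in norm by 1, so [A1] holds with M = 1.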
2.4 Delay identifiability
In this section, we define the notion of delay identifiability for the identification problems discussed in this thesis.
Definition 2.4.1:
System (2.22) or (2.24) is said to be locally identifiable if for a given observation
time T > 0 there exists a neighborhood B(τ̂; r), r > 0, of the nominal system
delay parameter vector τ̂ ≜ [τ̂1, ..., τ̂k], and a system input function u, such that
the identity

    x(t) = x̂(t)    for t ∈ [−τ̂k, T]                                        (2.27)

implies that τ = τ̂, or else that τ ∉ B(τ̂; r), regardless of the initial function φ.
Remark 2.4.1. From a practical point of view, and according to the autonomous
case considered in [97], equality x(t) = x̂(t), t ≥ 0 can be restricted to an arbitrary
interval [t0 , t0 + nτ ].
Remark 2.4.2. Delay identifiability of system (2.22) or (2.24) can be associated
with a certain type of controllability. See [13], where such conditions have been
provided.
2.5 Asymptotic stability
Asymptotic stability is central to the discussion of receding horizon control
problems. The definitions follow:
Definition 2.5.1: Stable delayed systems
Consider the functional differential equation (2.8) and assume that f satisfies
f(t, 0) = 0 for all t ∈ R. The solution x(t) = 0 of equation (2.8) is said to be
stable if for any σ ∈ R and ε > 0 there is a δ = δ(ε, σ) such that φ ∈ B(0, δ) implies
xt(σ, φ) ∈ B(0, ε) for t ≥ σ.
Definition 2.5.2: Asymptotically stable delayed systems
The solution x(t) = 0 of equation (2.8) is said to be asymptotically stable if it is
stable and there is a b0 = b0 (σ) > 0 such that φ ∈ B(0, b0 ) implies x(σ, φ)(t) → 0
as t → ∞.
CHAPTER 3
Literature Survey
Many works have been devoted to the analysis and the control of time delayed
systems; see [4, 41, 55, 114] and references therein. [114] discusses the probable
reasons for the continuing interest in this type of system.
• Time delay is inevitable in practical applications: In practice, engineers need
their models to behave more like real processes. Many processes include time
delay phenomena in their dynamics. These delays arise either as a result of inherent
delays in the system or from the deliberate introduction of delay into the system for control
purposes. To name a few, the monographs [27, 55, 62] and [98] give examples
in economics, population dynamics, biology, chemistry, mechanics, viscoelasticity, physics, physiology, electronics, as well as in engineering science. Such delays
correspond to transport times or to computation times [85]. In addition,
actuators, sensors, and field networks involved in feedback loops usually
introduce such delays. Thus, they are strongly involved in challenging areas
of communication and information technologies: stability of network controlled systems [21, 100], high-speed communication networks [15, 86, 118],
teleoperated systems [99], computing times in robotics [3], etc.
• Time delayed systems are still resistant to many classical control approaches.
One might think that the simplest approach would be to replace the
system equation by a finite-dimensional approximation. Unfortunately, ignoring effects that are adequately represented by functional differential equations is not a viable alternative, because it can lead to wrong conclusions in terms
system equation by its finite-dimensional approximation. Unfortunately, ignoring effects that are adequately represented by functional differential equations is not an alternative because it can lead to wrong outcomes in terms
of stability analysis and control design; see Section 3 of [114] and [55]. Even
in the best situation (i.e., constant and known delays), the control design
is still complex. In the worst cases (time-varying delays, for instance), such approximations are potentially disastrous in terms of stability and oscillations.
• Delay properties are also profitable. Several studies have shown that voluntary introduction of delays can actually benefit the control. For instance,
Chatterjee et al. [23] introduced delayed states in an eco-epidemiological
model and used the time delay factor effectively to prevent the outbreak of a
disease. In a promising control design, Abdallah et al. [1, 115] stabilized
oscillatory systems by adding a time delayed compensator to reduce the oscillatory behavior. Using a delayed state feedback, Jalili and Olgac [50] designed an optimal active vibration absorber from which substantial vibration
suppression improvement is obtained.
The model predictive control approach is becoming increasingly popular in the
process industry, as delays are almost inherent there. However, time delays are
frequently the main cause of performance degradation and instability [51]. In particular, when the delay values are unknown, it is not straightforward to
show that the on-line MPC optimization problem can be solved and that
closed-loop stability is guaranteed [51]. In the following, we review the literature on approaches to delay identification and on the techniques proposed for model
predictive control for time delayed systems.
3.1 Delay identification in time delayed systems
Delays can be categorized into three types: measurement delays, input delays, and state delays [69]. There is a relatively rich body of work that involves
delay identification for systems with measurement delays and input delays, see
[14, 112, 126, 109, 128, 49, 120, 11]. Unlike systems with measurement delays or
input delays, state delayed systems are infinite dimensional so the identification
is more challenging. There are only a few results dealing with systems
with state delays. One obvious difficulty (from both a practical and a theoretical
viewpoint) is that solutions of state delayed systems are not easily differentiable
with respect to the delays, and thus many common identification techniques, such
as least squares, maximum likelihood estimator, etc., are not directly applicable
[7, 8].
Most of the recent results in the scope of identification of time delayed systems
pertain to identifiability conditions; see [8, 13, 97, 127]. Although the identifiability
criteria derived there are very powerful and elegant and refer to the identification
of the ensemble of system parameters, including the system matrices, it is not
transparent how these criteria can be employed in computational identification
procedures. The work presented in [106] is an exception in this regard. The
on-line parameter identification algorithm proposed there is mainly applicable to
identifying the system matrices when the exact values of the delays are known
a priori. Although it is suggested that the proposed identifier can also be employed
to identify uncertain delays, the procedure is not direct, in that the problem of
delay identification is essentially recast as the problem of system matrix
identification. In addition to the above algorithms, several other
delay identification techniques can be found in the literature. We summarize the
existing approaches to identification of state delayed systems below.
3.1.1 Approximation approach
The use of approximation schemes in connection with parameter estimation
procedures in state delayed systems was apparently first suggested in [20]. However, Banks et al. [7] were the first to provide a rigorous assessment of the theoretical aspects of the ideas in [20] and to extend them to further estimation problems.
In the works of Banks and his co-authors [7, 8, 9], the approximation schemes applied to differential equations were posed in an abstract, operator-theoretic setting.
Banks’ results pertain to the estimation of multiple constant delays in a system of
functional differential equations. In [95], Murphy extended the ideas developed in
[7, 8, 9] to devise a parameter estimation algorithm that can be used to estimate
unknown time or state-dependent delays and other parameters. The basic concept
of the delay identification algorithm based on the approximation approach can be
summarized as follows:
Consider the nonlinear delay equation:

    ẋ(t) = f(α, t, x(t), x(t − τ1), ..., x(t − τk)) + g(t),    a ≤ t ≤ b    (3.1)

with initial condition

    x(s) = φ(s),    −τk ≤ s ≤ 0                                             (3.2)
where g(t) is a perturbation term, α is a coefficient-type parameter vector of
Equation (3.1), and 0 < τ1 < τ2 < ... < τk are time delays. r = (α, τ1 , ..., τk ) is the
parameter vector to be identified.
The identification problem is to find the best approximation r̂ = (α̂, τ̂1, ..., τ̂k)
that provides the best least squares fit of the solution (of the model equation (3.1))
to observations of the output at discrete sample times. The problem may be stated
as follows:
Given g(t) and observations {ûi } at times {ti }, i = 1, ..., M, find r̂ which
minimizes

    J(r) = (1/2) Σ_{i=1}^{M} |S(r) x(ti; r) − ûi|²                          (3.3)
where S is a given matrix and u(t; r) = S(r)x(t; r) represents the "observable
part" of x(t; r), which is the solution to (3.1) corresponding to r.
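A direct, if crude, way to carry out the fit (3.3) is to simulate the model for each candidate parameter value and search for the minimizer of J; since the solution need not be differentiable in the delays, a derivative-free grid search is used below. The sketch is a hypothetical scalar example with a single unknown delay and synthetic, noise-free observations, and it is not the spline-based scheme described next:

# Least-squares delay fit: minimize J(tau) = 1/2 * sum_i |x(t_i; tau) - u_hat_i|^2
# over a grid of candidate delays (toy scalar example with synthetic observations).
import numpy as np

def simulate(tau, T=8.0, h=1e-3):
    """Euler simulation of dx/dt = -x(t) + 0.8 x(t - tau) + cos(t), history phi = 1."""
    m, n = int(round(tau / h)), int(round(T / h))
    x = np.ones(m + n + 1)
    for i in range(m, m + n):
        t = (i - m) * h
        x[i + 1] = x[i] + h * (-x[i] + 0.8 * x[i - m] + np.cos(t))
    return x[m:]                          # trajectory on [0, T]

h, T = 1e-3, 8.0
t_obs = np.arange(0.5, T, 0.5)            # discrete sample times t_i
idx = np.rint(t_obs / h).astype(int)

tau_true = 1.3
u_hat = simulate(tau_true)[idx]           # synthetic observations (noise-free here)

taus = np.arange(0.5, 2.51, 0.02)         # coarse grid of candidate delays
J = [0.5 * np.sum((simulate(tau)[idx] - u_hat) ** 2) for tau in taus]
print("estimated delay:", taus[int(np.argmin(J))])    # close to tau_true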
By using a spline-based technique, the original infinite-dimensional delay
equation is approximated as an equation on a finite dimensional space X N . The
approximate identification problem associated with this approximate equation can
be stated as:
Given g(t) and observations {ûi } at times {ti }, i = 1, ..., M, find r̂N so as to
minimize

    J^N(r) = (1/2) Σ_{i=1}^{M} |S(r) π0 z^N(ti; r) − ûi|²                   (3.4)
where π0 : Z → Rn is defined by π0(ξ; ψ) = ξ and Z = Rn × L2^n(−r, 0).
Finally, in the work of Banks et al., convergence is proved, so that r̂N → r̂ = (α̂, τ̂1, ..., τ̂k) as N → ∞, with the corresponding trajectories converging uniformly in t ∈ [a, b].
Although the approximation approach reduces the original infinite-dimensional equations to finite-dimensional ones, the precision of the delay estimation depends considerably on the dimension of the approximation space X^N.
In most cases, the identifier needs N ≥ 32 to achieve a good result (error tolerance ≤ 0.001). However, the computational effort increases strongly with a higher
dimensional index N [6].
3.1.2 Spectral approach
In [97, 96], by using the special structure of the spectral subspaces for state
delayed systems, S. Nakagiri and M. Yamamoto derived very powerful and elegant
identifiability criteria for linear retarded systems. Verduyn Lunel [124, 123] then
generalized Nakagiri’s results and gave necessary and sufficient conditions that
guarantee that the problem has a unique solution. However, it is not transparent
how these criteria can be employed in computational identification procedures.
Meanwhile, many of these results are limited to the homogeneous case (no forcing
term) and use a spectral approach involving an infinite-dimensional spectrum; see [32].
3.1.3 "Multi-delay" approach
Orlov et al. [13, 38, 104, 105, 106, 107] proposed different approaches to delay
identification for linear time delayed systems. Primarily, Orlov considers the linear
time delayed systems governed by differential difference equations of the form:

    ẋ(t) = Σ_{i=0}^{n} [Ai x(t − τi) + Bi u(t − τi)],                       (3.5)
Along with system (3.5) the reference model is expressed as:

    dx̂(t)/dt = Σ_{j=0}^{n} [Âj(t) x̂(t − τ̂j) + B̂j u(t − τ̂j)],               (3.6)
To identify delays, Orlov introduced a large number m of fictitious delays in the
identifier which may be expressed as:

    dx̂(t)/dt = Σ_{j=0}^{m} [Âj x̂(t − τ̂j) + B̂j u(t − τ̂j)] − αΔx(t),          (3.7)
and in which, by virtue of the identifiability property, the Âj and B̂j coefficients
tend to zero except for τi ≈ τ̂j. However, the accuracy of this identification scheme
depends on the number m of possible delays. Again, the computational effort considerably increases with m.
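The flavour of this fictitious-delay construction can be reproduced with ordinary least squares: regress the measured derivative on the state delayed by each candidate delay on a grid, and inspect which coefficients remain significant. The sketch below is a hypothetical scalar illustration using a finite-difference derivative and a ridge-regularized fit in place of the adaptive identifier (3.7); it is not Orlov's algorithm itself:

# Toy illustration of the "multi-delay" idea: regress x_dot(t) on x(t) and on x(t - tau_j)
# for a grid of fictitious delays tau_j; coefficients away from the true delay stay small.
import numpy as np

h, T, tau_true = 0.01, 60.0, 1.5
a, b = -1.0, 0.8                                  # dx/dt = a x(t) + b x(t - tau_true) + u(t)

m, n = int(round(tau_true / h)), int(round(T / h))
x = np.ones(m + n + 1)                            # constant history phi = 1
u_at = lambda i: np.sin(0.7 * (i - m) * h) + 0.5 * np.cos(1.9 * (i - m) * h)
for i in range(m, m + n):
    x[i + 1] = x[i] + h * (a * x[i] + b * x[i - m] + u_at(i))

taus = np.arange(0.5, 3.01, 0.25)                 # fictitious candidate delays
lag_max = int(round(taus[-1] / h))
rows = range(m + lag_max, m + n)
# Finite-difference derivative minus the known input leaves a x(t) + b x(t - tau_true).
xdot = np.array([(x[i + 1] - x[i]) / h - u_at(i) for i in rows])
G = np.array([[x[i]] + [x[i - int(round(t / h))] for t in taus] for i in rows])

# Ridge-regularized least squares to tame the correlation between delayed columns.
lam = 1e-2
coef = np.linalg.solve(G.T @ G + lam * np.eye(G.shape[1]), G.T @ xdot)
best = taus[int(np.argmax(np.abs(coef[1:])))]
print("largest delayed-state coefficient at tau =", best)     # near tau_true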
3.1.4 The approach of Kolmanovskii & Myshkis
In [55] (pp. 551-555), an adaptive delay identification scheme is proposed. The
linear time delayed systems considered are of the type:

    ẋ(t) = −a x(t) + b x(t − τ + Δτ(t)) + f(t)                              (3.8)
where τ is the parameter to be identified under the influence of the perturbation Δτ(t),
|Δτ(t)| < τ. Along with (3.8), [55] considers the reference model:

    dx̂(t)/dt = −a x̂(t) + b x̂(t − τ̂) + f(t)                                 (3.9)
Let γ(t, x(t)) be a positive function, η(t, x(t)) and u(t) be continuous functions,
and ε(t) ≡ x(t) − x̂(t). The following identification law is then proposed:

    ε̇(t) = −a ε(t) + b ε(t − τ) + b Δτ(t) ẋ(t − τ) + b (Δτ(t)²/2) ẍ(t − τ + θ Δτ(t))    (3.10)

    Δτ̇(t) = −b γ(t, x(t)) ẋ(t − h) ε(t) − η(t, x(t)) ε(t)/u(t)                          (3.11)
By using an appropriate Lyapunov functional, one can prove that the solutions of
(3.10) and (3.11) are uniformly asymptotically stable. Hence τ → τ̂ as t → ∞
is concluded. The algorithm is complicated since the measurements of delayed
variables x(t − τ ) and ẋ(t − τ ) are required in its realization. The results are local
in that |Δτ(t)| must be small enough (i.e. |Δτ(t)| < τ).
3.1.5 Variable structure approach
Recently, Drakunov et al. [32] presented a variable structure identification algorithm based on the use of sliding surfaces. The estimation algorithm is
designed in a way which guarantees convergence to these sliding surfaces. The idea
of such an algorithm can be applied not only to linear time invariant systems, but also
more generally to systems of almost any form. In particular, it can be extended to
the identification of nonlinear functional equations. Although this approach has
a faster convergence speed as compared to the other methods (for example [107]),
the estimation accuracy is limited by the nature of sliding modes. Systems with
sliding modes are known to be numerically stiff and such methods as Runge-Kutta
can run into numerical difficulties. The example in [32] was implemented by using
a low accuracy Euler simulation algorithm. Just like the approach of Kolmanovskii
& Myshkis, this algorithm also requires the measurement of the delayed variables
ẋ(t − τi ), where τi denotes the i-th delay estimate.
3.2 Model predictive control of time delayed systems
MPC is the most typical control strategy based on prediction. For delay-free systems, MPC has received considerable attention for its stability, its
capacity to handle both constraints and system uncertainties, and its generally
good tracking performance [28, 36, 53, 60, 68, 72, 76, 80, 88, 108,
110, 113]. For systems with measurement delays, MPC is also considered in the
110, 113]. For systems with measurement delays, MPC is also considered in the
literature [61, 101, 102]. The MPC for input-delayed systems is straightforward
because the system with input delays can be reduced to a delay-free system, see
[35, 73]for example. However, only a few MPC algorithms in time delayed systems
have been published, since time delayed systems are infinite dimensional and thus
more difficult to control and handle.
3.2.1
MPC of linear time delayed systems
Kothare and his co-authors, see [60], claimed that the proposed MPC scheme,
although presented for delay-free systems, can be extended to a linear time-varying
state-delayed system by simply employing an equivalent augmented delay-free system. However, as pointed out in [114], this is not an effective alternative for general
time delayed systems, and it could lead to a high degree of complexity in the control design.
In [66], a simple receding horizon control is suggested for continuous-time
systems with state-delays, where a reduction technique is used so that an optimal
problem for state-delayed systems can be transformed to an optimal problem for
delay-free (ordinary) systems. A set of linear matrix inequality (LMI) conditions
are proposed which enforce closed-loop stability. However, as the authors admit,
closed-loop stability is not guaranteed by design.
For the first time, in [69, 70], a general form of MPC for time delayed systems is proposed. The “general ” means that the cost functional to be minimized
includes both the state and the input weighting terms (unlike [66, 67]) over the
horizon and closed-loop stability is guaranteed by design. The general solution
29
of the proposed MPC is derived using the generalized Riccati equation. A linear
matrix inequality condition on the terminal weighting matrix for the MPC is proposed, which guarantees that the optimal cost is monotonic for the time delayed
system. Under that condition, the closed-loop stability of the MPC is proved.
However, this approach only considers linear time-invariant systems with single
delay.
Jeong and Park [51] extended the last result to discrete-time uncertain timevarying delayed systems. An MPC algorithm for uncertain discrete time-varying
systems with input constraints and single state delay is proposed there. In this
approach, an upper bound on the cost function is found first; then an optimization
problem is defined and relaxed to two other optimization problems that minimize
upper bounds of the cost function. The equivalence and feasibility of the two
optimization problems is proved under a certain assumption on the weighting matrix. Based on these properties and optimality, it can be shown that the feasible
MPC obtained from the solvable optimization problems stabilizes the closed-loop
system. However, in a remark, it is stated that closed-loop stability is not to be
hoped for if the delay value is unknown. Therefore, the upper bound for the cost
is necessarily conservative. Another weak point of this approach is that the computational burden which requires a large number of repetitions strongly increased
as better performance is called for.
In [48], Hu and Chen proposed a MPC algorithm for a class of constrained
time-invariant linear discrete-time systems with a uncertain state-delay. The proposed MPC algorithm is composed of two parts: an off-line algorithm and an
on-line algorithm. The MPC is designed in the off-line algorithm which particularly constructs an artificial Lyapunov function and assess the stability region.
30
The on-line algorithm then optimizes the control performance in terms of a given
performance index when the system’s state is within the stability region. A delaydependent stabilizing condition in an LMI form is presented to tackle the uncertainties in the time delay. However, this approach only considers a class of timeinvariant linear discrete-time systems with a single uncertain delay. The proof of
the stability properties for the algorithm is not rigorous and the extension of the
algorithm to time-varying linear discrete-time systems is not justified.
3.2.2
MPC of nonlinear time delayed systems
The literature of MPC for nonlinear time delayed systems is very limited. In
most of the results, the delays are only found in linear terms rather than in the
nonlinear terms or there is no delay in the state; see, for example, [103, 119, 129].
As far as we know, only Kwon et al. [65] had theoretically investigated the MPC
algorithm for nonlinear state-delayed systems. In this approach, a terminal weighting functional is introduced to achieve closed-loop stability. It is shown that the
non-increasing monotonicity of the optimal cost can be maintained with an additional functional inequality constraint on the terminal state trajectory. Under this
condition, the closed-loop stability of the MPC is proved. However, for general
nonlinear time delayed systems, the authors also stated that it is difficult to find
the control law based on their proposed algorithm.
31
CHAPTER 4
Delay Identification in Linear Delay Differential Systems[89, 92, 94]
The first problem to be tackled in this thesis is the delay identification in linear
time delayed systems. In this chapter, a steepest descent and a generalized Newton
algorithms are developed for identifying multiple state delays in linear time delayed systems. Unlike the algorithms proposed in [13][97][106][127], the approach
adopted here is direct in that it allows to identify delay parameter exactly. It also
can be implemented in systematic computational identification procedures.
The chapter is organized as follows: the notation and problem statement are
presented first, the identifier design is explained next. Sensitivity of the system
trajectory to the delay parameter and the pseudo-inverse operator of the associated
Fréchet derivative are calculated next, and parameter identifiability conditions are
stated in Section 4.2. In Section 4.3, the convergence of the identifier algorithms
is rigorously analyzed. Finally, a numerical example is shown in Section 4.4.
4.1
Problem statement and notation
Let Rn denote the n-dimensional Euclidean space with scalar product < ∙ | ∙ >
and norm k ∙ k and C([a, b], Rn ) be the Banach space of continuous vector functions
Δ
f : [a, b] → Rn with the usual norm k ∙ kC defined by k f kC = sups∈[a,b] k f (s) k.
Similarly, let L2 ([a, b], Rn ) denote the Hilbert space of Lebesgue square integrable
Δ Rb
vector functions with the usual inner product < f1 | f2 >2 = a f1T (s)f2 (s)ds and
32
the associated norm k ∙ k2 . Also, let L1 ([a, b], Rn ) denote the Banach space of abΔ Rb
solutely integrable functions on [a, b] with the usual norm k f k1 = a k f (s) k ds.
The class of linear time delayed systems considered here is restricted to sys-
tems whose models are given in terms of differential difference equations of the
form
d
x(t) = Ax(t) + Σki=1 Ai x(t − τi ) + f (t)
dt
(4.1)
where x(t) ∈ Rn , f (t) ∈ Rn is a continuous function that represents an exogenous
input, 0 < τ1 < τ2 < ... < τk are time delays, and the system matrices A, Ai ∈
Rn × Rn , i = 1, ..., k are constant. Let any continuously differentiable function
φ ∈ C 1 ((−∞, 0), Rn ) which satisfies limt→0− φ̇(t) = φ̇(0−) serve as the initial
condition for system (4.1), so that
−τk ≤ s ≤ 0
x(s) = φ(s),
(4.2)
Under the above conditions, there exists a unique solution (4.1) defined on [−τk , +∞)
that coincides with φ on the interval [−τk , 0].
The identification problem is stated as that of determining the values of the
Δ
constant delay parameter vector τ = [τ1 , ..., τk ] in system (4.1) under the assumption that the system matrices A, Ai , i = 1, ..., k, are known and that the input
function f and the state vector x are directly accessible for measurement at all
times.
The identifiability conditions will be derived in the process of the construction
of an identification algorithm.
With the system model (4.1), let the real system be represented by
d
x̂(t) = Ax̂(t) + Σki=1 Ai x̂(t − τ̂i ) + f (t)
dt
33
(4.3)
with τ̂1 < τ̂2 < ... < τ̂k , and be equipped with the same initial condition
−τ̂k ≤ s ≤ 0
x̂(s) = φ(s),
(4.4)
Definition 4.1.1
System (4.1) is said to be locally identifiable if for a given observation time T > 0
there exists a neighborhood, B(τ̂ ; r), r > 0, of the nominal system delay parameter
Δ
vector, τ̂ = [τ̂1 , ..., τ̂k ], and a system input function f such that the identity
x(t) = x̂(t)
for t ∈ [−τˆk , T ]
(4.5)
implies that τ = τ̂ , or else τ ∈
/ B(τ̂ ; r), regardless of the initial function φ.
4.2
Identifier design
For any given initial and input functions φ and f satisfying the above as-
sumptions, let H : τ 7→ x(∙) be the operator that maps the delay parameter vector
τ = [τ1 , ..., τk ] into the trajectory x(t), t ∈ [0, T ] of system (4.1). Notwithstanding
the fact that the trajectories of system (4.1) are absolutely continuous functions,
the operator H will be regarded to act between the spaces H : Rk → L2 ([0, T ], Rn ).
Let D(H) and R(H) denote the domain and the range of an operator H,
respectively.
With this definition, the identification problem translates into the solution of
the following nonlinear operator equation, which assumes that x̂ is given as the
34
measured trajectory:
Δ
F (τ, x̂) = H(τ ) − x̂ = 0
Δ
where x̂ ∈ L2 ([0, T ], Rn ), τ = [τ1 , ..., τk ]
(4.6)
The reason for the particular choice of the range space should be clear. Measurements are seldom exact, thus exact solution of the above operator equation
is not feasible if the measured x̂ is such that x̂ ∈
/ R(H). In this situation, a
well designed robust identifier algorithm should have the ability to compensate for
such errors by delivering the best approximation of a solution. This is possible to
achieve in Hilbert spaces in which the Projection Theorem holds.
The solution of the nonlinear operator equation (4.6) is thus approached using
the tools of optimization theory, by introducing the cost functional to be minimized
with respect to the unknown variable τ as follows:
Δ
Ψ(τ, x̂) = 0.5 < F (τ, x̂) | F (τ, x̂) >2 = 0.5 k H(τ ) − x̂ k22 ,
for x̂ ∈ L2 ([0, T ], Rn ), τ ∈ Rk
(4.7)
As the reference model satisfies H(τ̂ ) = x̂ the minimum of this cost is zero if all
the measurements are exact.
Δ
Suppose further that the Fréchet derivative Y =
∂
F
∂τ
=
∂
H
∂τ
: h → δx, with
h ∈ Rk , δx ∈ L2 ([0, T ]; Rn ), exists for all τ ∈ Rk and that it is continuous as a
function of τ . If the solution to (4.6) were approached using the standard Implicit
Function Theorem of Hildebrandt and Graves [130] p.150, then the inverse of the
Fréchet derivative Y (τ̂ )−1 would have to exist as a continuous linear operator
acting between the spaces L2 ([0, T ], Rn ) and Rk . By virtue of the Open Mapping
35
Theorem, the latter condition would be equivalent to requesting that Y (τ̂ ) : Rk →
L2 ([0, T ], Rn ) is a bijection. However, in the case considered, the Fréchet derivative
is not surjective.
In this situation, an alternative approach based on minimization of the cost
functional Ψ is a well justified option. It is easy to verify that the gradient of the
cost functional is given by
gradτ Ψ(τ, x̂) = Y (τ )∗ F (τ, x̂)
(4.8)
where Y (τ )∗ is the Hilbert adjoint of the operator Y (τ ). The following steepest
descent procedure can then be used to minimize Ψ
τ n+1 = τ n − α1 (τ n )Y (τ n )∗ F (τ n , x̂)
(4.9)
with some suitable step size function α1 : Rk → (0, +∞).
A yet better search direction than the gradient can be derived by seeking an approximate solution to the linearized operator equation (4.6) in the neighbourhood
of a current approximation τ n :
H(τ n ) + Y (τ n )(τ n+1 − τ n ) − x̂ = 0
(4.10)
Since x̂ − H(τ n ) may fail to be a member of R(Y (τ n )) a ”least squares” solution
to (4.10) calls for the calculation of
min{k x̂ − H(τ n ) − Y (τ n )(τ n+1 − τ n ) k22
36
| (τ n+1 − τ n ) ∈ Rk }
(4.11)
Clearly, the argument minimum in the above is delivered by the pseudo-inverse
operator to the Fréchet derivative as follows:
τ n+1 − τ n = Y (τ n )† [x̂ − H(τ n )]
Y (τ n )† = [Y (τ n )∗ Y (τ n )]−1 Y (τ n )∗
(4.12)
that exists whenever the range R(Y (τ n )) is a closed subspace of L2 ([0, T ], Rn ).
The above leads to a generalized Newton iteration:
τ n+1 = τ n − α2 (τ n )Y (τ n )† F (τ n , x̂)
(4.13)
where τ n is the approximation to the delay parameter vector in iteration step
n. Again, α2 : Rk → (0, +∞) represents a step size function used to achieve
convergence. The pseudo-inverse in (4.12) can be computed whenever the operator
Y ∗ Y is invertible. Conditions for this are provided in the sequel and, as expected,
are associated with identifiability of the system that in turn is guaranteed by a
certain type of controllability; see [13].
Further development hence hinges on the existence, calculation, and properties of the Fréchet derivative Y (τ ) : Rk → L2 ([0, T ], Rn ) as established below.
Although differentiability of solutions to time-delay systems, with respect to initial
conditions as well as perturbations of the right hand side of the system equation,
has already been demonstrated in full generality in [45], p.49, the relevant result
(Theorem 4.1) is not easily interpreted with regard to system (4.1). Hence, a direct
calculation of the derivative is provided in full as it is necessary for the iterative
algorithms (4.9) and (4.13).
In this development, the following auxiliary results will be found helpful.
37
Lemma 4.2.1. For any constant perturbation vector h ∈ Rk , let x(t; τ + h), t ∈
[0, T ] denote the solution of the system (4.1) that corresponds to the delay parameter vector τ + h and the given functions φ and f . Similarly, let x(t; τ ), t ∈ [0, T ]
be the unperturbed trajectory as it corresponds to τ . Under the assumptions made,
the trajectories x(t; τ + h), t ∈ [0, T ] converge uniformly to x(t; τ ), t ∈ [0, T ] as
k h k→ 0, i.e.
k x(∙; τ + h) − x(∙; τ ) kC → 0
as k h k→ 0
(4.14)
The proof is omitted as it is a version of a general result on continuous dependence of the solutions of (4.1) on parameters; see [45], Theorem 2.2, p.43.
Corollary 4.2.1. Under the assumptions of Lemma 4.2.1, the time derivatives
d
x(t; τ +h), t
dt
∈ [0, T ] are absolutely integrable functions and converge to
d
x(t; τ ), t
dt
[0, T ] as k h k→ 0, in the sense that
k
d
d
x(∙; τ + h) − x(∙; τ ) k1 → 0 as k h k→ 0
dt
dt
(4.15)
Proof. The above convergence result is a direct consequence of Lemma 4.2.1 and
the linearity of the system model (4.1).
Lemma 4.2.2. Let hi ∈ R and : (t, hi ) ∈ R2 → Rn be an absolutely integrable
function such that
k (∙, hi ) k1 → 0 as |hi | → 0. Then the solutions of the non-
homogenous equation:
d
zi (t) = Azi (t) + Σkj=1 Aj zi (t − τj ) + (t, hi )
dt
zi (s) = 0
for − τk ≤ s ≤ 0
38
(4.16)
∈
satisfy k zi (∙) kC → 0 as |hi | → 0.
Proof. The solution of the homogeneous equation with zero initial condition
d
z(t) = Az(t) + Σkj=1 Aj z(t − τj )
dt
z(s) = 0 for − τk ≤ s ≤ 0
(4.17)
is clearly z(t) ≡ 0, t ∈ [0, T ]. Let Z(t) be the fundamental matrix solution for
(4.16), i.e.
d
Z(t) = AZ(t) + Σkj=1 Aj Z(t − τj )
dt
Z(s) = 0 for − τk ≤ s < 0
and Z(0) = I
(4.18)
It is well known that such matrix function Z exists for t ≥ 0, [45], p. 18, and
that the solution of (4.16) is given by the variation of constants formula
zi (t) = z(t) +
Z
t
0
Z(t − s)(s, hi )ds, t ∈ [0, T ]
(4.19)
where z represents the solution of the homogeneous equation (4.17).
Let μ > 0 be a constant such that k Z(s)e k≤ μ k e k for all s ∈ [0, T ] and all
vectors e ∈ Rn . Hence, as z(t) ≡ 0,
Z
t
k zi (t) k≤
k Z(t − s)(s, hi ) k ds
0
Z t
k (s, hi ) k ds, t ∈ [0, T ]
≤μ
(4.20)
0
so that
k zi (∙) kC ≤ μ k (∙, hi ) k1 → 0 as |hi | → 0
39
(4.21)
as claimed.
Δ
Proposition 4.2.1. The Fréchet derivative Y =
∂
H
∂τ
exists for all τ ∈ Rk as a
linear and bounded operator: Y : Rk → L2 ([0, T ], Rn ) and is given by a matrix
Δ
function Y (t; τ ), t ∈ [0, T ] whose columns yi (t; τ ) =
∂
H(τ ), i
∂τi
= 1, ..., k satisfy the
following equation on the interval t ∈ [0, T ]:
d
d
yi (t) = Ayi (t) + Σkj=1 Aj yi (t − τj ) + Ai x(t − τi )
dt
dt
yi (s) = 0 for s ∈ [−τk , 0]
(4.22)
where x(s), s ∈ [−τk , T ] is the solution of system (4.1) corresponding to delay
parameter vector τ and the given functions φ and f .
Proof. Under the assumptions made, the solution to (4.22) exists and is unique on
any interval [0, T ] as the derivatives on the right hand side are absolutely integrable
functions (in fact with, at most, a finite number of discontinuities of the first kind).
For a given constant hi ∈ R let
Δ
xhi (t) = x(t; τ1 , ..., τi + hi , ..., τk );
Δ
x(t) = x(t; τ1 , ..., τi , ..., τk );
Δ
m(t) = xhi (t) − x(t), t ∈ [0, T ]
(4.23)
where x(t; τ1 , ..., τi , ..., τk ), t ∈ [0, T ] denotes the trajectory of (4.1) with time delay
parameter τ = [τ1 , ..., τk ].
Since the system equation (4.1) can be equivalently re-written in the form,
40
see [45], pp.14, 35:
At
Σkj=1
Z
t
eA(t−s) Aj x(s − τj )ds
x(t) = e φ(0) +
0
Z t
+
eA(t−s) f (s)ds, t ∈ [0, T ]
(4.24)
0
then, for all t ∈ [0, T ],
m(t)
Z
k
t
eA(t−s) Aj [xhi (s − τj ) − x(s − τj )]ds
= Σ j=1
j6=i
Z t0
+
eA(t−s) Ai [xhi (s − τi − hi ) − x(s − τi )]ds
0
Z t
k
eA(t−s) Aj m(s − τj )]ds
= Σ j=1
j6=i
Z t0
eA(t−s) Ai [m(s − τi − hi ) + x(s − τi − hi ) − x(s − τi )]ds
+
0
Z t
k
= Σj=1
eA(t−s) Aj m(s − τj )ds
Z t0
eA(t−s) Ai [m(s − τi − hi ) − m(s − τi )]ds
+
Z0 t
+
eA(t−s) Ai [x(s − τi − hi ) − x(s − τi )]ds
(4.25)
0
Also,
yi (t) =
Σkj=1
Z
t
e
0
A(t−s)
Aj yi (s − τj )ds +
41
Z
t
eA(t−s) Ai
0
d
x(s − τi )ds
ds
(4.26)
for all t ∈ [0, T ]. For all s ∈ [0, T ], define
Δ
zi (s) =
m(s, τi ) − yi (s)hi
,
hi
Δ
Δxhi (s − τi ) = xhi (s − τi − hi ) − xhi (s − τi ),
Δ
Δx(s − τi ) = x(s − τi − hi ) − x(s − τi ),
Δ
Δm(s − τi ) = m(s − τi − hi ) − m(s − τi )
(4.27)
For any function r(h) : Rk → R, let the statement r(h) = o(k h k) signify that
r(h)/ k h k→ 0 as k h k→ 0. Clearly,
d hi
x (s − τi )hi k1 = o(|hi |),
ds
d
k Δx(s − τi ) − x(s − τi )hi k1 = o(|hi |)
ds
k Δxhi (s − τi ) −
and, by virtue of Corollary 4.2.1,
k Δm(s − τi ) k1
= k Δxhi (s − τi ) − Δx(s − τi ) k1
d hi
x (s − τi )hi k1
ds
d
+ k Δx(s − τi ) − x(s − τi )hi k1
ds
d hi
d
+ | hi | k x (s − τi ) − xhi (s − τi ) k1
ds
ds
≤ k Δxhi (s − τi ) −
= o(|hi |)
(4.28)
It then follows from (4.25)-(4.28) that
zi (t) =
Σkj=1
Z
t
e
0
A(t−s)
Aj zi (s − τj )ds +
42
Z
t
eA(t−s) (s, hi )ds
0
(4.29)
for some function that satisfies k (∙, hi ) k1 → 0 as |hi | → 0. Thus, the function
zi satisfies the differential difference equation:
d
zi (t) = A(t)zi (t) + Σkj=1 Aj zi (t − τj ) + (t, hi )
dt
(4.30)
with initial condition zi (s) = 0 for all τk ≤ s ≤ 0. It follows from Lemma 4.2.2 that
√
k zi k2 ≤ T k zi kC → 0 as |hi | → 0 which proves that the partial Fréchet derivative
∂
H(τ )
∂τi
is indeed given by the solution of equation (4.22). The total Fréchet
derivative is now seen to be given by Y = [ ∂τ∂1 H(τ ), ..., ∂τ∂k H(τ )] = [y1 , ..., yk ], which
follows from the fact that the partial derivatives are all continuous in τ . The last
is a consequence of Lemma 4.2.1 and Corollary 4.2.1 . The differential operator
Y : h 7→ Y h is clearly linear and it is bounded as it is finite-dimensional.
Remark 4.2.1. It should be noted that the range of the Fréchet derivative R(Y ) is
a finite dimensional subspace of L2 ([0, T ], Rn ) as dim R(Y ) ≤ dim D(Y ) = k and
is thus closed. This makes the projection operator well defined and guarantees that
the minimum of the cost Ψ is achieved. The adjoint operator Y ∗ : R(Y ) → Rk is
calculated as follows
< Y h | x >2 =
Z
T
T
T
h Y (s)x(s)ds = h
0
T
Z
T
0
Y T (s)x(s)ds =< h | Y ∗ x >
for all h ∈ R , x ∈ L ([0, T ], R ), so that
Z T
∗
Y x=
Y T (s)x(s)ds, for all x ∈ L2 ([0, T ], Rn )
k
2
n
(4.31)
0
Furthermore, Y ∗ Y : Rk → Rk is given by
∗
Y Yh=
Z
T
Y T (s)Y (s)ds h,
0
43
for all h ∈ Rk
(4.32)
It should be noted that all the above operators depend continuously on the
delay parameter vector τ ∈ Rk . For any given vector τ , invertibility of the matrix
Y ∗ Y (τ ) is guaranteed under the following conditions.
Proposition 4.2.2. The following conditions are equivalent:
(a) the matrix Y ∗ Y (τ ) is non-singular;
(b) the columns, yi (∙; τ ), i = 1, ..., k of Y (∙; τ ) are linearly independent functions
in C([0, T ], Rn );
(c) the exogenous forcing function f is such that the transformed velocities :
Δ
vi (t) = Ai dtd x(t − τi ), i = 1, ..., k, a.e. t ∈ [0, T ] are linearly independent in
L1 ([0, T ], Rn ).
Proof. Clearly, the matrix Y ∗ Y (τ ) is invertible if and only if it is positive definite.
The columns yi (∙; τ ), i = 1, ..., k of Y are linearly independent if and only if the
only vector h ∈ Rk that satisfies Y (τ )h = 0 is h = 0. This shows equivalence of
(a) and (b) as
T
∗
h Y Y (τ )h =
Z
T
0
k Y (s; τ )h k2 ds = 0
if and only if h = 0
(4.33)
Now, the columns yi (∙; τ ) are linearly dependent if and only if there exist
constants hi , i = 1, ..., k, not all zero, such that
Δ
z(t) = Σki=1 hi yi (t; τ ) ≡ 0 for all t ∈ [−τk , T ]
44
(4.34)
It follows from (4.22) that z satisfies
d
z(t) = Σki=1 hi Ayi (t; τ ) + Σki=1 hi Σkj=1 Aj yi (t − τj ; τ )
dt
d
+Σki=1 hi Ai x(t − τi )
dt
d
= Az(t) + Σkj=1 Aj z(t − τj ) + Σki=1 hi Ai x(t − τi ), t ∈ [0, T ]
dt
(4.35)
z(s) = 0 for s ∈ [−τk , 0]
Equation (4.34) is equivalent to
d
z(t)
dt
= 0 a.e on t ∈ [−τk , T ] that, by virtue
of (4.35), is further equivalent to
Σki=1 hi Ai
d
x(t − τi ) = 0 a.e. t ∈ [0, T ]
dt
(4.36)
This establishes equivalence between conditions (b) and (c).
The last result also delivers a condition for local identifiability.
Proposition 4.2.3. Suppose that the exogenous input function f is such that the
transformed system velocities, vi (∙; τ ), i = 1, ...k, as defined in Proposition 4.2.2,
(c), are linearly independent for all τ ∈ Rk . Then the system (4.1) is locally
identifiable as specified by Definition 4.1.1. Additionally, for every closed ball
ˉ ; r) ⊂ Rk , r > 0 there exists a constant cr > 0 such that
B(τ̂
k Y (τ )h k2
ˉ ; r)
≥ cr k h k for all h ∈ Rk , τ ∈ B(τ̂
(4.37)
There also exists a constant 0 < ρ ≤ r such that
k H(τ ) − H(τ̂ ) − Y (τ )h k2
k H(τ̂ ) − H(τ ) k2
≤ 0.5cr k h k
ˉ ; ρ)
≥ 0.5cr k τ̂ − τ k for all τ ∈ B(τ̂
45
(4.38)
(4.39)
Proof. By virtue of Proposition 4.2.2, for every τ ∈ Rk , there exists a constant
c(τ ) > 0 such that
k Y (τ )h k2 =k Σki=1 hi yi (∙; τ ) k2 ≥ c(τ ) k h k
for all h = [h1 , ..., hk ] ∈ Rk
(4.40)
The above is often used as a criterion for linear independence of vectors
yi (∙; τ ), i = 1, ..., k. For every r > 0 the existence of the constant cr in (4.37)
follows from the continuity of the Fréchet derivative Y (τ ) with respect to τ , and
ˉ ; r) ⊂ Rk , r > 0.
compactness of B(τ̂
Next, from the definition of the Fréchet derivative Y (τ ), it follows that there
exists a constant ρ ≤ r such that (4.38) holds. It follows that
k H(τ̂ ) − H(τ ) k2
≥k Y (τ )(τ̂ − τ ) k2 − k H(τ̂ ) − H(τ ) − Y (τ )(τ̂ − τ ) k2
ˉ ; ρ)
≥ 0.5cr k τ̂ − τ k for all τ ∈ B(τ̂
(4.41)
which proves (4.39).
Now, suppose that x(t) = x̂(t) for t ∈ [−τˆk , T ], but that the τ that produces
ˉ ; ρ) be such that τ ∈ B(τ̂
ˉ ; ρ). It follows from
H(τ ) = x is such that τ 6= τ̂ . Let B(τ̂
(4.41) that
0 =k x(∙) − x̂(∙) k2 ≥ 0.5cr k τ − τ̂ k
(4.42)
a contradiction. Hence τ = τ̂ , which proves local identifiability.
Remark 4.2.2. The identifiability condition (c) of Proposition 4.2.2 can be checked
for the measured system trajectory x̂. By continuity, it is bound to hold in some
neighborhood of this trajectory.
46
Under the conditions of Proposition (4.2.2) the pseudo-inverse operator is well
defined and the gradient and generalized Newton iterations for the minimization
of Ψ are given by
τ
τ
n+1
n+1
n
n
= τ − α1 (τ ) ∙
n
n
Z
T
0
Z
= τ − α2 (τ ) ∙ [
Z
Y T (s; τ n )[H(τ n )(s) − x̂(s)]ds
0
T
0
(4.43)
T
Y T (s; τ n )Y (s; τ n )ds]−1 ∙
Y T (s; τ n )[H(τ n )(s) − x̂(s)]ds
(4.44)
where H(τ n )(s), s ∈ [0, T ] is the system trajectory corresponding to the delay parameter vector τ n in iteration step n, for the given input and initial functions f
and φ, and where x̂(s), s ∈ [0, T ] is the measured trajectory.
4.3
Convergence analysis for the delay identifier
The gradient and generalized Newton search directions are zero only at the
stationary points of the minimized functional Ψ(τ, x̂) i.e. at points τ ∈ Rk at
which
gradτ Ψ(τ, x̂) = Y (τ )∗ F (τ, x̂) = 0
(4.45)
Away from the stationary points, the functional Ψ is decreasing along both
the gradient and the generalized Newton directions, as then their inner product
with the gradient is negative
< gradτ Ψ(τ, x̂) | −α1 (τ ) Y (τ )∗ F (τ, x̂) >= −α1 (τ ) k Y (τ )∗ F (τ, x̂) k22 < 0 (4.46)
< gradτ Ψ(τ, x̂) | −α2 (τ )Y (τ )† F (τ, x̂) >
= −α2 (τ ) < Y ∗ (τ )F (τ, x̂) | [Y (τ )∗ Y (τ )]−1 ∙ Y ∗ (τ )F (τ, x̂) > < 0
47
(4.47)
The last holds as, under the assumptions of Proposition 4.2.2, the matrix Y ∗ Y
is positive definite, so that its inverse is also positive definite.
The following auxiliary result is needed to demonstrate desirable properties
of the cost functional Ψ that are needed if the last is to be used as a Lyapunov
function in the convergence analysis for the identifier algorithms.
Proposition 4.3.1. Suppose that the identifiability assumption of Proposition
ˉ ; ρ) ⊂ Rk and posi4.2.2, (c), is satisfied. Then, there exists a closed ball B(τ̂
tive constants γr1 , γr2 , γr3 such that
(i) the cost Ψ is a positive definite and decrescent function in the increment
Δ
ˉ ; ρ), i.e.
h = τ̂ − τ , on the ball B(τ̂
ˉ ; ρ)
γr1 k τ̂ − τ k2 ≤ Ψ(τ, x̂) ≤ γr2 k τ̂ − τ k2 for all τ ∈ B(τ̂
(4.48)
ˉ ; ρ) in the sense that
(ii) the gradient of the cost is non-vanishing on B(τ̂
ˉ ; ρ)
k gradτ Ψ(τ, x̂) k22 ≥ γr3 k τ̂ − τ k2 for all τ ∈ B(τ̂
(4.49)
Proof. Let ρ be as in Proposition 4.2.3. The existence of γr1 follows directly from
(4.39) as Ψ(τ, x̂) =k H(τ̂ )−H(τ ) k22 . The existence of γr2 follows from the fact that
H is continuously differentiable with respect to τ and hence Lipschitz continuous
ˉ ; ρ).
on any compact set such as B(τ̂
Next, it is easy to verify that for h as defined above,
k H(τ̂ ) − H(τ ) − Y (τ )h k22
=k H(τ̂ ) − H(τ ) k22 + k Y (τ )h k22 −2 < Y (τ )h | H(τ̂ ) − H(τ ) >2
48
(4.50)
By (4.37)-(4.39), since
0.25c2r k h k2
≥k H(τ̂ ) − H(τ ) − Y (τ )h k22
≥ 0.25c2r k h k2 +0.5c2r k h k2 −2 < h | Y ∗ (τ )[H(τ̂ ) − H(τ )] >
(4.51)
ˉ ; ρ). It follows that
for all τ ∈ B(τ̂
−0.5c2r k h k2 ≥ −2 < h | −Y ∗ (τ )F (τ, x̂) >
(4.52)
and hence, by the Schwartz inequality, that
0.25c2r k h k2 ≤ < h | −Y ∗ (τ )F (τ, x̂) >
≤k h k k gradτ Ψ(τ, x̂) k2
(4.53)
k gradτ Ψ(τ, x̂) k22 ≥ 0.0625c4r k h k2
(4.54)
Therefore,
ˉ ; ρ), as required.
for all τ ∈ B(τ̂
The result below now provides the convergence analysis of the identifier algorithm. It considers ”continuous-time” versions of the gradient and Newton algorithms as described by the solutions of the gradient and Newton flow equations.
Theorem 4.3.1. Suppose that the identifiability condition is satisfied and ρ is
defined in Proposition 4.2.3. For some functions αj : [0, +∞) → [0, +∞), j =
1, 2, which are continuous, strictly increasing, and satisfying αj (0) = 0, j = 1, 2,
49
consider the following systems for the gradient and Newton flows, respectively,
Z T
d
Y T (s; τ )[H(τ )(s) − x̂(s)]ds
τ = −α1 (k h k) ∙
dt
0
Z T
d
Y T (s; τ )Y (s; τ )ds]−1 ∙
τ = −α2 (k h k)[
dt
0
Z T
Y T (s; τ )[H(τ )(s) − x̂(s)]ds
(4.55)
(4.56)
0
Δ
with h = τ − τ̂ . Under these conditions, there exists a constant δ > 0 such that any
ˉ ; δ)
solution of (4.55) or (4.56) emanating from an initial condition τ (0) = τ0 ∈ B(τ̂
ˉ ; ρ) for all times
is continuable to the interval t ∈ [0, +∞), remains in the ball B(τ̂
t ≥ 0, and converges to τ̂ such that H(τ̂ ) = x̂.
Proof. Local solutions to(4.55) and (4.56) exist by virtue of the Peano Theorem,
as the right hand sides are continuous with respect to τ . Continuation of solutions
over the interval [0, ∞) holds in the absence of finite escape times which will be
shown next.
Δ
Consider Ψ(τ, x̂) as a function in the argument h = τ̂ − τ . For systems (4.56)
the derivatives of Ψ along the corresponding system trajectories satisfy
d
Ψ(τ, x̂) = −α1 (k h k)∙ < gradτ Ψ(τ, x̂) | Y ∗ (τ )F (τ, x̂) >
dt
= −α1 (k h k) k gradτ Ψ(τ, x̂) k22
≤ −α1 (k h k)γr3 k τ̂ − τ k2
50
(4.57)
and
d
Ψ(τ, x̂) = −α2 (k h k) < gradτ Ψ(τ, x̂) | [Y ∗ Y (t, τ )]−1 gradτ Ψ(τ, x̂) >
dt
≤ −α2 (k h k) λmin {[Y ∗ Y (t, τ )]−1 }∙ k gradτ Ψ(τ, x̂) k22
≤ −α2 (k h k) λmin {[Y ∗ Y (t, τ )]−1 } ∙ γr3 k τ̂ − τ k2
(4.58)
ˉ ; ρ), where λmin denotes the smallest eigenvalue
as long as τ remains in the ball B(τ̂
of the positive definite matrix [Y ∗ Y ]−1 . This, together with Proposition 4.2.3
ˉ ; ρ)) positive definite and decrescent
imply that the cost Ψ(τ, x̂) is a locally (in B(τ̂
Lyapunov function for systems (4.55) and (4.56). It then follows from the standard
Lyapunov’s direct approach; see e.g. [125], Theorem 5.3.2 on p. 165, that there
ˉ ; δ), δ ≤ ρ such that the trajectories emanating from any
exists a neighborhood B(τ̂
ˉ ; ρ) for all times t ≥
initial conditions τ (0) = τ0 in that neighborhood, remain in B(τ̂
0, and hence cannot have finite escape times. By strict negative definiteness of the
derivatives (4.57)-(4.58), all such trajectories must converge to the asymptotically
stable equilibrium τ = τ̂ , as claimed.
Corollary 4.3.1. Suppose that the identifiability condition is satisfied as in Proposition 4.2.3. There exists functions αj : Rk → [0, +∞), j = 1, 2 such that the
gradient and generalized Newton identifier algorithms (4.43), (4.44) are locally
convergent to the true delay parameter vector τ̂ , with H(τ̂ ) = x̂.
Proof. The details of the proof are omitted as the result follows directly from
Theorem 4.3.1 by considering the system (4.56) in discrete time.
51
Remark 4.3.1. The simplest choice for the functions αj are sufficiently small
positive constants.
4.4
Numerical examples
Examples are presented in this section to demonstrate the effectiveness
of the generalized Newton type delay identification algorithm. Figure 4-1 shows
the block diagram of the generalized Newton algorithm proposed in this chapter.
Initialize
Calculate
cost
functional
Stop
Condition
Update
delay
estimate
yes
Final
Result
No
Set new
search
direction
Generalized
Newton
iteration
Figure 4–1: The generalized Newton type algorithm for delay identification
52
4.4.1
Example 1:[8]
The numerical example considered here is a linear multiple-delay system which
could represent a multi-compartment transport model, see [8, 42]. Such model
plays a key role in understanding many processes in biological and medical sciences.
The system here is expressed as:
ẋ(t) = −0.5x(t) + 3x(t − τ1 ) + x(t − τ2 )
(4.59)
where τ1 and τ2 are parameters to be identified and initial condition is:
x(Θ) = −0.75Θ2 − 3Θ, − 4 ≤ Θ ≤ 0.
(4.60)
In this example, true values are τ̂1 = 1, τ̂2 = 2 and initial guess values are
τ10 = 1.3, τ20 = 1.7. The step size function is α2 =0.75 and the stop condition
is kτ M +1 − τ M k2 ≤ 10−8 . Table 4–1 displays the consecutive estimates is a function of the current number of iteration (i.e. M). Figure 4–2 shows the convergence
of the algorithm to the true values of the delays.
M
2
4
6
8
10
τ1M
1.0730
1.0024
1.0001
1.0000
1.0000
τ2M
1.8410
1.9884
1.9993
2.0000
2.0000
Table 4–1: Parametric values of Example 1
53
2.5
delay1
delay2
Parameter Values
2
1.5
1
0.5
0
1
2
3
4
5
6
Iteration Number M
7
8
9
10
11
Figure 4–2: Delay identification in Example 1 with true values τ̂1 =1, τ̂2 =2, and
initial values τ10 = 1.3, τ20 = 1.7
4.4.2
Example 2:[132]
The second example considered here is the river pollution control system [132].
Let p(t) and q(t) denote the concentration per unit volume of biochemical oxygen
demand and dissolved oxygen, respectively, at time t, in a section of a polluted
river. Let p∗ and q ∗ be the desired steady state values of p(t) and q(t). Define
x1 (t) = p(t) − p∗ , x2 (t) = q(t) − q ∗ , x(t) = [x1 (t) x2 (t)]T .
Then the system can be described as:
(4.61)
ẋ(t) = Ax(t) + A1 x(t − τ1 ) + A2 x(t − τ2 ) + Bu(t),








−2.6 0 
0.3 0
0.42 0 
0.28 0 
where A = 
, A1 = 
.
 , A2 = 
, B = 
−1.6 −2
0 1
0 0.42
0 0.28
54
τ1 and τ2 are parameters to be identified and initial condition is:
 
1
x(Θ) =   − 2 ≤ Θ ≤ 0.
−1
(4.62)
In this example, true values are τ̂1 = 1, τ̂2 = 2 and initial guess values are
τ10 = 1.55, τ20 = 1.45. The step size function is α2 =0.5 and the stop condition is
kτ M +1 − τ M k2 ≤ 10−8 . The consecutive estimates are shown in Table 4–2 . The
step size function is α2 =0.5 and the stop condition is kτ M +1 − τ M k2 ≤ 10−8 . The
convergence is illustrated in Figure 4–3.
τ1M
1.0980
0.9982
0.9924
0.9962
0.9999
M
2
4
8
10
19
τ2M
1.7140
1.8795
1.9801
1.9920
1.9999
Table 4–2: Parametric value of Example 2
2.2
delay1
delay2
2
Parameter Values
1.8
1.6
1.4
1.2
1
0.8
0
2
4
6
8
10
Iteration Number M
12
14
16
18
Figure 4–3: Delay identification with true values τ̂1 =1, τ̂2 =2, and initial values
τ10 = 1.55, τ20 = 1.45
55
CHAPTER 5
Delay Identification in Nonlinear Delay Differential Systems[90, 94]
The technique for delay identification proposed in the previous chapter is now
extended to nonlinear delayed systems.
The approach adopted here applies to nonlinear systems and allows to identify
delay parameters exactly. The delay identification problem is first posed as a least
squares optimization problem in a Hilbert space. The cost function is defined as
the square of the distance to the measured system trajectory. The gradient of
the cost involves calculation of the Fréchet derivative of the mapping of the delay
parameter vector into a system trajectory, i.e. the sensitivity of the system’s state
to the change in the delay values. A generalized Newton type identifier algorithm
is shown to converge locally to the true value of the delay parameter vector.
The chapter is organized as follows: the notation and problem statement is
presented in Section 5.1, the identifier design is explained next. Sensitivity of the
system trajectory to the delay parameter and the pseudo-inverse operator of the
associated Fréchet derivative are calculated, and parameter identifiability conditions are stated in Section 5.2. In Section 5.3, the convergence of the identifier
algorithms is rigorously analyzed. The computational technique based on calculating the Fréchet derivative is presented in Section 5.4. Finally, numerical examples
are presented in Section 5.5.
56
5.1
Problem statement and notation
Let Rn denote the n-dimensional Euclidean space with scalar product < ∙ | ∙ >
and norm k ∙ k and C([a, b], Rn ) be the Banach space of continuous vector functions
Δ
f : [a, b] → Rn with the usual norm k ∙ kC defined by k f kC = sups∈[a,b] k f (s) k.
Similarly, let L2 ([a, b], Rn ) denote the Hilbert space of Lebesgue square integrable
Δ Rb
vector functions with the usual inner product < f1 | f2 >2 = a f1T (s)f2 (s)ds
and the associated norm k ∙ k2 . Also, let L1 ([a, b], Rn ) denote the Banach space of
Δ Rb
absolutely integrable functions on [a, b] with the usual norm k f k1 = a k f (s) k ds.
The class of nonlinear time-delay systems considered here is restricted to
systems whose models are given in terms of differential difference equations of the
form
d
x(t) = f (x(t), x(t − τ1 ), ..., x(t − τk ), u(t))
dt
(5.1)
where x(t) ∈ Rn , u(t) ∈ Rm is a continuous and uniformly bounded function that
represents an exogenous input, and 0 < τ1 < τ2 < ... < τk are time delays. The
following assumption is made about the function f on the right hand side of the
system equation (5.1):
[A1] The function f : Rn × ... × Rn × Rm → Rn is continuously differentiable
0
and the partial derivatives f|10 , ..., f|k+2
with respect to all the k + 2 vector
arguments of f , are uniformly bounded, i.e. there exists a constant M > 0
such that
||f|i0 (x0 , x1 , ..., xk , u)|| ≤ M, i = 1, ..., k + 2
for all (x0 , x1 , ..., xk , u) ∈ Rn × ... × Rn × Rm .
57
(5.2)
Let a continuously differentiable function φ ∈ C 1 ((−∞, 0), Rn ), satisfying limt→0− φ̇(t) =
φ̇(0−) serve as the initial condition for system (5.1), so that
−τk ≤ s ≤ 0
x(s) = φ(s),
(5.3)
Under the above conditions, there exists a unique solution for system (5.1) defined
on [−τk , +∞) that coincides with φ on the interval [−τk , 0]; see [122].
The identification problem is stated as that of determining the values of the
Δ
constant delay parameter vector τ = [τ1 , ..., τk ] in system (5.1) under the assumption that the input function u and the state vector x are directly accessible for
measurement at all times.
The identifiability conditions will be derived in the process of the construction
of an identification algorithm.
With the system model (5.1), let the real system be represented by
d
x̂(t) = f (x̂(t), x̂(t − τ̂1 ), ..., x̂(t − τ̂k ), u(t))
dt
(5.4)
with 0 < τ̂1 < τ̂2 < ... < τ̂k , and be equipped with the same initial condition
−τ̂k ≤ s ≤ 0
x̂(s) = φ(s),
(5.5)
Definition 5.1.1
System (5.1) is said to be locally identifiable if for a given observation time T > 0
there exists a neighborhood, B(τ̂ ; r), r > 0, of the nominal system delay parameter
Δ
vector, τ̂ = [τ̂1 , ..., τ̂k ], and a system input function u such that the identity
x(t) = x̂(t)
for t ∈ [−τˆk , T ]
58
(5.6)
implies that τ = τ̂ , or else that τ ∈
/ B(τ̂ ; r), regardless of the initial function φ.
5.2
Identifier design
For any given initial and input functions φ and u satisfying the above as-
sumptions, let H : τ 7→ x(∙) be the operator that maps the delay parameter vector
τ = [τ1 , ..., τk ] into the trajectory x(t), t ∈ [0, T ] of system (5.1). Notwithstanding the fact that the trajectories of (5.1) are absolutely continuous functions, the
operator H will be regarded to act between the spaces H : Rk → L2 ([0, T ], Rn ).
Let D(H) and R(H) denote the domain and the range of an operator H,
respectively.
With this definition, the identification problem translates into the solution of
the following nonlinear operator equation, which assumes that x̂ is given as the
measured trajectory:
Δ
F (τ, x̂) = H(τ ) − x̂ = 0
Δ
where x̂ ∈ L2 ([0, T ], Rn ), τ = [τ1 , ..., τk ]
(5.7)
The solution of the nonlinear operator equation (5.7) is approached using the
tools of optimization theory, by introducing the cost functional to be minimized
with respect to the unknown variable τ as follows:
Δ
Ψ(τ, x̂) = 0.5 < F (τ, x̂) | F (τ, x̂) >2 = 0.5 k H(τ ) − x̂ k22 ,
for x̂ ∈ L2 ([0, T ], Rn ), τ ∈ Rk
(5.8)
As the reference model satisfies H(τ̂ ) = x̂, the minimum of this cost is zero if all
the measurements are exact.
59
Δ
Suppose further that the Fréchet derivative Y =
∂
F
∂τ
=
∂
H
∂τ
: h → δx, with
h ∈ Rk , δx ∈ L2 ([0, T ]; Rn ), exists for all τ ∈ Rk and that it is continuous as a
function of τ . It is easy to verify that the gradient of the cost functional is given
by
gradτ Ψ(τ, x̂) = Y (τ )∗ F (τ, x̂)
(5.9)
where Y (τ )∗ is the Hilbert adjoint of the operator Y (τ ). A steepest descent procedure could then be used to minimize Ψ, but a better search direction than the
gradient can be derived by seeking an approximate solution to the linearized operator equation (5.7) in the neighbourhood of a current approximation τ n :
H(τ n ) + Y (τ n )(τ n+1 − τ n ) − x̂ = 0
(5.10)
Since x̂ − H(τ n ) may fail to be a member of R(Y (τ n )), a ”least squares” solution
to (5.10) calls for the calculation of
min{k x̂ − H(τ n ) − Y (τ n )(τ n+1 − τ n ) k22 | (τ n+1 − τ n ) ∈ Rk }
(5.11)
Clearly, the argument minimum in the above is delivered by the pseudo-inverse
operator to the Fréchet derivative as follows:
τ n+1 − τ n = Y (τ n )† [x̂ − H(τ n )]
Y (τ n )† = [Y (τ n )∗ Y (τ n )]−1 Y (τ n )∗
(5.12)
that exists whenever the range R(Y (τ n )) is a closed subspace of L2 ([0, T ], Rk ).
The above leads to a generalized Newton iteration:
τ n+1 = τ n − α(τ n )Y (τ n )† F (τ n , x̂)
60
(5.13)
where τ n is the approximation to the delay parameter vector in iteration step n.
Here, α : Rk → (0, +∞) is the step size function used to achieve convergence. The
pseudo-inverse in (5.12) can be computed whenever the operator Y ∗ Y is invertible.
Conditions for this are provided in the sequel and, as expected, are associated
with identifiability of the system that in turn is guaranteed by a certain type of
controllability.
Further development hence hinges on the existence, calculation, and properties of the Fréchet derivative Y (τ ) : Rk → L2 ([0, T ], Rn ) as established below. Although differentiability of solutions to time delayed systems, with respect to initial
conditions as well as perturbations of the right hand side of the system equation,
has already been demonstrated in full generality in [45], p. 49, the relevant result
(Theorem 4.1 p. 49) is not easily interpreted with regard to system (5.1). Hence,
a direct calculation of the derivative is provided in full as it is necessary for the
iterative algorithm (5.13).
In this development, the following auxiliary results are found helpful.
Lemma 5.2.1. For any constant perturbation vector h ∈ Rk , let x(t; τ + h), t ∈
[0, T ] denote the solution of the system (5.1) that corresponds to the delay parameter vector τ + h and the given functions φ and u. Similarly, let x(t; τ ), t ∈ [0, T ]
be the unperturbed trajectory as it corresponds to τ . Under the assumptions made,
there exist constants ρ > 0 and K > 0 such that
k x(∙; τ + h) − x(∙; τ ) kC ≤ K k h k for all h ∈ B(0; ρ)
(5.14)
The proof follows from a version of a general result on continuous dependence
of the solutions of (5.1) on parameters; see [45], Theorem 2.2, p.43.
61
Corollary 5.2.1. Under the assumptions of Lemma 5.2.1, there exists constants
ρ > 0 and K > 0 such that
k
d
x(∙; τ
dt
+ h) ∈ L1 ([0, T ], Rn ) and
d
d
x(∙; τ + h) − x(∙; τ ) k1 ≤ K k h k
dt
dt
for all h ∈ B(0; ρ)
(5.15)
The proof is a direct consequence of Lemma 5.2.1, and assumption A1.
Lemma 5.2.2. Let δ ∈ R and : (t, δ) ∈ R2 → Rn be such that for all δ sufficiently
small, (∙, δ) ∈ L1 ([0, T ], Rn ) and such that
k (∙, δ) k1 → 0 as | δ |→ 0. Consider
the following non-homogenous, linear, delayed equation:
d
z(t) = A0 (t)z(t) + Σkj=1 Aj (t)z(t − τj ) + (t, δ)
dt
z(s) = 0
for − τk ≤ s ≤ 0
(5.16)
where the matrix functions Aj , j = 0, ..., k are continuous and uniformly bounded
in R , i.e. for all t ∈ R, k Aj (t) k≤ M, j = 0, ..., k, for some constant M . Then the
solutions of (5.16), considered as functions of δ, satisfy k z(∙) kC → 0 as | δ |→ 0.
Proof. The solution of the homogeneous equation with zero initial condition
d
z0 (t) = A0 (t)z0 (t) + Σkj=1 Aj (t)z0 (t − τj )
dt
z0 (s) = 0 for − τk ≤ s ≤ 0
62
(5.17)
is clearly z0 (t) ≡ 0, t ∈ [0, T ]. Let Z(t, s) be the fundamental matrix solution for
(5.16), i.e.
∂
Z(t, s) = A0 (t)Z(t, s) + Σkj=1 Aj (t)Z(t − τj , s); t > s,
∂t
Z(t, s) = 0 for s − τk ≤ t < s
and Z(t, s) = I
for t = s
(5.18)
It is well known that, under the conditions stated, such matrix function Z
exists , [45], p. 18, and that the solution of (5.16) is given by the variation of
constant formula
z(t) = z0 (t) +
Z
t
0
Z(t, s)(s, δ)ds, t ∈ [0, T ]
(5.19)
The matrix function Z is continuous for t ≥ s, so let μ > 0 be a constant such
that k Z(t, s)e k≤ μ k e k for all t ∈ [0, T ], t ≥ s, and all vectors e ∈ Rn . Hence,
as z0 (t) ≡ 0, t ∈ [0, T ],
k z(t) k≤
Z
t
0
≤μ
Z
k Z(t, s)(s, δ) k ds
t
0
k (s, δ) k ds, t ∈ [0, T ]
(5.20)
so that
k z(∙) kC ≤ μ k (∙, δ) k1 → 0 as | δ |→ 0
(5.21)
as claimed.
Δ
Proposition 5.2.1. The Fréchet derivative Y =
∂
H
∂τ
exists for all τ ∈ Rk as a
linear and bounded operator: Y : Rk → L2 ([0, T ], Rn ] and is given by a matrix
Δ
function Y (t; τ ), t ∈ [0, T ] whose columns yi (t; τ ) =
63
∂
H(τ ), i
∂τi
= 1, ..., k satisfy the
following equation on the interval t ∈ [0, T ]:
d
d
yi (t) = A0 (t)yi (t) + Σkj=1 Aj (t)yi (t − τj ) + Ai (t) x(t − τi )
dt
dt
yi (s) = 0
for s ∈ [−τk , 0]
(5.22)
where x(s), s ∈ [−τk , T ] is the solution of system (5.1) corresponding to delay
parameter vector τ and the given functions φ and u, and where the matrix functions
Aj , j = 0, ..., k, are given by
Δ
Aj (t) = f|j0 (x(t), x(t − τ1 ), ..., x(t − τk ), u(t))
for t ∈ [0, T ]; j = 1, ..., k + 1
(5.23)
Proof. Under the assumptions made, the solution to (5.22) exists and is unique as
the derivatives on the right hand side of (5.22) are absolutely integrable functions.
For a given constant δ ∈ R, let
Δ
xδ (t) = x(t; τ1 , ..., τi + δ, ..., τk );
Δ
x(t) = x(t; τ1 , ..., τi , ..., τk );
Δ
m(t) = xδ (t) − x(t),
t ∈ [0, T ];
Δ
Δxδ (t − τi ) = xδ (t − τi − δ) − xδ (t − τi ),
Δ
Δx(t − τi ) = x(t − τi − δ) − x(t − τi )
(5.24)
where x(t; τ1 , ..., τi , ..., τk ), t ∈ [0, T ], and x(t; τ1 , ..., τi + δ, ..., τk ), t ∈ [0, T ], denote the trajectories of (5.1) with time delay parameters τ = [τ1 , ..., τk ] and
τ = [τ1 , ..., τi + δ, ..., τk ], respectively.
For any function r(h) : Rn → R, let the statement r(h) = o(k h k) signify that
64
r(h)/ k h k→ 0 as k h k→ 0 (where the dimension n will be clear from the context).
By virtue of assumption A1
f (xδ (t), xδ (t − τ1 ), ..., xδ (t − τi − δ), ..., u(t))
−f (x(t), x(t − τ1 ), ..., x(t − τi ), ..., u(t))
= A0 (t)[xδ (t) − x(t)] + Σkj=1 Aj (t)[xδ (t − τj ) − x(t − τj )]
j6=i
+Ai (t)[xδ (t − τi − δ) − x(t − τi )] + w(t, δ)
(5.25)
where the matrix functions Aj , j = 1, ..., k, are given by (5.23) and the function w
comprises the second order terms in the expansion (5.25), specifically w is of the
form
w(t, δ) = w0 (t, δ) + Σkj=1 wj (t, δ) + wi (t, δ)
(5.26)
j6=i
where the component terms wl , l = 1, ..., k satisfy
k w0 (∙, δ) kC = o(k m(∙) kC )
k wj (∙, δ) kC = o(k m(∙ − τj ) kC ) j 6= i
k wi (∙, δ) kC = o(k m(∙ − τi − δ) + Δx(∙ − τi ) kC )
(5.27)
as xδ (t − τi − δ) − x(t − τi ) = m(t − τi − δ) + Δx(t − τi ). From Lemma 5.2.1 it
follows that there exist constants ρ > 0 and K > 0 such that
k m(∙) kC ≤ K | δ |,
k m(∙ − τj ) kC ≤ K | δ |
k m(∙ − τi − δ) + Δx(∙ − τi ) kC
≤k m(∙ − τi − δ) kC + k Δx(∙ − τi ) kC ≤ K | δ |
65
(5.28)
for all | δ |< ρ. Then, (5.27) implies that
k w(∙, δ) kC
→ 0 as | δ |→ 0
|δ|
(5.29)
The system equation (5.1) can be equivalently re-written in the form; see [45],
pp. 35:
Z
x(t) = φ(0) +
t
f (x(s), x(s − τ1 ), ..., x(s − τk ), u(s))ds
0
for all t ∈ [0, T ]. Hence, for all t ∈ [0, T ],
Z
t
k
Z
t
Aj (t)[xδ (s − τj ) − x(s − τj )]ds
0
0
Z t
Z t
δ
Ai (t)[x (s − τi − h) − x(s − τi )]ds +
w(s, h)ds
+
0
0
Z t
Z t
k
=
A0 (t)m(s)ds + Σ j=1
Aj (t)m(s − τj )ds
j6=i
0
0
Z t
Z t
Ai (t)[m(s − τi − δ) + x(s − τi − δ) − x(s − τi )]ds +
w(s, δ)ds
+
0
0
Z t
Z t
k
=
A0 (t)m(s)ds + Σj=1
Aj (t)m(s − τj )ds
0
0
Z t
Ai (t)[m(s − τi − δ) − m(s − τi )]ds
+
0
Z t
Z t
+
Ai (t)[x(s − τi − δ) − x(s − τi )]ds +
w(s, δ)ds
(5.31)
m(t) =
δ
(5.30)
A0 (t)[x (s) − x(s)]ds + Σ j=1
j6=i
0
0
Also,
yi (t) =
Z
t
A0 (t)yi (s)ds +
0
+
Z
Σkj=1
Z
0
t
Ai (t)
0
d
x(s − τi )ds
ds
66
t
Aj (t)yi (s − τj )ds
(5.32)
for all t ∈ [0, T ]. Then, for all t ∈ [0, T ], define
Δ
z(t) =
m(t) − yi (t)δ
,
δ
Δ
Δm(t − τi ) = m(t − τi − δ) − m(t − τi )
(5.33)
Clearly,
d δ
x (∙ − τi )δ k1 = o(| δ |),
dt
d
k Δx(∙ − τi ) − x(∙ − τi )δ k1 = o(| δ |)
dt
k Δxδ (∙ − τi ) −
and, by virtue of Corollary 5.2.1,
k Δm(∙ − τi ) k1 =k Δxδ (∙ − τi ) − Δx(∙ − τi ) k1
d δ
x (∙ − τi )δ k1
dt
d
+ k Δx(∙ − τi ) − x(∙ − τi )δ k1
dt
d δ
d
+ | δ | k x (∙ − τi ) − x(∙ − τi ) k1 = o(| δ |)
dt
dt
≤k Δxδ (∙ − τi ) −
(5.34)
It then follows from (5.31)-(5.34) that
z(t) =
Z
t
A0 (s)z(s)ds +
Σkj=1
0
Z
0
t
Aj (t)z(s − τj )ds +
Z
t
(s, δ)ds
(5.35)
0
with
d
1
1
Δ 1
(s, δ) = [Δx(s − τi ) − x(s − τi )δ] + Δm(s − τi ) + w(s, δ)
δ
ds
δ
δ
(5.36)
so that satisfies k (∙, δ) k1 → 0 as | δ |→ 0. Thus, the function z satisfies the
differential difference equation:
d
z(t) = A0 (t)z(t) + Σkj=1 Aj (t)z(t − τj ) + (t, δ)
dt
67
(5.37)
with initial condition z(s) = 0 for all −τk ≤ s ≤ 0. It follows from Lemma 5.2.2
that
√
k m(∙, τi , δ) − yi (∙)δ k2
= k z k2 ≤ T k z kC → 0 as | δ |→ 0
|δ|
which proves that the partial Fréchet derivative
∂
H(τ )
∂τi
(5.38)
is indeed given by the
solution of equation (5.22). The total Fréchet derivative is now seen to be given
by Y = [ ∂τ∂1 H(τ ), ..., ∂τ∂k H(τ )] = [y1 , ..., yk ], which follows from the fact that the
partial derivatives are all continuous in τ . The last is a consequence of Lemma
5.2.1 and Corollary 5.2.1. The differential operator Y : h 7→ Y h is clearly linear
and it is bounded as its domain is finite-dimensional.
Remark 5.2.1. It should be noted that the range of the Fréchet derivative R(Y ) is
a finite dimensional subspace of L2 ([0, T ], Rn ) as dim R(Y ) ≤ dim D(Y ) = k and
is thus closed. This makes the projection operator well defined and guarantees that
the minimum of the cost Ψ is achieved. The adjoint operator Y ∗ : R(Y ) → Rk is
calculated as follows
Z
T
hT Y T (s)x(s)ds
0
Z T
T
Y T (s)x(s)ds =< h | Y ∗ x >
=h
< Y h | x >2 =
0
for all h ∈ R , x ∈ L2 ([0, T ], Rn ), so that
Z T
∗
Y x=
Y T (s)x(s)ds, for all x ∈ L2 ([0, T ], Rn )
k
(5.39)
0
Furthermore, Y ∗ Y : Rk → Rk is given by
∗
Y Yh=
Z
T
Y T (s)Y (s)ds h,
0
68
for all h ∈ Rk
(5.40)
It should also be noted that all the above operators depend continuously on
the delay parameter vector τ ∈ Rk . For any given vector τ , invertibility of the
matrix Y ∗ Y (τ ) is guaranteed under the following conditions.
Proposition 5.2.2. The following conditions are equivalent:
(a) the matrix Y ∗ Y (τ ) is non-singular;
(b) the columns, yi (∙; τ ), i = 1, ..., k of Y (∙; τ ) are linearly independent functions
in C([0, T ], Rn );
(c) the exogenous forcing function u is such that the transformed velocities :
Δ
vi (t) = Ai (t) dtd x(t − τi ), i = 1, ..., k, defined a.e. on t ∈ [0, T ], are linearly
independent functions in L1 ([0, T ], Rn ).
Proof. Clearly, the matrix Y ∗ Y (τ ) is invertible if and only if it is positive definite.
The columns yi (∙; τ ), i = 1, ..., k of Y are linearly independent if and only if the
only vector h ∈ Rk that satisfies Y (τ )h = 0 is h = 0. This shows equivalence of
(a) and (b) as
T
∗
h Y Y (τ )h =
Z
T
0
k Y (s; τ )h k2 ds = 0, if and only if h = 0
(5.41)
Now, the columns yi (∙; τ ) are linearly dependent if and only if there exist
constants hi , i = 1, ..., k, not all zero, such that
Δ
z(t) = Σki=1 hi yi (t; τ ) ≡ 0 for all t ∈ [−τk , T ]
69
(5.42)
It follows from (5.22) that z satisfies
d
d
z(t) = Σki=1 hi Σkj=1 Aj (t)yi (t − τj ; τ ) + Σki=1 hi Ai (t) x(t − τi ), t ∈ [0, T ]
dt
dt
d
= Σkj=1 Aj (t)z(t − τj ) + Σki=1 hi Ai (t) x(t − τi ), z(s) = 0 for s ∈ [−τk , 0] (5.43)
dt
Equation (5.42) is equivalent to
d
z(t)
dt
≡ 0 for all t ∈ [−τk , T ] that, by virtue
of (5.43), is further equivalent to
Σki=1 hi Ai
d
x(t − τi ) ≡ 0 for all t ∈ [0, T ]
dt
(5.44)
This establishes equivalence between conditions (b) and (c).
The last result delivers a condition for local identifiability.
Proposition 5.2.3. Suppose that the exogenous input function f is such that the
transformed system velocities, vi (∙; τ ), i = 1, ...k, as defined in Proposition 5.2.2,
(c), are linearly independent for all τ ∈ Rk . Then the system (5.1) is locally
identifiable as specified by Definition 5.1.1. Additionally, for every closed ball
ˉ ; r) ⊂ Rk , r > 0 there exists a constant cr > 0 such that
B(τ̂
ˉ ; r)
k Y (τ )h k2 ≥ cr k h k for all h ∈ Rk , τ ∈ B(τ̂
(5.45)
There also exists a constant 0 < ρ ≤ r such that
k H(τ ) − H(τ̂ ) − Y (τ )h k2 ≤ 0.5cr k h k
(5.46)
ˉ ; ρ)
k H(τ̂ ) − H(τ ) k2 ≥ 0.5cr k τ̂ − τ k for all τ ∈ B(τ̂
(5.47)
70
Proof. By virtue of Proposition 5.2.2, for every τ ∈ Rk , there exists a constant
c(τ ) > 0 such that
k Y (τ )h k2 =k Σki=1 hi yi (∙; τ ) k2 ≥ c(τ ) k h k
for all h = [h1 , ..., hk ] ∈ Rk
(5.48)
(The above is often used as a criterion for linear independence of vectors yi (∙; τ ), i =
1, ..., k.) For every r > 0 the existence of the constant cr in (5.45) follows from
the continuity of the Fréchet derivative Y (τ ) with respect to τ and compactness
ˉ ; r) ⊂ Rk , r > 0.
of B(τ̂
From the definition of the Fréchet derivative Y (τ ), it further follows that there
exists a constant ρ ≤ r such that (5.46) holds. It follows that
k H(τ̂ ) − H(τ ) k2
≥k Y (τ )(τ̂ − τ ) k2 − k H(τ̂ ) − H(τ ) − Y (τ )(τ̂ − τ ) k2
ˉ ; ρ)
≥ 0.5cr k τ̂ − τ k for all τ ∈ B(τ̂
(5.49)
which proves (5.47).
Now, suppose that x(t) = x̂(t) for t ∈ [−τˆk , T ], but that the τ that produces
ˉ ; ρ) be such that τ ∈ B(τ̂
ˉ ; ρ). It follows from
H(τ ) = x is such that τ 6= τ̂ . Let B(τ̂
(5.49) that
0 =k x(∙) − x̂(∙) k2 ≥ 0.5cr k τ − τ̂ k
(5.50)
a contradiction. Hence τ = τ̂ , which proves local identifiability.
Under the conditions of Proposition (5.2.2) the pseudo-inverse operator is well
defined and the generalized Newton iteration for the minimization of Ψ is given
71
by
τ
n+1
Z
T
= τ − α(τ ) ∙ [
Y T (s; τ n )Y (s; τ n )ds]−1 ∙
0
Z T
Y T (s; τ n )[H(τ n )(s) − x̂(s)]ds
n
n
(5.51)
0
where H(τ n )(s), s ∈ [0, T ] is the system trajectory corresponding to the delay parameter vector τ n in iteration step n, for the given input and initial functions u
and φ, and where x̂(s), s ∈ [0, T ] is the measured trajectory.
5.3
Convergence analysis for the delay identifier
The generalized Newton search direction is zero only at the stationary points
of the minimized functional Ψ(τ, x̂) i.e. at points τ ∈ Rk at which
gradτ Ψ(τ, x̂) = Y (τ )∗ F (τ, x̂) = 0
(5.52)
Away from the stationary points, the functional Ψ is decreasing along the
generalized Newton direction, as then
< gradτ Ψ(τ, x̂) | −α(τ )Y (τ )† F (τ, x̂) >
= −α(τ ) < Y ∗ (τ )F (τ, x̂) | [Y (τ )∗ Y (τ )]−1 ∙ Y ∗ (τ )F (τ, x̂) >
<0
(5.53)
The last holds as, under the assumptions of Proposition 5.2.2, the matrix Y ∗ Y
is positive definite, so that its inverse is also positive definite.
The following auxiliary result is needed to demonstrate desirable properties
of the cost functional Ψ that are needed if the latter is to be used as a Lyapunov
72
function in the convergence analysis of the identifier algorithm.
Proposition 5.3.1. Suppose that the identifiability assumption of Proposition
ˉ ; ρ) ⊂ Rk and posi5.2.2, (c), is satisfied. Then, there exists a closed ball B(τ̂
tive constants γr1 , γr2 , γr3 such that
(i) the cost Ψ is a positive definite and decrescent function in the increment
Δ
ˉ ; ρ), i.e. for all τ ∈ B(τ̂
ˉ ; ρ),
h = τ̂ − τ , on the ball B(τ̂
γr1 k τ̂ − τ k2 ≤ Ψ(τ, x̂) ≤ γr2 k τ̂ − τ k2
(5.54)
ˉ ; ρ) in the sense that for all
(ii) the gradient of the cost is non-vanishing on B(τ̂
ˉ ; ρ),
τ ∈ B(τ̂
k gradτ Ψ(τ, x̂) k22 ≥ γr3 k τ̂ − τ k2
(5.55)
Proof. Let ρ be as in Proposition 5.2.3. The existence of γr1 follows directly from
(5.47) as Ψ(τ, x̂) =k H(τ̂ )−H(τ ) k22 . The existence of γr2 follows from the fact that
H is continuously differentiable with respect to τ and hence Lipschitz continuous
ˉ ; ρ).
on any compact set such as B(τ̂
Next, it is easy to verify that for h as defined above,
k H(τ̂ ) − H(τ ) − Y (τ )h k22 =k H(τ̂ ) − H(τ ) k22
+ k Y (τ )h k22 −2 < Y (τ )h | H(τ̂ ) − H(τ ) >2
73
(5.56)
By (5.45)-(5.47),
0.25c2r k h k2 ≥k H(τ̂ ) − H(τ ) − Y (τ )h k22
≥ 0.25c2r k h k2 +0.5c2r k h k2 − < h | Y ∗ (τ )[H(τ̂ ) − H(τ )] >
(5.57)
ˉ ; ρ). It follows that
for all τ ∈ B(τ̂
−0.5c2r k h k2 ≥ − < h | −Y ∗ (τ )F (τ, x̂) >
(5.58)
and hence, by the Schwartz inequality, that
0.5c2r k h k2 ≤ < h | −Y ∗ (τ )F (τ, x̂) >
≤ k h k k gradτ Ψ(τ, x̂) k2
(5.59)
Therefore,
k gradτ Ψ(τ, x̂) k22 ≥ 0.25c4r k h k2
(5.60)
ˉ ; ρ), as required.
for all τ ∈ B(τ̂
The result below now provides the convergence analysis of the identifier algorithm. It considers ”continuous-time” version of the Newton algorithm as described by the solutions of the Newton flow equation.
Theorem 5.3.1. Suppose that the identifiability condition is satisfied and ρ is
defined in Proposition 5.2.3. For some function α : [0, +∞) → [0, +∞), which
is continuous, strictly increasing, and satisfies α(0) = 0, consider the following
74
system for the Newton flow,
Z T
d
Y T (s; τ )Y (s; τ )ds]−1 ∙
τ = −α(k h k)[
dt
0
Z T
Y T (s; τ )[H(τ )(s) − x̂(s)]ds
(5.61)
0
Δ
with h = τ − τ̂ . Under these conditions, there exists a constant δ > 0 such that
ˉ ; δ) is
any solution of (5.61) emanating from an initial condition τ (0) = τ0 ∈ B(τ̂
ˉ ; ρ) for all times
continuable to the interval t ∈ [0, +∞), remains in the ball B(τ̂
t ≥ 0, and converges to τ̂ such that H(τ̂ ) = x̂.
Proof. Local solutions to (5.61) exist by virtue of the Peano Theorem [37], as the right hand sides are continuous with respect to τ. Continuation of solutions over the interval [0, ∞) holds in the absence of finite escape times, which is shown next.
Consider Ψ(τ, x̂) as a function of the increment h ≜ τ̂ − τ. For system (5.61), the derivative of Ψ along the corresponding system trajectories satisfies
(d/dt) Ψ(τ, x̂) = −α(‖h‖) ⟨gradτ Ψ(τ, x̂) | [Y*Y(t, τ)]⁻¹ gradτ Ψ(τ, x̂)⟩
  ≤ −α(‖h‖) λmin{[Y*Y(t, τ)]⁻¹} ‖gradτ Ψ(τ, x̂)‖₂²
  ≤ −α(‖h‖) λmin{[Y*Y(t, τ)]⁻¹} γr3 ‖τ̂ − τ‖²    (5.62)
as long as τ remains in the ball B̄(τ̂; ρ), where λmin denotes the smallest eigenvalue of the positive definite matrix [Y*Y]⁻¹. This, together with Proposition 5.2.3, implies that the cost Ψ(τ, x̂) is a locally (in B̄(τ̂; ρ)) positive definite and decrescent Lyapunov function for system (5.61). It then follows from the standard Lyapunov direct approach, see e.g. [125], Theorem 5.3.2 on p.165, that there exists a neighborhood B̄(τ̂; δ), δ ≤ ρ, such that the trajectories emanating from any initial condition τ(0) = τ₀ in that neighborhood remain in B̄(τ̂; ρ) for all times t ≥ 0, and hence cannot have finite escape times. By strict negative definiteness of the derivative (5.62), all such trajectories must converge to the asymptotically stable equilibrium τ = τ̂, as claimed.
Corollary 5.3.1. Suppose that the identifiability condition is satisfied as in Proposition 5.2.3. There exists a function α : Rk → [0, +∞), such that the generalized
Newton identifier algorithm (5.51) is locally convergent to the true delay parameter
vector τ̂ , with H(τ̂ ) = x̂.
The result follows directly from Theorem 5.3.1 by considering the system
(5.61) in discrete time.
5.4  Numerical techniques and examples
The numerical methods used in this thesis for delay identification in nonlinear delayed systems employ the Time-Delay System Toolbox, which was developed by the Russian Academy of Sciences and Seoul National University [52]. The Time-Delay System Toolbox provides support for numerical simulation of linear and nonlinear systems with discrete and distributed delays. Runge–Kutta–Fehlberg-like numerical schemes of orders 4 and 5 are used in this thesis. Before the numerical simulations are presented, the computational technique is discussed.
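The simulations in this thesis rely on the MATLAB Time-Delay System Toolbox mentioned above; as a point of reference only, the sketch below shows a minimal fixed-step Runge–Kutta scheme for delay differential equations in Python, with delayed states obtained by linear interpolation of the stored trajectory (the "method of steps"). The function name and interface are illustrative, not part of the toolbox, and the scheme assumes the delays are large compared with the step size.

```python
import numpy as np

def integrate_dde(f, phi, delays, u, t_final, dt=1e-3):
    """Fixed-step RK4 integration of x'(t) = f(t, x(t), [x(t-tau_1), ..., x(t-tau_k)], u(t)).

    f      : callable f(t, x, delayed_states, u) -> dx/dt
    phi    : callable phi(t) giving the initial history for t <= 0
    delays : list of positive delays tau_1 < ... < tau_k
    u      : callable u(t) giving the exogenous input
    Returns the time grid and the computed state trajectory.
    """
    n_steps = int(round(t_final / dt))
    ts = np.linspace(0.0, n_steps * dt, n_steps + 1)
    xs = [np.atleast_1d(np.asarray(phi(0.0), dtype=float))]

    def past_state(t):
        # Delayed state: history function for t <= 0, linear interpolation afterwards.
        if t <= 0.0:
            return np.atleast_1d(np.asarray(phi(t), dtype=float))
        i = min(int(t / dt), len(xs) - 1)
        if i >= len(xs) - 1:
            return xs[-1]          # constant extrapolation at the integration front
        w = (t - ts[i]) / dt
        return (1.0 - w) * xs[i] + w * xs[i + 1]

    def rhs(t, x):
        return np.asarray(f(t, x, [past_state(t - tau) for tau in delays], u(t)), dtype=float)

    for k in range(n_steps):
        t, x = ts[k], xs[-1]
        k1 = rhs(t, x)
        k2 = rhs(t + 0.5 * dt, x + 0.5 * dt * k1)
        k3 = rhs(t + 0.5 * dt, x + 0.5 * dt * k2)
        k4 = rhs(t + dt, x + dt * k3)
        xs.append(x + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4))
    return ts, np.array(xs)
```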
5.4.1  Computational technique
In the generalized Newton iteration, calculation of the Fréchet derivative plays a key role in the numerical implementation. The generalized Newton iteration (5.51) is rewritten as
τ^(M+1) = τ^M − α(τ^M) Y(τ^M)† F(τ^M, x̂)
        = τ^M − α(τ^M) [Y(τ^M)* Y(τ^M)]⁻¹ Y(τ^M)* [H(τ^M) − x̂(t)]    (5.63)
The calculation of the adjoint operator Y* : R(Y) → Rᵏ is shown in (5.39). The finite difference approximation of the Fréchet derivative Y ≜ ∂F/∂τ = ∂H/∂τ is obtained as follows. From Lemma 1.1 on p.39 of [45],
H(τ^M)(t) = φ(0) + ∫₀ᵗ f(x(s), x(s − τ₁^M), ..., x(s − τ_k^M), u(s)) ds    (5.64)
and the Fréchet derivative is
Yᵀ(τ^M) = [∂H(τ^M)/∂τ₁ ; ∂H(τ^M)/∂τ₂ ; ... ; ∂H(τ^M)/∂τ_k]_(k×n),
i.e. Y(τ^M) = [∂H(τ^M)/∂τ₁  ∂H(τ^M)/∂τ₂  ···  ∂H(τ^M)/∂τ_k]_(n×k) with entries [∂/∂τ_j ∫₀ᵗ f_i(·) ds], i = 1, ..., n, j = 1, ..., k    (5.65)
where we let:
(i) the system be written componentwise as
(d/dt) x(t) = f(x(t), x(t − τ₁), ..., x(t − τ_k), u(t)),  i.e.  ẋ_i(t) = f_i(x(t), x(t − τ₁), ..., x(t − τ_k), u(t)),  i = 1, ..., n    (5.66)
(ii) f_j(·) ≜ f_j(x(s), x(s − τ₁^M), ..., x(s − τ_k^M), u(s)), j = 1, ..., n;
(iii) similarly to (5.24), define
  (a) x^δ_j(t; τ_i + δ) ≜ x_j(t; τ₁, ..., τ_i + δ, ..., τ_k) = x_j(t | τ_i + δ)
  (b) x_j(t) ≜ x_j(t; τ₁, ..., τ_i, ..., τ_k)
  (c) m_j(t) ≜ x^δ_j(t) − x_j(t),  t ∈ [0, T]    (5.67)
The element in row i and column j of the matrix function Yᵀ(τ^M) is then derived as
∂/∂τ_i ∫₀ᵗ f_j(x(s), x(s − τ₁^M), ..., x(s − τ_i^M), ..., x(s − τ_k^M), u(s)) ds
 = ∫₀ᵗ ∂/∂τ_i f_j(x(s), x(s − τ₁^M), ..., x(s − τ_i^M), ..., x(s − τ_k^M), u(s)) ds
 = lim_(h→0) (1/h) ∫₀ᵗ [f_j(x(s), ..., x(s − (τ_i^M + h)), ..., u(s)) − f_j(x(s), ..., x(s − τ_i^M), ..., u(s))] ds
 = lim_(h→0) (1/h) m_j(t)
 = lim_(h→0) (1/h) [x_j(t | τ_i + h) − x_j(t)]    (5.68)
So the iteration (5.63) can be written as follows:
τ^(M+1) = τ^M − α(τ^M) Y(τ^M)† F(τ^M, x̂) = τ^M − α(τ^M) [∫₀ᵀ A(τ^M) dt]⁻¹ ∫₀ᵀ B(τ^M) dt    (5.69)
where, using the finite difference approximation (5.68),
A(τ^M) = [∂/∂τ_j ∫₀ᵗ f_i(·) ds]ᵀ_(k×n) · [∂/∂τ_j ∫₀ᵗ f_i(·) ds]_(n×k)
       ≈ (1/h) [x_i(t | τ_j^M + h) − x_i(t)]ᵀ_(k×n) · (1/h) [x_i(t | τ_j^M + h) − x_i(t)]_(n×k),
so that the (p, q)-th entry of A(τ^M) is (1/h²) Σ_(i=1)^n [x_i(t | τ_p^M + h) − x_i(t)][x_i(t | τ_q^M + h) − x_i(t)],
and
B(τ^M) = (1/h) [x_i(t | τ_j^M + h) − x_i(t)]ᵀ_(k×n) · [x(τ^M)(t) − x̂(t)]_(n×1),
whose p-th entry is (1/h) Σ_(i=1)^n [x_i(t | τ_p^M + h) − x_i(t)][x_i(τ^M)(t) − x̂_i(t)].
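A minimal sketch of the discrete iteration (5.63)/(5.69) is given below, with the sensitivities computed by the finite differences (5.68). The function `identify_delays` is an illustrative name; `simulate` is a user-supplied routine (for instance built on the DDE integrator sketched in Section 5.4) returning the state trajectory on a fixed time grid for a given delay vector. Because the grid is uniform, the quadrature weight in ∫A dt and ∫B dt cancels in the Newton step, so plain sums can be used.

```python
import numpy as np

def identify_delays(simulate, x_hat, tau0, h=1e-3, alpha=0.75, tol=1e-6, max_iter=50):
    """Generalized Newton (Gauss-Newton) delay identifier, cf. (5.63) and (5.69).

    simulate : callable tau -> (N, n) array of x(t; tau) on a fixed grid
    x_hat    : (N, n) array of the measured trajectory on the same grid
    tau0     : initial guess for the delay vector
    h        : finite-difference increment for the Frechet derivative, cf. (5.68)
    alpha    : constant step-size function alpha(tau)
    """
    tau = np.array(tau0, dtype=float)
    for _ in range(max_iter):
        x_nom = simulate(tau)                      # H(tau^M)
        resid = (x_nom - x_hat).ravel()            # F(tau^M, x_hat) = H(tau^M) - x_hat
        # Finite-difference sensitivities: one column of Y per delay parameter.
        Y = np.empty((resid.size, tau.size))
        for i in range(tau.size):
            tau_pert = tau.copy()
            tau_pert[i] += h
            Y[:, i] = (simulate(tau_pert) - x_nom).ravel() / h
        # Generalized Newton step: tau <- tau - alpha (Y^T Y)^{-1} Y^T (H(tau) - x_hat).
        step = alpha * np.linalg.solve(Y.T @ Y, Y.T @ resid)
        tau_new = tau - step
        if np.linalg.norm(tau_new - tau) <= tol:   # stop condition ||tau^{M+1} - tau^M|| <= tol
            return tau_new
        tau = tau_new
    return tau
```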
In order to illustrate the calculation of the above, we demonstrate an example
as follows:
We consider a two-dimensional nonlinear multiple-delay system expressed as:
(d/dt) x₁(t) = f₁(x(t), x(t − τ₁), x(t − τ₂), u(t))
(d/dt) x₂(t) = f₂(x(t), x(t − τ₁), x(t − τ₂), u(t))
The delay identification iteration is
τ^(M+1) = τ^M − α(τ^M) [Y(τ^M)* Y(τ^M)]⁻¹ Y(τ^M)* [H(τ^M) − x̂(t)]
        = τ^M − α(τ^M) [∫₀ᵀ A(τ^M) dt]⁻¹ ∫₀ᵀ B(τ^M) dt
where τ^M = [τ₁^M, τ₂^M]ᵀ and, up to the factor 1/h²,
A(τ^M) is the 2×2 matrix with entries
A₁₁ = (x₁(t|τ₁^M+h) − x₁(t))² + (x₂(t|τ₁^M+h) − x₂(t))²
A₁₂ = A₂₁ = (x₁(t|τ₁^M+h) − x₁(t))(x₁(t|τ₂^M+h) − x₁(t)) + (x₂(t|τ₁^M+h) − x₂(t))(x₂(t|τ₂^M+h) − x₂(t))
A₂₂ = (x₁(t|τ₂^M+h) − x₁(t))² + (x₂(t|τ₂^M+h) − x₂(t))²
and, up to the factor 1/h, B(τ^M) is the 2×1 vector with entries
B₁ = (x₁(t|τ₁^M+h) − x₁(t))(x₁(τ^M) − x̂₁(t)) + (x₂(t|τ₁^M+h) − x₂(t))(x₂(τ^M) − x̂₂(t))
B₂ = (x₁(t|τ₂^M+h) − x₁(t))(x₁(τ^M) − x̂₁(t)) + (x₂(t|τ₂^M+h) − x₂(t))(x₂(τ^M) − x̂₂(t))
5.5
Numerical examples and discussion
In this section, the algorithm is implemented first using some representative
examples in the research of Banks et al. [8]. Then a couple of bio-scientific examples are simulated and the results are discussed.
Example 5.5.1:[8]
In the first example, a problem of a nonlinear pendulum with delayed damping is
considered. The system may be expressed as:
ẍ(t) + k ẋ(t − τ) + (g/l) sin x(t) = 0    (5.70)
where τ is the parameter to be identified and the initial condition is:
x(Θ) = 1, Θ ≤ 0,
ẋ(Θ) = 0, Θ ≤ 0.
Equation (5.70) can be rewritten in state-space form with x₁(t) = x(t) and x₂(t) = ẋ₁(t) as follows:
(d/dt) x₁(t) = x₂(t)    (5.71)
(d/dt) x₂(t) = −k x₂(t − τ) − (g/l) sin(x₁(t))    (5.72)
with the initial condition x₁(Θ) = 1, Θ ≤ 0, and x₂(Θ) = 0, Θ ≤ 0.
The true values in this example are τ̂ = 2, k̂ = 4, and g/l = 9.81, and the initial guess value is τ⁰ = 2.5. Table 5–1 shows the simulation result. The convergence of the algorithm is illustrated in Figure 5–1.
M      τ^M
2      2.0291
4      2.0023
8      2.0000
Table 5–1: Parameter values in Example 5.5.1
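For illustration, the sketch below sets up Example 5.5.1 in terms of the two helper routines sketched earlier in this chapter (`integrate_dde` and `identify_delays`, both illustrative names, not part of the Time-Delay System Toolbox). The observation window T_FINAL and the integration step are assumptions of the sketch; the data x̂ is generated here by simulating with the true delay.

```python
import numpy as np

# Pendulum with delayed damping, Example 5.5.1: x1' = x2, x2' = -k*x2(t - tau) - (g/l)*sin(x1).
K_TRUE, G_OVER_L, T_FINAL = 4.0, 9.81, 5.0   # T_FINAL: assumed observation window

def pendulum_rhs(t, x, delayed, u):
    x_tau = delayed[0]                        # state evaluated at t - tau
    return np.array([x[1], -K_TRUE * x_tau[1] - G_OVER_L * np.sin(x[0])])

phi = lambda t: np.array([1.0, 0.0])          # x1 = 1, x2 = 0 for t <= 0
u_zero = lambda t: 0.0

def simulate(tau):
    _, xs = integrate_dde(pendulum_rhs, phi, [tau[0]], u_zero, T_FINAL, dt=1e-2)
    return xs

x_hat = simulate(np.array([2.0]))             # "measured" data generated with the true delay
tau_est = identify_delays(simulate, x_hat, tau0=[2.5], alpha=0.75)
print(tau_est)                                # expected to approach 2.0, as in Table 5-1
```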
Example 5.5.2:[8]
The example considered next is a nonlinear nonautonomous multiple-delay system.
The system is modeled as:
ẋ(t) = −t x(t) + 2 x(t − τ₁) + 3 x(t − τ₂) / (K + x(t − τ₂))    (5.73)
Figure 5–1: Delay identification for Example 5.5.1 with true delay τ̂ = 2 and initial delay τ⁰ = 2.5.
with initial condition
x(Θ) = −mΘ for −2 ≤ Θ ≤ 0, and x(Θ) = 20 + mΘ for −4 ≤ Θ ≤ −2.
In this example, the true values are τ̂1 = 1, τ̂2 = 2, K = 10, m = 5. For initial
guess delays τ10 = 0.5, τ20 = 2.5, the result of the delay identification is shown in
Table 5–2. Figure 5–2 is the iteration plot for the proposed algorithm.
M
2
4
8
10
16
19
τ1M
0.9011
0.9664
0.9975
0.9995
1.0000
1.0000
τ2M
2.5970
2.5312
2.0230
2.0071
2.0005
2.0000
Table 5–2: Parameter value in Example 5.5.2
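The right-hand side of (5.73) and its piecewise initial history can be coded directly in the form expected by the integrator sketched in Section 5.4; the snippet below is a minimal sketch under that assumption (function names illustrative).

```python
import numpy as np

K_SAT, M_SLOPE = 10.0, 5.0   # K and m from Example 5.5.2

def banks_rhs(t, x, delayed, u):
    # x'(t) = -t*x(t) + 2*x(t - tau1) + 3*x(t - tau2) / (K + x(t - tau2)), cf. (5.73)
    x_tau1, x_tau2 = delayed
    return -t * x + 2.0 * x_tau1 + 3.0 * x_tau2 / (K_SAT + x_tau2)

def phi(theta):
    # Piecewise initial history: -m*theta on [-2, 0], 20 + m*theta on [-4, -2].
    return -M_SLOPE * theta if theta >= -2.0 else 20.0 + M_SLOPE * theta
```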
Figure 5–2: Delay identification for Example 5.5.2 with true delays τ̂₁ = 1, τ̂₂ = 2 and initial delays τ₁⁰ = 0.5, τ₂⁰ = 2.5.
Example 5.5.3:[8]
A multiple-delay equation with nonlinearity is finally considered.
ẋ(t) = −1.5x(t) − 1.25x(t − τ1 ) + cx(t − τ2 ) sin x(t − τ2 )
(5.74)
x(Θ) = 10Θ + 1, Θ ≤ 0.
The true values for this example are τ̂₁ = 1, τ̂₂ = 2, ĉ = 1. The estimation of the delays with the start-up values τ₁⁰ = 1.4, τ₂⁰ = 2.2 is considered; the iterates are listed in Table 5–3. Figure 5–3 shows a quick and precise estimation of both delays.
5.5.1
Numerical examples in Bioscience
Motivated by the need to model real-life problems, bioscience is a natural application area. Since delays occur naturally in
M      τ₁^M     τ₂^M
0      1.4      2.2
1      1.2170   2.1176
2      1.1337   2.0589
4      1.0186   2.0068
6      1.0013   2.0004
9      1.0000   2.0000
Table 5–3: Parameter values in Example 5.5.3
Figure 5–3: Delay identification for Example 5.5.3 with true values τ̂₁ = 1, τ̂₂ = 2 and initial values τ₁⁰ = 1.4, τ₂⁰ = 2.2.
biological systems, mathematical modeling with delay differential difference equations is widely used for analysis and prediction in various areas of bioscience, such as epidemiology, immunology, population dynamics, physiology, and neural networks [5, 62, 81, 82]. The reason for introducing delays in such models is that delay differential equations provide a more complete mathematical representation (compared with ordinary differential equations) for the study of biological systems and demonstrate better consistency with the nature of the biological processes. The time delays in these models capture the dependence of the present state of the modeled system on its past history. The delay can be related to the duration of
certain processes such as the stages of the life cycle, the time between infection
of a cell and the production of new viruses, the duration of the infectious period,
the immune period, and so on. A recent review on differential difference systems
in bioscience can be found in [5, 16] and in their references.
In this section, two biological examples are considered using the delay identification approach discussed in this chapter. Example 5.5.4 [117] involves drug treatment strategies aiding the humoral immune system, modeled by a nonlinear differential difference system with a single delay. Example 5.5.5 [78] concerns a glucose-insulin regulatory system represented by a two-dimensional state space model with two delays.
Example 5.5.4:[117] (Disease Dynamics for Haemophilus influenzae)
The system model for Haemophilus influenzae is as follows:
The Antigen Rate Equation:
dB(t)/dt = a₁ B(t) − w (A(t) − Aeq)B(t) / (d + B(t)/η + A(t) − Aeq) − b B²(t) − β B(t) û(t)    (5.75)
The Antibody Rate Equation:
dA(t)/dt = ρ (1 − A(t)/A*) · [A(t − τ)B(t − τ) / (η(r + A(t − τ) + B(t − τ)/η))] · 1₊(t − τ) − w (A(t) − Aeq)B(t) / (η(d + B(t)/η + A(t) − Aeq)) − a₂ (A(t) − Aeq)    (5.76)
where 1+ (t − τ ) = 1 for t ≥ τ and 0 for t < τ , and B(t) and A(t) represent antigen
(bacteria) and antibody concentrations in the blood, respectively.
The delay terms in this system compensate for the activation, proliferation,
and differentiation times of the T-helper and B cells, which play an important role
in establishing and maximizing the capabilities of the immune systems.
The parameters for the Example 5.5.4 are given in Table 5–4.
parameter
a1
b
a2
η
w
d
ρ
r
β
A∗
Ae q
τ
A(0)
B(0)
value
0.09
2.25 × 10−5
1.54 × 10−3
4.12 × 1013
0.237
0.158
8 × 1011
8 × 10−5
0.0013
1500
1.455 × 10−7
48
1.455 × 10−7
1.67 × 10−3
unit
h−1
ml/cf u/h
h−1
cf u/μg
h−1
μg/ml
h−1
μg/ml
ml/μg/h
μg/ml
μg/ml
h
μg/ml
cf u/ml
Table 5–4: Parameters in Example 5.5.4
Simulation Results and Discussion:
Case 1.1
In this case, the delay for the model system in Example 5.5.4 is τ̂ = 48. Two initial guess delays, τ⁰ = 40 and τ⁰ = 56, are employed. The delay identifier algorithm with stop condition ‖τ^(M+1) − τ^M‖₂ ≤ 10⁻⁶ is applied and a constant step size α = 0.75 is chosen; see Equation (5.61). Figure 5–4 shows that both initial guesses need only five iterations to reach the desired tolerance. In Table 5–5, the result of the delay parameter estimation is shown for Example 5.5.4.
Figure 5–4: Delay identification for the disease dynamics of Haemophilus influenzae with the true delay τ̂ = 48 and initial delays τ⁰ = 40 and τ⁰ = 56, with constant step size α = 0.75.
Case 1.2
In Figure 5–5, the step size α is changed from 0.75 to 0.25. It is noted that the iteration converges in a much smoother fashion with the smaller step size, but takes more steps to reach the desired tolerance. For the initial guess delay τ⁰ =
Iteration (M)   τ^M (τ⁰ = 40)   τ^M (τ⁰ = 56)
0               40              56
1               45.0263         39.2397
2               44.5405         44.5296
3               47.9963         47.3615
4               47.9991         47.9992
5               47.9998         47.9998
Table 5–5: Parameter values of Case 1.1 in Example 5.5.4
40, the number of iterations needed to converge is 34, and for τ⁰ = 56 it is 20, as shown in Figure 5–5.
Figure 5–5: Delay identification for the disease dynamics of Haemophilus influenzae with the true delay τ̂ = 48 and initial guess delays τ⁰ = 40 and τ⁰ = 56, with step size α = 0.25.
Case 1.3
Figure 5–6 compares the effect of the step size on the convergence of the proposed delay identifier. In this case, the delay for the model system is τ̂ =48. The
initial guess value for the delay parameter is τ⁰ = 40. With the step size chosen to be α = 0.25 and α = 2.1, respectively, Figure 5–6 shows that the delay identifier with α = 0.25 converges in 34 steps, whereas the identifier with α = 2.1 does not converge to any specific value.
Figure 5–6: Delay identification for the disease dynamics of Haemophilus influenzae with the true delay τ̂ = 48 and initial delay τ⁰ = 40, with α = 0.25 and α = 2.1.
Example 5.5.5:[78] (Glucose-Insulin Regulatory System with Two Delays)
The system model for the glucose-insulin regulatory system is as follows:
dG(t)/dt = Gin − f₂(G(t)) − f₃(G(t)) f₄(I(t)) + f₅(I(t − τ₂))    (5.77)
dI(t)/dt = f₁(G(t − τ₁)) − di I(t)    (5.78)
where the initial conditions are I(0) = I₀ > 0 and G(0) = G₀ > 0, with G(t) ≡ G₀ for t ∈ [−τ₁, 0] and I(t) ≡ I₀ for t ∈ [−τ₂, 0], τ₁, τ₂ > 0.
The functions fᵢ, i = 1, 2, 3, 4, 5, are determined in Tolic et al. [121]:
f₁(G) = Rm / (1 + exp((C₁ − G/Vg)/a₁))
f₂(G) = Ub (1 − exp(−G/(C₂ Vg)))
f₃(G) = G / (C₃ Vg)
f₄(I) = U₀ + (Um − U₀) / (1 + exp(−β ln(I/C₄ (1/Vi + 1/(E ti)))))
f₅(I) = Rg / (1 + exp(γ(I/Vp − C₅)))
The parameters for the Example 5.5.5 are shown in Table 5–6.
parameter   value   unit
Vg          10      l
Rm          210     mU min⁻¹
a₁          300     mg l⁻¹
C₁          2000    mg l⁻¹
Ub          72      mg min⁻¹
C₂          144     mg l⁻¹
C₃          1000    mg l⁻¹
Vp          3       l
Vi          3       l
E           0.2     l min⁻¹
U₀          40      mg min⁻¹
Um          940     mg min⁻¹
β           1.77    –
C₄          80      mU l⁻¹
Rg          180     mg min⁻¹
γ           0.29    l mU⁻¹
C₅          26      mU l⁻¹
ti          100     min
Table 5–6: Parameters in Example 5.5.5
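A minimal sketch of the model functions f₁–f₅ and the two-delay right-hand side (5.77)–(5.78), in the form expected by the DDE integrator sketched in Section 5.4, is given below. All names are illustrative; the insulin degradation rate d_i appears in (5.78) but is not listed in Table 5–6, so its value here is a placeholder assumption.

```python
import numpy as np

# Parameter values from Table 5-6 (units as listed there).
P = dict(Vg=10.0, Rm=210.0, a1=300.0, C1=2000.0, Ub=72.0, C2=144.0, C3=1000.0,
         Vp=3.0, Vi=3.0, E=0.2, U0=40.0, Um=940.0, beta=1.77, C4=80.0,
         Rg=180.0, gamma=0.29, C5=26.0, ti=100.0)

def f1(G): return P['Rm'] / (1.0 + np.exp((P['C1'] - G / P['Vg']) / P['a1']))
def f2(G): return P['Ub'] * (1.0 - np.exp(-G / (P['C2'] * P['Vg'])))
def f3(G): return G / (P['C3'] * P['Vg'])
def f4(I): return P['U0'] + (P['Um'] - P['U0']) / (
        1.0 + np.exp(-P['beta'] * np.log(I / P['C4'] * (1.0 / P['Vi'] + 1.0 / (P['E'] * P['ti'])))))
def f5(I): return P['Rg'] / (1.0 + np.exp(P['gamma'] * (I / P['Vp'] - P['C5'])))

def glucose_insulin_rhs(t, x, delayed, u, d_i=0.06):
    # x = [G, I]; delayed = [x(t - tau1), x(t - tau2)]; u = Gin (exogenous glucose input).
    # d_i: insulin degradation rate from (5.78); value not listed in Table 5-6 (placeholder).
    G, I = x
    G_tau1 = delayed[0][0]
    I_tau2 = delayed[1][1]
    dG = u - f2(G) - f3(G) * f4(I) + f5(I_tau2)
    dI = f1(G_tau1) - d_i * I
    return np.array([dG, dI])
```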
Simulation Results and Discussion:
Case 2.1
In this case, the delays for the model system are τ̂₁ = 7 and τ̂₂ = 12. The corresponding initial guess values for the delay parameters are τ₁⁰ = 10 and τ₂⁰ = 9. The delay identifier algorithm with stop condition ‖τ^(M+1) − τ^M‖₂ ≤ 10⁻⁶ is applied and the step size is chosen to be α = 0.75. Figure 5–7 shows that both delay parameters converge in 7 steps to within a tolerance of 0.001. The result of the delay parameter estimation for Example 5.5.5 is shown in Table 5–7.
Figure 5–7: Delay identification for the glucose-insulin regulatory system with true delays τ̂₁ = 7, τ̂₂ = 12, initial guess delays τ₁⁰ = 10, τ₂⁰ = 9, and step size α = 0.75.
Case 2.2
In Figure 5–8, most of the parameters are the same as in Case 2.1, except that the step size α is changed from 0.75 to 0.25. The same behaviour
Iteration (M)   τ₁^M     τ₂^M
0               10       9
1               7.5160   11.5358
2               7.1220   11.8927
3               7.0301   11.9737
4               7.0075   11.9935
5               7.0019   11.9984
6               7.0005   11.9996
7               7.0001   11.9999
Table 5–7: Parameter values for Case 2.1 in Example 5.5.5
as in Case 1.2 is observed: the smaller the step size, the smoother the convergence plot, but the slower the rate of convergence.
Figure 5–8: Delay identification for the glucose-insulin regulatory system with true delays τ̂₁ = 7 and τ̂₂ = 12 and initial guess delays τ₁⁰ = 10 and τ₂⁰ = 9, with step size α = 0.25.
Case 2.3
In this case, τ̂₁ = 7 and τ̂₂ = 36 are assigned. The selected initial guess delay parameters are τ₁⁰ = 9 and τ₂⁰ = 26. Figure 5–9 depicts the convergence of the algorithm to the true values of the delays. The results of the delay parameter estimation for Case 2.3 are shown in Table 5–8.
Figure 5–9: Delay identification for the glucose-insulin regulatory system with true delays τ̂₁ = 7, τ̂₂ = 36, initial guess delays τ₁⁰ = 9, τ₂⁰ = 26, and step size α = 0.75.
Iteration (M)   τ₁^M     τ₂^M
0               9        26
1               4.5981   36.0321
2               6.2882   36.1601
3               6.8147   36.0492
4               6.9541   36.0128
5               6.9885   36.0032
6               6.9971   36.0008
7               6.9993   36.0002
8               6.9998   36.0001
Table 5–8: Parameter values for Case 2.3 in Example 5.5.5
Case 2.4
In this case, the initial guess delay τ₁⁰ is varied over 19, 17, and 15.5, with τ₂⁰ = 18 fixed. Figure 5–10 shows the effect on τ₂^M of changing τ₁⁰. The true delays for the model system are (τ̂₁, τ̂₂) = (20, 25) and the step size is α = 0.75.
Figure 5–10: Delay identification for the glucose-insulin regulatory system with true delays τ̂₁ = 20, τ̂₂ = 25, initial guess delays τ₁⁰ = 19, 17, 15.5, τ₂⁰ = 18, and step size α = 0.75.
Case 2.5
The delay identifier with different step sizes α = 0.75, 0.5, and 0.25 is tested in this case. The remaining parameters are (τ̂₁, τ̂₂) = (20, 25) and (τ₁⁰, τ₂⁰) = (17, 18). As shown in Figure 5–11, the larger the step size, the faster the convergence; e.g., it takes only 9 steps to reach the desired value with α = 0.75, whereas 29 steps are needed for α = 0.25 to satisfy the same error tolerance. However, with a smaller step size, the amplitude of the overshoot in the first estimate of the delay parameters (i.e. τ₁¹, τ₂¹) is much smaller, which suggests that the feasible region for the initial guess delay parameters (τ₁⁰, τ₂⁰) could be larger.
Figure 5–11: Delay identification for the glucose-insulin regulatory system with true delays τ̂₁ = 20, τ̂₂ = 25, initial guess delays τ₁⁰ = 17, τ₂⁰ = 18, and step sizes α = 0.75, 0.5, and 0.25.
CHAPTER 6
Model Predictive Control of Linear Time Delayed Systems[93, 91]
6.1
Introduction
This chapter concerns the design of model predictive control, also known as
receding horizon control (RHC), for time delayed systems. In this chapter we provide a simple constructive method for the design of the optimal cost function in
the receding horizon control for general time delayed systems. Once the system
can be stabilized and a Lyapunov function can be found, the design is straightforward and applies to systems with an arbitrary number of system delays. The
contributions of this chapter are listed as follows:
1. a simple constructive procedure for the design of the open loop cost penalties
on the terminal state is given which is suitable for an arbitrary number of
system delays.
2. a clear association is made between the stabilizability of the system and the
existence of a stabilizing receding horizon control law.
3. it is proved rigorously that the receding horizon strategy guarantees global,
uniformly asymptotic stabilization of the differential difference system with
an arbitrary number of system delays.
4. it is also shown how the receding horizon control gains can be computed in
terms of the solution of the Riccati equation.
98
The chapter is organized as follows: the problem statement and notation are
presented in Section 6.2, followed by sufficient conditions for successful control design in Section 6.3. The stabilizing property of the receding horizon control law is
stated in Section 6.4. In Section 6.5, the computational technique of the receding
horizon control law is provided. In Section 6.6, the sensitivity of the receding horizon control law with respect to perturbations in the delay values is investigated.
The efficiency of the resulting methodology is further demonstrated using several
examples in Section 6.7.
6.2
Problem statement and notation
For any subinterval [a, b] ⊂ R, let C([a, b], Rⁿ) be the Banach space of continuous vector functions f : [a, b] → Rⁿ with the usual norm ‖·‖_C defined by ‖f‖_C ≜ sup_{s∈[a,b]} ‖f(s)‖. Similarly, let C¹([a, b], Rⁿ) denote the class of continuously differentiable vector functions on [a, b].
The class of linear time-delay systems considered here is restricted to systems whose models are given in terms of differential difference equations of the form
(d/dt) x(t) = Ax(t) + Σᵢ₌₁ᵏ Aᵢ x(t − τᵢ) + Bu(t)    (6.1)
where x(t) ∈ Rⁿ, u(t) ∈ Rᵐ is a continuous function that represents an exogenous input, 0 < τ₁ < τ₂ < ... < τ_k are time delays, and the system matrices A, Aᵢ ∈ Rⁿˣⁿ, i = 1, ..., k, and B ∈ Rⁿˣᵐ are constant. Let any continuously differentiable function φ ∈ C¹((−τ_k, 0], Rⁿ) which satisfies lim_{t→0−} φ̇(t) = φ̇(0−) serve as the initial condition for system (6.1), so that
x(s) = φ(s),  −τ_k ≤ s ≤ 0    (6.2)
Introducing the usual symbol x_t ∈ C([t − τ_k, t], Rⁿ), defined by x_t(s) ≜ x(t + s) for all s ∈ [−τ_k, 0], the initial condition can be stated simply as x₀ = φ. (Under the above conditions, there exists a unique solution to (6.1) defined on [−τ_k, +∞) that coincides with φ on the interval [−τ_k, 0]; see [45], p.14.)
The design of a receding horizon (RH) feedback control relies on an adequate definition of an open loop cost function, here denoted by J(x_{t0}, u, t0, tf), on some suitably chosen time horizon [t0, tf]. The receding horizon control is then constructed as follows. Given that the full system state x_t is accessible for measurement at any given time instant t, the RH feedback control at t is computed as the value u*(t), where u* solves the optimal control problem
min_u J(x_t, u, t, t + T),  i.e.  u* = arg min_u J(x_t, u, t, t + T)    (6.3)
with a given, fixed horizon T > 0. Due to the time invariant nature of the system,
the class of minimizing controls can be restricted to the class of piece-wise continuous functions and u∗ is clearly a function of the state xt ∈ C([t − τk , t], Rn ),
but is invariant with respect to the time t, so that the optimal controls that minimize J(xt , u, t, t + T ) and J(xt , u, t + σ, t + σ + T ) are the same. The existence
and uniqueness of solutions to (6.3) for linear time-invariant systems is normally
ascertained by selecting a quadratic cost index J, thereby rendering the optimal
control problem convex.
The advantage of the RH control design approach is two-fold: unlike alternative methods, it is capable of robust stabilization of the system while simultaneously allowing the designer to restrict the control effort required to achieve this task. While it
is possible to secure stabilization and any given hard constraints on the control by
introducing those constraints directly into the optimal control problem (6.3) and
additionally requesting that xt+T = 0, this is impractical as optimization problems
with constraints are computationally much harder to solve. Hence, stabilization
is normally achieved by penalization for large terminal states, while limited magnitude of actuation is encouraged by a similar penalization for the control effort.
Such a practical open loop RH cost formulation hence takes the familiar form
J(x_{t0}, u, t0, tf) ≜ ∫_{t0}^{tf} [xᵀ(s)Qx(s) + uᵀ(s)Ru(s)] ds + xᵀ(tf)F₀ x(tf) + Σᵢ₌₁ᵏ ∫_{tf−τᵢ}^{tf} xᵀ(s)Fᵢ x(s) ds    (6.4)
To secure a specific level of closed loop system performance, the designer might
need to impose a priori the penalty matrices Q and R. The problems to be
considered here are hence listed as follows:
(P1) For given, constant positive definite penalty matrices Q > 0 and R >
0 determine conditions under which there exist constant, positive definite
terminal cost penalties Fi , i = 0, 1, ..., k such that the resulting RH control
is asymptotically stabilizing.
(P2) Demonstrate computational feasibility of the RH control.
These are addressed in the subsequent sections.
6.3
Sufficient conditions for successful control design
It should be clear that for the existence of a stabilizing receding horizon con-
trol strategy, the system (6.1) must be stabilizable in some sense. The main result
of this section is the statement of a stabilizability condition followed by a constructive procedure for the design of the receding horizon cost penalties Fi , i = 0, 1, ..., k,
given that the remaining penalties Q and R are imposed a priori. The construction is next shown to guarantee that the resulting receding horizon control law is
stabilizing.
The desired result and subsequent construction necessitates some facts from
stability theory of differential difference systems. The result cited below is a known,
computationally feasible criterion for stabilizability of system (6.1); see [18], p. 37,
for a proof.
Theorem 6.3.1. If there exist symmetric, positive definite matrices X > 0, Uᵢ > 0, i = 1, ..., k, as well as some matrices Y, Yᵢ, i = 1, ..., k, such that the following matrix is negative definite
[ ϑ₀(X, Y)    ϑ₁(X, Y)   ···   ϑ_k(X, Y) ]
[ ϑ₁(X, Y)ᵀ   −U₁               0         ]
[    ⋮                    ⋱               ]   < 0
[ ϑ_k(X, Y)ᵀ  0                 −U_k      ]
where
ϑ₀(X, Y) ≜ AX + XAᵀ + BY + YᵀBᵀ + Σᵢ₌₁ᵏ Uᵢ    (6.5)
ϑᵢ(X, Y) ≜ AᵢX + BYᵢ,  i = 1, ..., k    (6.6)
then system (6.1) is asymptotically stable under the control
u(t) ≜ K₀ x(t) + Σᵢ₌₁ᵏ Kᵢ x(t − τᵢ)    (6.7)
with the control gain matrices defined by
K₀ ≜ Y X⁻¹,  Kᵢ ≜ Yᵢ X⁻¹,  i = 1, ..., k    (6.8)
Under the conditions of Theorem 6.3.1, it is easy to construct a Lyapunov function for the closed loop system (6.1) with control (6.7), now given by
(d/dt) x(t) = [A + BK₀]x(t) + Σᵢ₌₁ᵏ [Aᵢ + BKᵢ] x(t − τᵢ)    (6.9)
Denoting Ā ≜ A + BK₀ and Āᵢ ≜ Aᵢ + BKᵢ, i = 1, ..., k, the standard procedure for such a construction relies on the following result concerning delay-independent stability; again see [18], p.37.
Theorem 6.3.2. If there exists a set of symmetric, positive definite matrices P > 0, Qᵢ > 0, i = 1, ..., k, satisfying the LMI system Θ(A, A₁, ..., A_k) < 0, with
      [ θ₀(P, Q₁, ..., Q_k)   θ₁(P)   ···   θ_k(P) ]
Θ ≜   [ θ₁(P)ᵀ               −Q₁            0      ]    (6.10)
      [    ⋮                          ⋱            ]
      [ θ_k(P)ᵀ              0              −Q_k   ]
with
θ₀(P, Q₁, ..., Q_k) ≜ AᵀP + PA + Σᵢ₌₁ᵏ Qᵢ
θᵢ(P) ≜ PAᵢ,  i = 1, ..., k
then the system (6.1) with u(t) ≡ 0 is (uniformly) asymptotically stable.
(6.10)
The proof relies on the construction of a Lyapunov functional of the form
V (xt ) , x (t)P x(t) +
T
k Z
X
i=1
t
xT (s)Qi x(s)ds
(6.11)
t−τi
An easy calculation confirms that the time derivative of V along the solutions
of system (6.1) with u ≡ 0 is given by
dV (xt )
= ẋT (t)P x(t) + xT (t)P ẋ +
dt
k
k
X
X
xT (t)(
Qi )x(t) −
xT (t − τi )Qi x(t − τi )
i=1
= xT (t)(AT P + P A +
i=1
k
X
Qi )x(t) +
i=1
2
=
k
X
T
x (t −
i=1
T
ηt Θηt
τi )ATi P x(t)
−
<0
k
X
i=1
xT (t − τi )Qi x(t − τi )
(6.12)
where ηtT [xT (t), xT (t − τ1 ), ..., xT (t − τk )].
Remark 6.3.1. It should be noted that the above stability condition is restrictive
as it does not depend on the delays.
In view of the above theorem, system (6.9) is stable if it can be shown that the LMI Θ(Ā, Ā₁, ..., Ā_k) < 0 corresponding to (6.9), i.e. an LMI with a matrix whose entries are
θ₀(P, Q₁, ..., Q_k) ≜ ĀᵀP + PĀ + Σᵢ₌₁ᵏ Qᵢ
θᵢ(P) ≜ PĀᵢ,  i = 1, ..., k    (6.13)
possesses a solution in terms of some positive definite matrices P > 0 and Qᵢ > 0, i = 1, ..., k. Now, pre-multiplying and post-multiplying this Θ(Ā, Ā₁, ..., Ā_k) by the matrix diag{P⁻¹, ..., P⁻¹} and letting X = P⁻¹, Y = K₀X, Uᵢ = P⁻¹QᵢP⁻¹, Yᵢ = KᵢP⁻¹ simply yields the matrix of Theorem 6.3.1, which is negative definite by assumption. Hence, if the original system satisfies the conditions of Theorem 6.3.1, then the closed loop system (6.9) is asymptotically stable with a Lyapunov functional given by
V_cl(x_t) ≜ xᵀ(t)X⁻¹x(t) + Σᵢ₌₁ᵏ ∫_{t−τᵢ}^{t} xᵀ(s)X⁻¹UᵢX⁻¹ x(s) ds    (6.14)
The main result of this section is now stated as follows.
Theorem 6.3.3. Suppose that system (6.1) is stabilizable in that the conditions
of Theorem 6.3.1 are satisfied. Under these conditions, for any choice of the receding horizon penalty matrices Q > 0 and R > 0, there exist positive definite
penalty matrices Fi > 0, i = 0, 1, ..., k, which render the resulting receding horizon
control law asymptotically stabilizing for system (6.1) regardless of the choice of
the receding horizon length T > 0.
The above result will be proved in terms of the construction procedure, Propositions, and Theorem below.
105
6.3.1  Construction procedure for the receding horizon terminal cost penalties
Let Θ_cl denote the matrix Θ(Ā, Ā₁, ..., Ā_k) with P and Qᵢ, i = 1, ..., k, chosen as in the Lyapunov functional (6.14). Since this matrix is symmetric and negative definite, let λ_min(Θ_cl) > 0 denote the smallest eigenvalue of −Θ_cl. Similarly, let λ_max(Q) > 0 and λ_max(R) > 0 denote the maximal eigenvalues of Q > 0 and R > 0, respectively, and let K_max be defined as the matrix norm K_max ≜ ‖diag{K₀, K₁, ..., K_k}‖, where Kᵢ, i = 0, 1, ..., k, are the stabilizing controller gains in (6.9). Let the receding horizon terminal cost penalties be chosen as follows,
F₀ ≜ ρ X⁻¹,  Fᵢ ≜ ρ X⁻¹UᵢX⁻¹,  i = 1, ..., k    (6.15)
where ρ is any number such that
ρ λ_min(Θ_cl) ≥ λ_max(Q) + λ_max(R) K_max²    (6.16)
Next, it will be shown that this choice is well justified.
Remark 6.3.2. The condition (6.16) should best be satisfied tightly. Although large values of ρ ensure a stronger and more robust stabilization result, this comes at the expense of using larger control values. A careful trade-off should therefore be considered in the design.
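The construction (6.15)–(6.16) is straightforward to carry out numerically once a feasible solution (X, Uᵢ, K₀, Kᵢ) of the LMI of Theorem 6.3.1 is available (for instance from the sketch given after that theorem). The snippet below is a minimal sketch; it interprets λ_min(Θ_cl) as the smallest eigenvalue of −Θ_cl, as discussed above, and all names are illustrative.

```python
import numpy as np
from scipy.linalg import block_diag

def terminal_penalties(A, A_delays, B, Q, R, X, Us, K0, Ks):
    """Construct F0, F1, ..., Fk as in (6.15)-(6.16) from a feasible solution of Theorem 6.3.1."""
    n = A.shape[0]
    P = np.linalg.inv(X)
    Qs = [P @ U @ P for U in Us]                        # Q_i = X^{-1} U_i X^{-1}
    Abar = A + B @ K0
    Abars = [A_delays[i] + B @ Ks[i] for i in range(len(A_delays))]

    # Theta_cl of Theorem 6.3.2 evaluated for the closed loop (6.9), cf. (6.13).
    theta0 = Abar.T @ P + P @ Abar + sum(Qs)
    rows = [np.hstack([theta0] + [P @ Abi for Abi in Abars])]
    for i in range(len(Abars)):
        row = [(P @ Abars[i]).T]
        for j in range(len(Abars)):
            row.append(-Qs[i] if i == j else np.zeros((n, n)))
        rows.append(np.hstack(row))
    Theta_cl = np.vstack(rows)

    # Choose rho so that (6.16) holds tightly (Remark 6.3.2).
    lam_min = np.min(np.linalg.eigvalsh(-Theta_cl))     # smallest eigenvalue of -Theta_cl
    K_max = np.linalg.norm(block_diag(K0, *Ks), 2)
    lam_Q = np.max(np.linalg.eigvalsh(Q))
    lam_R = np.max(np.linalg.eigvalsh(np.atleast_2d(R)))
    rho = (lam_Q + lam_R * K_max**2) / lam_min

    F0 = rho * P                                        # cf. (6.15)
    Fs = [rho * P @ U @ P for U in Us]
    return F0, Fs, rho
```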
6.4
Stabilizing property of the receding horizon control law
As is now a standard approach in demonstrating the stabilizing property of the receding horizon control law, the so-called "monotonicity property" of the receding horizon optimal value function is shown first. To this end, let u*_[t,t+T] denote the optimal control for J(x_t, u, t, t + T) with the penalty matrices selected as in (6.15). Also, let x*_[t,t+T] be the optimal system trajectory corresponding to u*. The solution to the optimal control problem (6.3) has been discussed in [33], where it was shown that its form is that of a linear continuous operator L : C([t, t + T], Rⁿ) → C([t, t + T], Rⁿ) in the argument x_t, i.e.
u*_[t,t+T] = L(x_t)    (6.17)
It follows that L is also strongly differentiable with respect to its argument. Due to the time invariance of the system model, L is also time-shift invariant, so that u*_[t+σ,t+σ+T] and u*_[t,t+T] are the same as feedback functions of x_t if they are the optimal controls for J(x_t, u, t + σ, t + σ + T) and J(x_t, u, t, t + T), respectively. What is also implied is the fact that, in the absence of model error, the receding horizon control law and the "open loop" control law coincide. The precise meaning of these statements will become clear when a concrete computational procedure for u* is presented in the next section, albeit for the special case of a receding horizon length T < τ₁. The monotonicity property of the optimal value function J(x_t, u*_[t,t+T], t, t + T) is stated as follows.
Proposition 6.4.1. Let the system (6.1) be stabilizable in the sense of Theorem 6.3.1 and let the open loop receding horizon cost be chosen according to (6.15). Then, for any t > 0, T > 0 and for any x_t ∈ C([t − τ_k, t], Rⁿ),
D⁺_T J(x_t, u*_[t,t+T], t, t + T) ≤ 0    (6.18)
where the right-sided derivative is defined as
D⁺_T J(x_t, u*_[t,t+T], t, t + T) ≜ lim sup_{σ→0+} σ⁻¹ [J(x_t, u*_[t,t+T+σ], t, t + T + σ) − J(x_t, u*_[t,t+T], t, t + T)]    (6.19)
Remark 6.4.1. As the operator L : xt 7→ u∗[t,t+T ] is linear and bounded and hence
strongly differentiable, the optimal cost J(xt , u∗[t,t+T ] , t, t + T ) is also differentiable
with respect to the horizon length T . It follows that although the statement of the
above result is made with regard to a one sided derivative (as the computation is
more appealing when it refers to a right-sided derivative and the result is sufficient
for the subsequent stability study), the same holds for the left-sided derivative.
Proof. By definition of the cost function,
J(x_t, u*_[t,t+T+σ], t, t + T + σ) = ∫_t^{t+T+σ} [x*ᵀ(s)Qx*(s) + u*ᵀ_[t,t+T+σ](s) R u*_[t,t+T+σ](s)] ds + x*ᵀ(t+T+σ)F₀ x*(t+T+σ) + Σᵢ₌₁ᵏ ∫_{t+T+σ−τᵢ}^{t+T+σ} x*ᵀ(s)Fᵢ x*(s) ds    (6.20)
where x* has been written for x*_[t,t+T+σ] to simplify notation. Consider the following sub-optimal control for J(x_t, u, t, t + T + σ) on [t, t + T + σ],
u_sub(s) ≜ u*_[t,t+T](s) for s ∈ [t, t + T),  u_sub(s) ≜ u_cl(s) for s ∈ [t + T, t + T + σ]    (6.21)
and the corresponding system trajectory x_sub(s), s ∈ [t, t + T + σ], where u_cl is the stabilizing closed loop control law as in (6.9), "activated" from the system state x_sub(s), s ∈ [t + T − τ_k, t + T]. It follows that x_sub(s) = x*_[t,t+T](s) for s ∈ [t, t + T], and
J(x_t, u*_[t,t+T+σ], t, t + T + σ) ≤ J(x_t, u_sub, t, t + T + σ)
  = J(x_t, u*_[t,t+T], t, t + T) + ∫_{t+T}^{t+T+σ} [x_subᵀ(s)Qx_sub(s) + u_clᵀ(s)Ru_cl(s)] ds
    − x_subᵀ(t+T)F₀ x_sub(t+T) − Σᵢ₌₁ᵏ ∫_{t+T−τᵢ}^{t+T} x_subᵀ(s)Fᵢ x_sub(s) ds
    + x_subᵀ(t+T+σ)F₀ x_sub(t+T+σ) + Σᵢ₌₁ᵏ ∫_{t+T+σ−τᵢ}^{t+T+σ} x_subᵀ(s)Fᵢ x_sub(s) ds    (6.22)
Rearranging the above and proceeding to the limit as σ → 0+ (which is justified by virtue of Remark 6.4.1) yields
D⁺_T J(x_t, u*_[t,t+T], t, t + T)
  ≤ x_subᵀ(t+T)Qx_sub(t+T) + u_clᵀ(t+T)Ru_cl(t+T) + D⁺_T x_subᵀ(t+T)F₀ x_sub(t+T) + D⁺_T Σᵢ₌₁ᵏ ∫_{t+T−τᵢ}^{t+T} x_subᵀ(s)Fᵢ x_sub(s) ds
  ≤ λ_max(Q)‖x_sub(t+T)‖² + λ_max(R)‖u_sub(t+T)‖² + ρ D⁺_T V_cl(x_sub(t+T))
  ≤ [λ_max(Q) + K_max² λ_max(R)] ‖η_{t+T}‖² − ρ λ_min(Θ_cl) ‖η_{t+T}‖² ≤ 0    (6.23)
where η_{t+T} ≜ [x_subᵀ(t+T), x_subᵀ(t+T−τ₁), ..., x_subᵀ(t+T−τ_k)]ᵀ. Hence, (6.18) is valid as required.
The proof of the stabilizing property of the receding horizon control now hinges on the use of the optimal value function as a Lyapunov function for the system with the receding horizon control law. It is first noted that the optimal value function has the following properties.
Proposition 6.4.2. There exist continuous, nondecreasing functions u : R⁺ → R⁺, v : R⁺ → R⁺ with the properties that u(0) = v(0) = 0 and u(s) > 0, v(s) > 0 for s > 0, such that the optimal value function J(x_t, u*_[t,t+T], t, t + T) satisfies
u(‖x(t)‖) ≤ J(x_t, u*_[t,t+T], t, t + T) ≤ v(‖x(t)‖)  for all t ≥ 0, x_t ∈ C([t − τ_k, t], Rⁿ)    (6.24)
Additionally, if the receding horizon cost is chosen as in Section 6.3.1, then there exists a continuous, nondecreasing function w : R⁺ → R⁺, with the property that w(s) > 0 for s > 0, such that the right-sided derivative of the optimal value function along the system trajectory with the receding horizon control law satisfies
D⁺_t J(x_t, u*_[t,t+T], t, t + T) ≤ −w(‖x(t)‖)  for all t > 0, x_t ∈ C([t − τ_k, t], Rⁿ)    (6.25)
Proof. First, we note that the optimal control operator L, as defined in (6.17), is bounded. This follows from the fact that L is linear and continuous in x_t; see [33]. Hence, there exists a constant k₁ > 0 such that ‖L(x_t)‖_C ≤ k₁‖x_t‖_C for all x_t ∈ C([t − τ_k, t], Rⁿ). Next, a simple application of the Bellman–Gronwall Lemma yields the existence of positive constants a > 0, b > 0, see also [45] p.16, such that the optimal trajectory is bounded by
‖x*_[t,t+T](t + σ)‖ ≤ a e^{bσ} [‖x_t‖_C + ∫_t^{t+T} ‖L(x_t)(s)‖ ds] ≤ a e^{bσ} [1 + T k₁] ‖x_t‖_C  for all σ ∈ [0, T]    (6.26)
regardless of t > 0 and x_t ∈ C([t − τ_k, t], Rⁿ). Setting k₂ ≜ a e^{bT}[1 + T k₁],
J(x_t, u*_[t,t+T], t, t + T) ≤ [T(λ_max(Q) + λ_max(R))(k₂² + k₁²) + Σᵢ₌₁ᵏ λ_max(Fᵢ)] ‖x_t‖²_C    (6.27)
for all t > 0 and all x_t ∈ C([t − τ_k, t], Rⁿ), as required. On the other hand,
λ_min(F₀) ‖x(t)‖² ≤ J(x_t, u*_[t,t+T], t, t + T)    (6.28)
which delivers the desired function u.
The right-sided derivative of the optimal value function along the trajectory of the receding horizon controlled system is defined by
D⁺_t J(x_t, u*_[t,t+T], t, t + T) ≜ lim sup_{σ→0+} σ⁻¹ [J(x_{t+σ}, u*_[t+σ,t+σ+T], t + σ, t + σ + T) − J(x_t, u*_[t,t+T], t, t + T)]    (6.29)
Now, in view of the assumptions made and the result of Proposition 6.4.1, there exists a right-sided neighborhood of zero N(0) ≜ {σ ∈ R | 0 ≤ σ < ε} such that
J(x_t, u*_[t,t+T+σ], t, t + T + σ) ≤ J(x_t, u*_[t,t+T], t, t + T)    (6.30)
for all σ ∈ N(0), all t > 0, and all x_t ∈ C([t − τ_k, t], Rⁿ). Hence, in particular, for any θ ∈ N(0), employing Bellman's Principle of Optimality yields
J(x_t, u*_[t,t+T], t, t + T) = ∫_t^{t+θ} [x*ᵀ(s)Qx*(s) + u*ᵀ_[t,t+T] R u*_[t,t+T]] ds + J(x*_{t+θ}, u*_[t+θ,t+T], t + θ, t + T)
  ≥ ∫_t^{t+θ} [x*ᵀ(s)Qx*(s) + u*ᵀ_[t,t+T] R u*_[t,t+T]] ds + J(x*_{t+θ}, u*_[t+θ,t+T+θ], t + θ, t + T + θ)    (6.31)
where x* denotes the trajectory corresponding to the optimal control u*_[t,t+T]. Rearranging the above and proceeding to the limit with θ → 0+ yields
D⁺_t J(x_t, u*_[t,t+T], t, t + T) ≤ −[x*ᵀ(t)Qx*(t) + u*ᵀ_[t,t+T](t) R u*_[t,t+T](t)] ≤ −xᵀ(t)Qx(t)    (6.32)
which holds for all t > 0, all T > 0, and all x_t ∈ C([t − τ_k, t], Rⁿ). It suffices to define
w(‖x(t)‖) = xᵀ(t)Qx(t)    (6.33)
The last proposition delivers immediately the desired stabilization result.
Theorem 6.4.1. Assume that system (6.1) has the stabilizability property specified
in Theorem 6.3.1 and that the open loop control cost employs the terminal penalty
matrices as specified in Section 6.3.1. Then, the receding horizon control law based
on this cost function is globally and uniformly stabilizing for system (6.1).
Proof. The proof is immediate in view of the result of Proposition 6.4.2 and follows
from a standard stability theorem for time delayed systems; see [45], p.132.
6.5
Computation of the receding horizon control law
For simplicity of exposition, the effective computation of the receding horizon control law will be demonstrated only for the simplest case in which the receding horizon is selected to be shorter than any of the system delays, i.e. when tf < τ₁. Noting that, since tf < τᵢ, i = 1, ..., k, then, for all i = 1, ..., k,
∫_{tf−τᵢ}^{tf} xᵀ(s)Fᵢ x(s) ds = ∫_{tf−τᵢ}^{t0} xᵀ(s)Fᵢ x(s) ds + ∫_{t0}^{tf} xᵀ(s)Fᵢ x(s) ds    (6.34)
where the first term is not influenced by the control for t > t0 (it is a constant dependent only on the initial condition x_{t0} of the system). Thus, for optimization purposes, the original cost functional can equivalently be substituted by
J̄(x_{t0}, u, t0, tf) ≜ ∫_{t0}^{tf} [xᵀ(s)Q̄x(s) + uᵀ(s)Ru(s)] ds + xᵀ(tf)F₀ x(tf)    (6.35)
with Q̄ ≜ Q + Σᵢ₌₁ᵏ Fᵢ, for which the optimal control is identical to that minimizing
J(x_{t0}, u, t0, tf). The necessary condition for optimality is obtained by forming the Hamiltonian, which in this case is given by (see [26], p.289)
H(x(h), x(h−τ₁), ..., x(h−τ_k), u(h), p(h)) = xᵀ(h)Q̄x(h) + uᵀ(h)Ru(h) + pᵀ(h)[Ax(h) + Σᵢ₌₁ᵏ Aᵢ x(h−τᵢ) + Bu(h)]    (6.36)
The necessary conditions (Pontryagin's Maximum Principle) then require that the Hamiltonian be stationary with respect to the control at each h ∈ [t0, tf]:
0 = ∂H/∂u = R u*(h) + Bᵀ p*(h),  h ∈ [t0, tf]    (6.37)
which delivers an expression for the optimal control in terms of the optimal co-state variable:
u*(h) = −R⁻¹Bᵀ p*(h),  h ∈ [t0, tf]    (6.38)
The optimal state and co-state variables must thus satisfy the Hamiltonian system
[ ẋ*(h) ; ṗ*(h) ] = H [ x*(h) ; p*(h) ] + Σᵢ₌₁ᵏ [ Aᵢ ; 0 ] x*(h − τᵢ),  h ∈ [t0, tf]    (6.39)
with the Hamiltonian matrix H given by
H ≜ [ A   −BR⁻¹Bᵀ ; −Q̄   −Aᵀ ]    (6.40)
The boundary conditions for (6.39) are then given by imposing the value of x(t0) and requesting that
p*(tf) = F₀ x*(tf)    (6.41)
see [54], p.200.
Remark 6.5.1. The above necessary conditions are also sufficient as the optimal
control problem at hand is convex.
It is convenient to write the matrix exponential of H in a partitioned form
Φ(s) ≜ e^{Hs} = [ Φ₁₁(s)  Φ₁₂(s) ; Φ₂₁(s)  Φ₂₂(s) ]    (6.42)
for all s ∈ [t0, tf]. Then,
[ x*(tf) ; p*(tf) ] = [ Φ₁₁(tf−h)  Φ₁₂(tf−h) ; Φ₂₁(tf−h)  Φ₂₂(tf−h) ] [ x*(h) ; p*(h) ] + Σᵢ₌₁ᵏ ∫_{−τᵢ}^{tf−h−τᵢ} [ Φ₁₁(tf−h−s−τᵢ) Aᵢ x*(h+s) ; Φ₂₁(tf−h−s−τᵢ) Aᵢ x*(h+s) ] ds    (6.43)
for any h ∈ [t0, tf]. Making use of the boundary conditions yields
p*(h) = W₀(tf−h) x*(h) + Σᵢ₌₁ᵏ ∫_{−τᵢ}^{tf−h−τᵢ} Wᵢ(tf−h, s) x*(h+s) ds    (6.44)
where
W₀(tf−h) = [Φ₂₂(tf−h) − F₀Φ₁₂(tf−h)]⁻¹ [F₀Φ₁₁(tf−h) − Φ₂₁(tf−h)]    (6.45)
Wᵢ(tf−h, s) = [Φ₂₂(tf−h) − F₀Φ₁₂(tf−h)]⁻¹ [F₀Φ₁₁(tf−h−s−τᵢ) − Φ₂₁(tf−h−s−τᵢ)] Aᵢ,  i = 1, ..., k    (6.46)
At this point it is useful to define
P(s) ≜ Φ₂₂(s) − F₀Φ₁₂(s)    (6.47)
X(s) ≜ F₀Φ₁₁(s) − Φ₂₁(s)    (6.48)
so that W₀(s) = P⁻¹(s)X(s), and to recall the following Lemma (see [19] p.156).
Lemma 6.5.1. The matrix function W₀(s), s ∈ [t0, tf], associated with the Hamiltonian system (6.40) and boundary condition (6.41), satisfies the following Riccati equation
(d/ds) W₀(s) = −AᵀW₀(s) − W₀(s)A + W₀(s)BR⁻¹BᵀW₀(s) − Q̄,  s ∈ [t0, tf)    (6.49)
with boundary condition given by W₀(tf) = F₀. Equivalently, the matrix functions P(s), X(s), s ∈ [t0, tf], satisfy the linear system
[ Ẋ(s)  −Ṗ(s) ] = [ X(s)  −P(s) ] [ −A  BR⁻¹Bᵀ ; Q̄  Aᵀ ]    (6.50)
with boundary conditions
X(tf) = F₀,  P(tf) = I    (6.51)
Clearly, the receding horizon control is obtained by replacing h by t, t0 by t, and tf by t + T. This yields a particularly convenient representation of the receding horizon control law:
u*(t) = −R⁻¹Bᵀ P⁻¹(T) [ X(T)x(t) + Σᵢ₌₁ᵏ ∫_{−τᵢ}^{T−τᵢ} X(T−s−τᵢ) Aᵢ x(t+s) ds ]    (6.52)
which can be easily computed in terms of the defining matrix functions X(s), P(s), s ∈ [0, T], by solving the linear system (6.50) with boundary conditions (6.51).
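One possible numerical realization of (6.52), for the case T < τ₁ considered here, evaluates P(s) and X(s) directly from the partitioned matrix exponential (6.42) via (6.47)–(6.48) and approximates the distributed integrals by a quadrature over the stored state history. The sketch below follows that route; the function name, the interface, and the trapezoidal quadrature are assumptions of the sketch, not the thesis's implementation.

```python
import numpy as np
from scipy.linalg import expm

def rhc_control(t, x_now, x_past, A, A_delays, B, Qbar, R, F0, T, delays, n_quad=50):
    """Receding horizon control (6.52) for the case T < tau_1.

    x_now  : current state x(t)
    x_past : callable s -> x(t + s) for s in [-tau_k, 0] (measured state history)
    Qbar   : Q + sum_i F_i, cf. (6.35)
    """
    n = A.shape[0]
    Rinv = np.linalg.inv(np.atleast_2d(R))

    # Hamiltonian matrix (6.40) and its exponential, partitioned as in (6.42).
    H = np.block([[A, -B @ Rinv @ B.T],
                  [-Qbar, -A.T]])

    def PX(s):
        Phi = expm(H * s)
        Phi11, Phi12 = Phi[:n, :n], Phi[:n, n:]
        Phi21, Phi22 = Phi[n:, :n], Phi[n:, n:]
        return Phi22 - F0 @ Phi12, F0 @ Phi11 - Phi21     # P(s), X(s), cf. (6.47)-(6.48)

    P_T, X_T = PX(T)
    acc = X_T @ x_now
    # Distributed terms of (6.52): since T < tau_1, they only involve the stored past state.
    for Ai, tau_i in zip(A_delays, delays):
        ss = np.linspace(-tau_i, T - tau_i, n_quad)
        vals = np.stack([PX(T - s - tau_i)[1] @ Ai @ x_past(s) for s in ss])
        acc = acc + np.trapz(vals, ss, axis=0)
    return -Rinv @ B.T @ np.linalg.solve(P_T, acc)
```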
Remark 6.5.2. It should be observed that the existence of a unique, positive definite solution to the Riccati equation (6.49), which is necessary to compute the
receding horizon gain W , is guaranteed if the pair [A, B] is controllable; see [19]
p.163 for this standard result.
Remark 6.5.3. The discrete-time RHC of time delayed systems will be a subject of
further study. Here we only make a few comments about its implementation issues.
First, the sampling of the system must be non-pathological, see [25]. Furthermore
as pointed out in [4], the selection of a numerical technique for integration of DDEs
heavily depends on the construction of the so called “densely-defined continuous
extensions” (the method of defining these continuous extensions can affect both
the accuracy and the stability of the numerical method ).
117
6.6  Sensitivity of the RHC law with respect to perturbations in the delay values
Notwithstanding the fact that the stabilizability property of the system subject to design is of the delay-independent type, any errors in the delay values used in the implementation of the actual closed loop receding horizon strategy may lead to severe instability. Assessment of the sensitivity of the closed loop receding horizon control to variations in the system delay values is hence essential, as delay values are almost never exactly known. Again, for brevity of exposition, such sensitivity will be discussed for the simplest case, when T < τ₁.
To this end a result cited below and proved in [92] will be found useful.
Proposition 6.6.1. Let the operator S : Rᵏ → C([t0, tf], Rⁿ) be defined as the mapping [τ₁, ..., τ_k] ↦ x, where x is the trajectory of system (6.1) corresponding to known fixed values of the initial condition x_{t0}, a fixed control function u, and system delay parameters τ ≜ [τ₁, ..., τ_k]. The Fréchet derivative Y ≜ ∂S/∂τ exists for all τ ∈ Rᵏ as a linear and bounded operator Y : Rᵏ → C([t0, tf], Rⁿ) and is given by a matrix function Y(s; τ), s ∈ [t0, tf], whose columns yᵢ(s; τ) ≜ (∂/∂τᵢ) S(τ), i = 1, ..., k, satisfy the following equation on the interval s ∈ [t0, tf]:
(d/dh) yᵢ(h) = A yᵢ(h) + Σⱼ₌₁ᵏ Aⱼ yᵢ(h − τⱼ) + Aᵢ (d/dh) x(h − τᵢ),  yᵢ(s) = 0 for s ∈ [−τ_k, 0]    (6.53)
where, as above, x(h), h ∈ [−τ_k, T], is the solution of system (6.1) corresponding to the nominal value of the delay parameter vector τ and the given functions x_{t0} and u.
118
Direct application of this result to the Hamiltonian system (6.39) delivers the strong derivative of the extended optimal state [x*(h), p*(h)]ᵀ, h ∈ [t0, tf], with respect to variations in the delay vector τ, via the solution [δx*ᵀ(h), δp*ᵀ(h)]ᵀ, h ∈ [t0, tf], of the matrix differential equation
[ (d/dh) δx*(h) ; (d/dh) δp*(h) ] = H [ δx*(h) ; δp*(h) ] + Σᵢ₌₁ᵏ [ Aᵢ ; 0 ] δx*(h − τᵢ) + [ A₁ (d/dh)x(h−τ₁)  ···  A_k (d/dh)x(h−τ_k) ; 0  ···  0 ]    (6.54)
where h ∈ [t0, tf].
It should be noted that, in the above, the last matrix is a function of the system trajectory for times h < 0. The sensitivity of the receding horizon cost with respect to variations in the delay vector is hence expressed as
(∂/∂τ) J(x_{t0}, u*_[t0,tf], t0, tf) = ∫_{t0}^{tf} [ δx*ᵀ(s)  δp*ᵀ(s) ] [ Q̄  0 ; 0  BR⁻¹Bᵀ ] [ x*(s) ; p*(s) ] ds    (6.55)
Despite its analytical form, evaluation of the above sensitivity is computationally demanding, but the formula itself is informative in that (6.54) and (6.55) allow one to demonstrate the existence of a constant γ > 0 such that
(∂/∂τ) J(x_{t0}, u*_[t0,tf], t0, tf) ≤ γ sup{‖x(s)‖² | s ∈ [−2τ_k, 0]}    (6.56)
This, in turn, allows one to prove the following conceptual result.
Theorem 6.6.1. Under the assumptions of Theorem 6.4.1 there exists a neighborhood of the nominal delay value parameter such that the receding horizon control
119
law based on this nominal delay is globally asymptotically stabilizing for any system
with delay vector in this neighborhood.
6.7
Numerical example
In this section, the efficiency of the resulting methodology is further demonstrated through numerical examples.
Example 6.1:[70]
Consider the time delayed system in [70]:
(d/dt) x(t) = Ax(t) + A₁ x(t − τ) + Bu(t)    (6.57)
with the RH cost functional
J(x_t, u, t, t + T) = ∫_t^{t+T} [xᵀ(s)Qx(s) + uᵀ(s)Ru(s)] ds + xᵀ(t+T)F₀ x(t+T) + ∫_{t+T−τ}^{t+T} xᵀ(s)F₁ x(s) ds    (6.58)
The system matrices are given by
A = [ −1  1 ; 3  2 ],  A₁ = [ 1  0 ; −1  −0.5 ],  B = [ 0 ; 3 ]
The delay of the system is τ = 1. It is noted that this system is open-loop unstable. The weighting matrices are Q = I and R = 1. Terminal weighting matrices F₀ and F₁ guaranteeing closed-loop stability are obtained by solving the LMI in Theorem 6.3.2:
F₀ = [ 6.1356  21.8958 ; 21.8958  120.1524 ],  F₁ = [ 1.1952  0.0932 ; 0.0932  1.2657 ]
and the initial condition is x(s) = [0.2, 0.1]ᵀ for −1 ≤ s ≤ 0.
The receding horizon control law can be obtained by following the computational procedure proposed in Section 6.5. Figure 6–1 presents the state trajectories after applying the RHC to system (6.57) with horizon length T = 1 and initial function ϕ₁(s) = 0.2, ϕ₂(s) = 0.1, −1 ≤ s ≤ 0. From this example, it is seen that the proposed RHC stabilizes the time delayed system.
Figure 6–1: State trajectories of the closed loop system in Example 6.1.
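As a usage illustration only, the data of Example 6.1 (as reconstructed above) can be plugged into the `rhc_control` sketch from Section 6.5 to evaluate the initial receding horizon control value; the call below is a sketch under that assumption and does not reproduce the full closed-loop simulation behind Figure 6–1.

```python
import numpy as np

A  = np.array([[-1.0, 1.0], [3.0, 2.0]])
A1 = np.array([[1.0, 0.0], [-1.0, -0.5]])
B  = np.array([[0.0], [3.0]])
Q, R = np.eye(2), np.array([[1.0]])
F0 = np.array([[6.1356, 21.8958], [21.8958, 120.1524]])
F1 = np.array([[1.1952, 0.0932], [0.0932, 1.2657]])
tau, T = 1.0, 1.0

x_now = np.array([0.2, 0.1])
x_past = lambda s: np.array([0.2, 0.1])        # constant initial function on [-1, 0]
u0 = rhc_control(0.0, x_now, x_past, A, [A1], B, Q + F1, R, F0, T, [tau])
print(u0)                                      # initial RHC value applied to (6.57)
```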
Example 6.2:[48]
Consider the example in [48]. The time delayed system is described by
(d/dt) x(t) = Ax(t) + A₁ x(t − 1) + Bu(t)    (6.59)
The system matrices are given by
A = [ 0  1 ; 0  0.125 ],  B = [ 0.1 ; 0 ],  A₁ = [ 0.875  1.125 ; 0.375  1.625 ]
The weighting matrices are Q = [ 1.55  0 ; 0  1.55 ] and R = 0.4. Terminal weighting matrices F₀ and F₁ guaranteeing closed-loop stability are obtained by solving the LMI in Theorem 6.3.2:
F₀ = [ 26  112.5 ; 112.5  168.06 ],  F₁ = [ 2.2645  0.4343 ; 0.4343  11.8513 ]
Figure 6–2 presents the trajectory result after applying RHC to system (6.59)
with the horizon length T = 1 and the initial function ϕ1 (s) = −15, ϕ2 (s) = 7,
−1 ≤ s ≤ 0.
Figure 6–2: State trajectories of the closed loop system in Example 6.2.
CHAPTER 7
Model Predictive Control of Nonlinear Time Delayed Systems with
On-Line Delay Identification[79]
In this chapter, a receding horizon approach with on-line delay identification
for nonlinear time delayed systems is introduced. The control law is obtained
by minimizing a finite horizon cost and its closed-loop stability is guaranteed by
satisfying an inequality condition on the terminal functional. A special class of
nonlinear time delayed systems is introduced for constructing a systematic method
to find a terminal weighting functional satisfying the proposed inequality condition.
An RHC approach with on-line delay identification is discussed. The closed-loop
stability of the proposed RHC is shown through simulation examples, and the
effectiveness of the RHC with on-line delay estimation is confirmed.
This chapter is organized as follows: the problem statement and notation are
presented in Section 7.1. The monotonicity of the optimal cost and an inequality
condition on the terminal weighting functional are stated in Section 7.2. In Section
7.3, the stability of the RHC is investigated. In Section 7.4, a systematic method
to find a terminal weighting functional for a special class of time delayed systems is
presented. The RHC with on-line delay identification is discussed in Section 7.5.
The efficiency of the resulting methodology is further demonstrated using some
numerical examples in Section 7.6.
7.1
Problem statement
Receding horizon control is based on optimal control. In order to obtain
a receding horizon control, we first focus on the optimal control problem listed
below. The class of nonlinear time delayed systems considered here is restricted
to systems whose models are given in terms of differential difference equations of
the form
d
x(t) = f (x(t), x(t − τ1 ), ..., x(t − τk ), u(t))
dt
(7.1)
where x(t) ∈ Rn is the state, u(t) ∈ Rm is a continuous and uniformly bounded
function that represents an exogenous input, and 0 < τ1 < τ2 < ... < τk are
the time delays. The function f (∙) is assumed to be a continuously differentiable
function of its arguments and the initial condition is stated as
x(s) = φ(s),  −τ_k ≤ s ≤ 0    (7.2)
The Lipschitz continuous cost function, to be minimized, is written in the form
J(x_{t0}, u, t0, tf) ≜ ∫_{t0}^{tf} [q(x(s)) + r(u(s))] ds + q₀(x(tf)) + Σᵢ₌₁ᵏ ∫_{tf−τᵢ}^{tf} qᵢ(x(s)) ds    (7.3)
where t0 > 0 is an initial time, tf is a final time, q and r are state and input cost functions, qᵢ, i = 0, ..., k, are the functions defining the terminal weighting functional, and x_t denotes x_t(θ) = x(t + θ), θ ∈ [−τ_k, 0]. Assume that αL(‖x‖) ≤ q(x) ≤ αH(‖x‖), βL(‖u‖) ≤ r(u) ≤ βH(‖u‖), and γLⁱ(‖x‖) ≤ qᵢ(x) ≤ γHⁱ(‖x‖), i = 0, ..., k, where αL, αH, βL, βH, γLⁱ, and γHⁱ are continuous, positive definite, strictly increasing functions satisfying αL(0) = 0, αH(0) = 0, βL(0) = 0, βH(0) = 0, γLⁱ(0) = 0, and γHⁱ(0) = 0, i = 0, 1, ..., k.
Assume that the optimal control trajectory which minimizes J(x_{t0}, u, t0, tf) is given by
u*(s) = u*(s; x_{t0}, tf),  t0 ≤ s ≤ tf    (7.4)
In the following section, an inequality condition on the terminal weighting functional is presented, under which the monotonicity of the optimal cost is guaranteed.
7.2
Monotonicity of the optimal value function
Again, as a standard approach in showing the stabilizing property of the receding horizon control law, the "monotonicity property" of the receding horizon optimal value function is shown first. In this section, an inequality condition on the terminal weighting functional under which the optimal value function has the non-increasing monotonicity property is presented. Before discussing the receding horizon control directly, an open-loop optimal control problem with the cost functional in (7.3) will be considered first.
The open-loop optimal control problem is to find an optimal control that
minimizes J(xt0 , u, t0 , tf ) in (7.3). Let us denote the value of the optimal cost by
J ∗ (xt0 , u∗ , t0 , tf ). The following theorem is a modification of a result found in the
work of Kwon et al. [65] and shows that the optimal cost has the monotonicity
property provided that an inequality condition on the terminal weighting functional is satisfied.
Theorem 7.2.1. If there exist positive definite functions q₀(·), q₁(·), ..., q_k(·) in (7.3) satisfying the following inequality for all x_σ:
q(x(σ)) + r(u(σ)) + (∂q₀/∂x)ᵀ f(x(σ), x(σ−τ₁), ..., x(σ−τ_k), u(σ)) + Σᵢ₌₁ᵏ [qᵢ(x(σ)) − qᵢ(x(σ−τᵢ))] ≤ 0  for all u(σ) = k(x_σ)    (7.5)
then the optimal cost J*(x_s, u*, s, σ) satisfies the following relation:
D⁺_T J(x_s, u*_[s,σ], s, σ) ≤ 0    (7.6)
where the right-sided Dini derivative is defined as
D⁺_T J(x_s, u*_[s,σ], s, σ) ≜ lim sup_{Δ→0+} Δ⁻¹ [J(x_s, u*_[s,σ+Δ], s, σ+Δ) − J(x_s, u*_[s,σ], s, σ)]    (7.7)
Proof. By definition of the cost function,
J(xs , u∗[s,σ+M] , s, σ+
M) ,
Z
σ+M
s
+
[q(x∗ (s)) + r(u∗ (s))]ds + q0 (x∗ (σ+ M))
k Z
X
i=1
σ+M
qi (x∗ (s))ds
(7.8)
σ+M−τi
where, x∗ stands for x∗[s,σ+M] , to simplify notation. Consider a sub-optimal control
for J(xs , u, s, σ+ M) on [s, σ+ M], obtained while employing the following suboptimal control.
usub (v) ,


 u∗ (v) v ∈ [s, σ]
[s,σ]

 ucl (v)
126
v ∈ [σ, σ+ M]
(7.9)
It is clear that the corresponding system trajectory xsub (v), v ∈ [s, σ+ M], satisfies
xsub (v) = x∗[s,σ] , for v ∈ [s, σ] and
J(xs , u∗[s,σ+M] , s, σ+ M)
≤ J(xs , usub , s, σ+ M)
Z σ+M
k Z
X
[q(xsub (v)) + r(u(v))]dv + q0 (xsub (σ+ M)) +
=
=
Z
s
σ
∗
∗
[q(x (v)) + r(u (v))]dv +
s
+q0 (xsub (σ+ M)) +
=
Z
k Z
X
i=1
σ
∗
s
σ+M
qi (xsub (v))ds
σ+M−τi
∗
i=1
σ
qi (x∗ (v))ds} +
σ−τi
+q0 (xsub (σ+ M)) +
k Z
X
i=1
J(xs , u∗[s,σ] , s, σ)
+
[q(xsub (v)) + r(u(v))]dv
σ
[q(x (v)) + r(u (v))]dv + {q0 (x (σ)) +
k Z
X
Z
qi (xsub (v))ds
σ+M−τi
σ+M
∗
−q0 (x∗ (σ)) −
=
Z
i=1
σ+M
Z
k Z
X
i=1
σ
qi (x∗ (v))ds
σ−τi
σ+M
[q(xsub (v)) + r(u(v))]dv
σ
σ+M
qi (xsub (v))ds
σ+M−τi
− q0 (xsub (σ)) −
σ+M
k Z
X
i=1
σ
qi (xsub (v))ds
σ−τi
[q(xsub (v)) + r(u(v))]dv
σ
+q0 (xsub (σ+ M)) +
k Z
X
i=1
σ+M
qi (xsub (v))ds
σ+M−τi
127
(7.10)
Rearranging the above and proceeding to the limit as $\Delta \to 0^+$ yields
$$\begin{aligned}
D_T^+ J(x_s, u^*_{[s,\sigma]}, s, \sigma) &= \limsup_{\Delta \to 0^+} \Delta^{-1}\left[J(x_s, u^*_{[s,\sigma+\Delta]}, s, \sigma+\Delta) - J(x_s, u^*_{[s,\sigma]}, s, \sigma)\right]\\
&= \limsup_{\Delta \to 0^+} \Delta^{-1}\Bigg[\int_{\sigma}^{\sigma+\Delta}\left[q(x_{sub}(v)) + r(u(v))\right]dv - q_0(x_{sub}(\sigma)) - \sum_{i=1}^{k}\int_{\sigma-\tau_i}^{\sigma} q_i(x_{sub}(v))\,dv\\
&\qquad\qquad + q_0(x_{sub}(\sigma+\Delta)) + \sum_{i=1}^{k}\int_{\sigma+\Delta-\tau_i}^{\sigma+\Delta} q_i(x_{sub}(v))\,dv\Bigg]\\
&= q(x(\sigma)) + r(u(\sigma)) + \left(\frac{\partial q_0}{\partial x}\right)^{T} f(x(\sigma), x(\sigma-\tau_1), \ldots, x(\sigma-\tau_k), u(\sigma)) + \sum_{i=1}^{k}\left[q_i(x(\sigma)) - q_i(x(\sigma-\tau_i))\right]\\
&\le 0
\end{aligned} \qquad (7.11)$$
Hence, (7.6) is valid as required.
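As a simple illustration of condition (7.5) (this worked instance is added here for convenience, is not taken from the referenced work, and anticipates the quadratic construction of Section 7.4), consider a scalar system with a single delay, $\dot{x}(t) = ax(t) + a_1x(t-\tau) + bu(t)$, with $q(x) = Qx^2$, $r(u) = Ru^2$, $q_0(x) = Px^2$, $q_1(x) = Sx^2$, and the feedback $u = kx$. Writing $x := x(\sigma)$ and $x_\tau := x(\sigma-\tau)$, condition (7.5) reduces to a small matrix inequality:
$$q(x) + r(kx) + \frac{\partial q_0}{\partial x}\big(ax + a_1x_\tau + bkx\big) + q_1(x) - q_1(x_\tau) = \begin{bmatrix} x & x_\tau \end{bmatrix}\begin{bmatrix} Q + Rk^2 + 2P(a+bk) + S & Pa_1\\ Pa_1 & -S \end{bmatrix}\begin{bmatrix} x\\ x_\tau \end{bmatrix} \le 0 \quad \text{for all } x,\, x_\tau,$$
that is, the $2\times 2$ matrix above must be negative semidefinite. This is exactly the structure exploited in Theorem 7.4.1 below.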
In the following theorem, it will be shown that if the monotonicity of the
optimal cost holds at one specific time, then it holds for all subsequent times.
Theorem 7.2.2. ([65]) If $D_T^+ J(x_{s_1}, u^*_{[s_1,\sigma]}, s_1, \sigma) \le 0$ for some $s_1$, then $D_T^+ J(x_{s_2}, u^*_{[s_2,\sigma]}, s_2, \sigma) \le 0$, where $s_1 \le s_2 \le \sigma$.
Proof.
$$\begin{aligned}
D_T^+ J(x_{s_1}, u^*_{[s_1,\sigma]}, s_1, \sigma) &= \limsup_{\Delta \to 0^+} \frac{1}{\Delta}\left\{J^*(x_{s_1}, s_1, \sigma+\Delta) - J^*(x_{s_1}, s_1, \sigma)\right\}\\
&= \limsup_{\Delta \to 0^+} \frac{1}{\Delta}\bigg\{\int_{s_1}^{s_2}\left[q(x^1(t)) + r(u^1(t))\right]dt + J^*(x^1_{s_2}, s_2, \sigma+\Delta) - \int_{s_1}^{s_2}\left[q(x^2(t)) + r(u^2(t))\right]dt - J^*(x^2_{s_2}, s_2, \sigma)\bigg\}
\end{aligned}$$
where $u^1(t)$ and $u^2(t)$ are the optimal controls that minimize $J(x(s_1), s_1, \sigma+\Delta)$ and $J(x(s_1), s_1, \sigma)$, respectively. If $u^2(t)$ is replaced by $u^1(t)$ up to time $s_2$, then
$$\begin{aligned}
D_T^+ J(x_{s_1}, u^*_{[s_1,\sigma]}, s_1, \sigma) &\ge \limsup_{\Delta \to 0^+} \frac{1}{\Delta}\left\{J(x^1_{s_2}, u^*_{[s_2,\sigma]}, s_2, \sigma+\Delta) - J^*(x^1_{s_2}, u^*_{[s_2,\sigma]}, s_2, \sigma)\right\}\\
&= D_T^+ J(x^1_{s_2}, u^*_{[s_2,\sigma]}, s_2, \sigma)
\end{aligned}$$
so that $D_T^+ J(x_{s_1}, u^*_{[s_1,\sigma]}, s_1, \sigma) \le 0$ implies $D_T^+ J(x^1_{s_2}, u^*_{[s_2,\sigma]}, s_2, \sigma) \le 0$.
7.3 Stability of the RHC
The RHC is obtained by replacing $t_0$ by $t$, $t_f$ by $t+T$, and $x_{t_0}$ by $x_t$ in (7.4), for $0 < T < \infty$, where $T$ denotes the horizon length. Hence, the RHC is given by
$$u^*(t) = u^*(t; x_t, t+T) \qquad (7.12)$$
The stability of the RHC hinges on the use of the optimal value function as a Lyapunov function for the system under the receding horizon control law. It is first noted that the optimal value function has the following properties.
Proposition 7.3.1. There exist continuous, nondecreasing functions $\tilde{u}: \mathbb{R}^+ \to \mathbb{R}^+$ and $\tilde{v}: \mathbb{R}^+ \to \mathbb{R}^+$ with the properties that $\tilde{u}(0) = \tilde{v}(0) = 0$ and $\tilde{u}(s) > 0$, $\tilde{v}(s) > 0$ for $s > 0$, such that the optimal value function $J(x_t, u^*_{[t,t+T]}, t, t+T)$ satisfies
$$\tilde{u}(\|x(t)\|) \le J(x_t, u^*_{[t,t+T]}, t, t+T) \le \tilde{v}(\|x(t)\|) \quad \text{for all } t \ge 0,\; x_t \in C([t-\tau_k, t], \mathbb{R}^n) \qquad (7.13)$$
Additionally, there exists a continuous, nondecreasing function $w: \mathbb{R}^+ \to \mathbb{R}^+$, with the property that $w(s) > 0$ for $s > 0$, such that the right-sided Dini derivative of the optimal value function along the system trajectory with the receding horizon control law satisfies
$$D_t^+ J(x_t, u^*_{[t,t+T]}, t, t+T) \le -w(\|x(t)\|) \quad \text{for all } t > 0,\; x_t \in C([t-\tau_k, t], \mathbb{R}^n) \qquad (7.14)$$
Proof. First, we note that $\alpha_L(\|x\|) \le q(x) \le \alpha_H(\|x\|)$, $\beta_L(\|u\|) \le r(u) \le \beta_H(\|u\|)$, and $\gamma_L^i(\|x\|) \le q_i(x) \le \gamma_H^i(\|x\|)$, $i = 0, \ldots, k$, where $\alpha_L$, $\alpha_H$, $\beta_L$, $\beta_H$, $\gamma_L^i$, and $\gamma_H^i$ are continuous, positive definite, strictly increasing functions satisfying $\alpha_L(0) = 0$, $\alpha_H(0) = 0$, $\beta_L(0) = 0$, $\beta_H(0) = 0$, $\gamma_L^i(0) = 0$, and $\gamma_H^i(0) = 0$, $i = 0, 1, \ldots, k$. Then
$$\begin{aligned}
J(x_t, u^*_{[t,t+T]}, t, t+T) &= \int_t^{T}\left[q(x(s)) + r(u(s))\right]ds + q_0(x(T)) + \sum_{i=1}^{k}\int_{T-\tau_i}^{T} q_i(x(s))\,ds\\
&\le \int_t^{T}\left[\alpha_H(\|x(s)\|) + \beta_H(\|u(s)\|)\right]ds + \gamma_H^0(\|x(t)\|) + \sum_{i=1}^{k}\int_{T-\tau_i}^{T} \gamma_H^i(\|x(s)\|)\,ds \qquad (7.15)
\end{aligned}$$
for all $t > 0$ and all $x_t \in C([t-\tau_k, t], \mathbb{R}^n)$, as required. On the other hand,
$$\begin{aligned}
J(x_t, u^*_{[t,t+T]}, t, t+T) &= \int_t^{T}\left[q(x(s)) + r(u(s))\right]ds + q_0(x(T)) + \sum_{i=1}^{k}\int_{T-\tau_i}^{T} q_i(x(s))\,ds\\
&\ge \int_t^{T}\left[\alpha_L(\|x(s)\|) + \beta_L(\|u(s)\|)\right]ds + \gamma_L^0(\|x(t)\|) + \sum_{i=1}^{k}\int_{T-\tau_i}^{T} \gamma_L^i(\|x(s)\|)\,ds \qquad (7.16)
\end{aligned}$$
that delivers the desired function $\tilde{u}$.
The right-sided Dini derivative of the optimal value function along the trajectory of the receding horizon controlled system is defined by
$$D_t^+ J(x_t, u^*_{[t,t+T]}, t, t+T) \triangleq \limsup_{\sigma \to 0^+} \sigma^{-1}\left[J(x_{t+\sigma}, u^*_{[t+\sigma,t+\sigma+T]}, t+\sigma, t+\sigma+T) - J(x_t, u^*_{[t,t+T]}, t, t+T)\right] \qquad (7.17)$$
Now, in view of the assumptions made and the result of Theorem 7.2.1, there exists a right-sided neighborhood of zero, $N_\epsilon(0) \triangleq \{\sigma \in \mathbb{R} \mid 0 \le \sigma < \epsilon\}$ for some $\epsilon > 0$, such that
$$J(x_t, u^*_{[t,t+T+\sigma]}, t, t+T+\sigma) \le J(x_t, u^*_{[t,t+T]}, t, t+T) \qquad (7.18)$$
for all $\sigma \in N_\epsilon(0)$, all $t > 0$, and all $x_t \in C([t-\tau_k, t], \mathbb{R}^n)$. Hence, in particular, for any $\theta \in N_\epsilon(0)$, employing Bellman's Principle of Optimality, one obtains
$$\begin{aligned}
J(x_t, u^*_{[t,t+T]}, t, t+T) &= \int_t^{t+\theta}\left[q(x^*(s)) + r(u^*_{[t,t+T]}(s))\right]ds + J(x^*_{t+\theta}, u^*_{[t+\theta,t+T]}, t+\theta, t+T)\\
&\ge \int_t^{t+\theta}\left[q(x^*(s)) + r(u^*_{[t,t+T]}(s))\right]ds + J(x^*_{t+\theta}, u^*_{[t+\theta,t+T+\theta]}, t+\theta, t+T+\theta) \qquad (7.19)
\end{aligned}$$
where $x^*$ denotes the trajectory corresponding to the optimal control $u^*_{[t,t+T]}$, and $x^\theta$ denotes the trajectory corresponding to the optimal control $u^*_{[t+\theta,t+T+\theta]}$. Rearranging the above and proceeding to the limit as $\theta \to 0^+$ yields
$$\begin{aligned}
D_t^+ J(x_t, u^*_{[t,t+T]}, t, t+T) &\le -\left[q(x^*(t)) + r(u^*_{[t,t+T]}(t))\right]\\
&\le -q(x^*(t))\\
&\le -\alpha_L(\|x(t)\|)
\end{aligned} \qquad (7.20)$$
which holds for all $t > 0$, all $T > 0$, and all $x_t \in C([t-\tau_k, t], \mathbb{R}^n)$. It suffices to define
$$w(\|x(t)\|) = \alpha_L(\|x(t)\|) \qquad (7.21)$$
which completes the proof.
The last proposition delivers immediately the desired stabilization result.
Theorem 7.3.1. Assume that system (7.1) has the property specified in Theorem
7.2.1. Then, the receding horizon control law based on this cost function is globally
and uniformly stabilizing for system (7.1).
Proof. The proof is immediate in view of the result of Proposition 7.3.1 and follows
from a standard stability theorem for time delayed systems; see [45], p.132.
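The stability theorem referred to is of the Lyapunov–Krasovskii type; restated loosely here for the reader's convenience (the precise hypotheses should be taken from [45]), it has the following form: if there exist continuous nondecreasing functions $u, v, w: \mathbb{R}^+ \to \mathbb{R}^+$ with $u(0) = v(0) = 0$ and $u(s), v(s), w(s) > 0$ for $s > 0$, and a continuous functional $V(t, x_t)$ such that
$$u(\|x(t)\|) \le V(t, x_t) \le v(\|x_t\|_c), \qquad \dot{V}(t, x_t) \le -w(\|x(t)\|),$$
then the zero solution of the time delayed system is uniformly asymptotically stable. Proposition 7.3.1 supplies exactly such bounds with $V = J(x_t, u^*_{[t,t+T]}, t, t+T)$, $u = \tilde{u}$, $v = \tilde{v}$ (noting that $\tilde{v}(\|x(t)\|) \le \tilde{v}(\|x_t\|_c)$), and the right-sided Dini derivative in place of $\dot{V}$.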
7.4 Feasible solution to a particular type of nonlinear time delayed systems
In general, finding feasible $q_i(\cdot)$, $i = 0, \ldots, k$, in Theorem 7.2.1 for nonlinear time delayed systems (7.1) is difficult. In this section, we introduce an approach of Kwon et al. [74] for a particular type of nonlinear time delayed systems. In this approach, feasible $q_i(\cdot)$, $i = 0, \ldots, k$, satisfying the inequality condition (7.5) can be easily obtained by solving an LMI problem.
The particular type of nonlinear time delayed system considered is:
$$\dot{x}(t) = f(x(t), x(t-\tau), Bu(t)) = Ax(t) + Hp(x(t)) + A_1x(t-\tau) + H_1g(x(t-\tau)) + Bu(t) \qquad (7.22)$$
with initial condition:
$$x(s) = \phi(s), \quad s \in [-\tau, 0], \qquad p(0) = 0, \quad g(0) = 0,$$
where $H$ and $H_1$ are constant matrices with appropriate dimensions and the functions $f, p, g: \mathbb{R}^n \to \mathbb{R}^n$. There exist some constant matrices $N$, $M$, $N_1$, and $M_1$ for the functions $p$ and $g$ of (7.22) such that the following inequalities:
$$\|p(x(t)) - Nx(t)\|_2 \le \|Mx(t)\|_2, \qquad \|g(x(t)) - N_1x(t)\|_2 \le \|M_1x(t)\|_2 \qquad (7.23)$$
are valid. We assume that the cost penalties $q(\cdot)$ and $r(\cdot)$ have quadratic forms:
$$q(x(t)) = x^T(t)Qx(t), \qquad r(u(t)) = u^T(t)Ru(t) \qquad (7.24)$$
where Q and R are symmetric, positive definite matrices. Then the following
theorem for a terminal weighting functional provides a systematic method to obtain
a receding horizon control law which stabilizes the nonlinear time delayed system
(7.22):
Theorem 7.4.1. ([71, 74]) If there exist a symmetric, positive definite matrix $X > 0$, as well as some matrices $Y$, $Z$ and scalars $\varepsilon$, $\delta$ such that
$$\begin{bmatrix}
X_{11} & (A_1+H_1N_1)Z & X & X & Y^T & XM^T & 0\\
Z(A_1+H_1N_1)^T & -Z & 0 & 0 & 0 & 0 & ZM_1^T\\
X & 0 & -Z & 0 & 0 & 0 & 0\\
X & 0 & 0 & -Q^{-1} & 0 & 0 & 0\\
Y & 0 & 0 & 0 & -R^{-1} & 0 & 0\\
MX & 0 & 0 & 0 & 0 & -\varepsilon I & 0\\
0 & M_1Z & 0 & 0 & 0 & 0 & -\delta I
\end{bmatrix} < 0 \qquad (7.25)$$
where
$$X_{11} \triangleq (AX+BY) + (AX+BY)^T + \varepsilon HH^T + \delta H_1H_1^T + HNX + XN^TH^T$$
then the inequality condition (7.5) is satisfied with the control
$$u(x_t) \triangleq Kx(t) \qquad (7.26)$$
using $q_0(x(t)) = x^T(t)Px(t)$, $q_1(x(t)) = x^T(t)Sx(t)$, where $P$ and $S$ are symmetric positive definite matrices, and $P$, $S$, and $K$ can be obtained by letting
$$P \triangleq X^{-1}, \qquad S \triangleq Z^{-1}, \qquad K = YX^{-1}$$
Proof. Assume that $q_0(x(t)) = x^T(t)Px(t)$, $q_1(x(t)) = x^T(t)Sx(t)$, and $u(x_t) = Kx(t)$. Then the inequality condition (7.5) can be re-written as
$$\begin{aligned}
&q(x(\sigma)) + r(u(x_\sigma)) + \left(\frac{\partial q_0}{\partial x}\right)^{T} f(x(\sigma), x(\sigma-\tau), u(x_\sigma)) + q_1(x(\sigma)) - q_1(x(\sigma-\tau))\\
&= x^T(\sigma)Qx(\sigma) + (Kx(\sigma))^T R(Kx(\sigma)) + 2x^T(\sigma)Pf(x(\sigma), x(\sigma-\tau), Kx(\sigma)) + x^T(\sigma)Sx(\sigma) - x^T(\sigma-\tau)Sx(\sigma-\tau)\\
&= x^T(\sigma)Qx(\sigma) + x^T(\sigma)K^TRKx(\sigma) + 2x^T(\sigma)P\left[(A+BK)x(\sigma) + Hp(x(\sigma)) + A_1x(\sigma-\tau) + H_1g(x(\sigma-\tau))\right]\\
&\quad + x^T(\sigma)Sx(\sigma) - x^T(\sigma-\tau)Sx(\sigma-\tau)\\
&= x^T(\sigma)\left[(A+BK)^TP + P(A+BK) + Q + K^TRK + S\right]x(\sigma) + x^T(\sigma)PA_1x(\sigma-\tau) + x^T(\sigma-\tau)A_1^TPx(\sigma)\\
&\quad + x^T(\sigma)PHp(x(\sigma)) + p^T(x(\sigma))H^TPx(\sigma) + x^T(\sigma)PH_1g(x(\sigma-\tau)) + g^T(x(\sigma-\tau))H_1^TPx(\sigma) - x^T(\sigma-\tau)Sx(\sigma-\tau)
\end{aligned} \qquad (7.27)$$
Using the inequalities in (7.23) and the following general property
$$a^Tb + b^Ta \le \theta a^Ta + \frac{1}{\theta}b^Tb, \qquad \theta \in \mathbb{R}^+ \qquad (7.28)$$
yields
$$\begin{aligned}
&x^T(\sigma)PHp(x(\sigma)) + p^T(x(\sigma))H^TPx(\sigma)\\
&= x^T(\sigma)PH\left[p(x(\sigma)) - Nx(\sigma)\right] + \left[p(x(\sigma)) - Nx(\sigma)\right]^TH^TPx(\sigma) + x^T(\sigma)PHNx(\sigma) + x^T(\sigma)N^TH^TPx(\sigma)\\
&\le \varepsilon x^T(\sigma)PHH^TPx(\sigma) + \varepsilon^{-1}x^T(\sigma)M^TMx(\sigma) + x^T(\sigma)PHNx(\sigma) + x^T(\sigma)N^TH^TPx(\sigma)
\end{aligned} \qquad (7.29)$$
Similarly, we obtain
$$\begin{aligned}
&x^T(\sigma)PH_1g(x(\sigma-\tau)) + g^T(x(\sigma-\tau))H_1^TPx(\sigma)\\
&\le \delta x^T(\sigma)PH_1H_1^TPx(\sigma) + \delta^{-1}x^T(\sigma-\tau)M_1^TM_1x(\sigma-\tau) + x^T(\sigma)PH_1N_1x(\sigma-\tau) + x^T(\sigma-\tau)N_1^TH_1^TPx(\sigma)
\end{aligned} \qquad (7.30)$$
Using the relations (7.29) and (7.30), Equation (7.27) can be re-written as
$$q(x(\sigma)) + r(k(x_\sigma)) + \left(\frac{\partial q_0}{\partial x}\right)^{T} f(x(\sigma), x(\sigma-\tau), u(x_\sigma)) + q_1(x(\sigma)) - q_1(x(\sigma-\tau)) \le \eta_\sigma^{T}\begin{bmatrix} W_{11} & P(A_1+H_1N_1)\\ (A_1+H_1N_1)^{T}P & W_{22}\end{bmatrix}\eta_\sigma = \eta_\sigma^{T}\Theta\eta_\sigma \qquad (7.31)$$
where $\Theta \triangleq \begin{bmatrix} W_{11} & P(A_1+H_1N_1)\\ (A_1+H_1N_1)^{T}P & W_{22}\end{bmatrix}$, $\eta_\sigma \triangleq [x^{T}(\sigma),\, x^{T}(\sigma-\tau)]^{T}$, and
$$W_{11} \triangleq (A+BK)^{T}P + P(A+BK) + S + Q + K^{T}RK + PHN + N^{T}H^{T}P + \varepsilon PHH^{T}P + \delta PH_1H_1^{T}P + \varepsilon^{-1}M^{T}M,$$
$$W_{22} \triangleq -S + \delta^{-1}M_1^{T}M_1$$
If the matrix Θ is negative definite, then the cost monotonicity condition is satisfied. From Schur’s complement, the negative definiteness of Θ is equivalent to

$$\begin{bmatrix}
P_{11} & P(A_1+H_1N_1) & I & I & K^T & M^T & 0\\
(A_1+H_1N_1)^TP & -S & 0 & 0 & 0 & 0 & M_1^T\\
I & 0 & -S^{-1} & 0 & 0 & 0 & 0\\
I & 0 & 0 & -Q^{-1} & 0 & 0 & 0\\
K & 0 & 0 & 0 & -R^{-1} & 0 & 0\\
M & 0 & 0 & 0 & 0 & -\varepsilon I & 0\\
0 & M_1 & 0 & 0 & 0 & 0 & -\delta I
\end{bmatrix} < 0 \qquad (7.32)$$
where $P_{11} \triangleq (A+BK)^TP + P(A+BK) + PHN + N^TH^TP + \varepsilon PHH^TP + \delta PH_1H_1^TP$.
Pre-multiplying and post-multiplying (7.32) by the matrix $\mathrm{diag}(P^{-1}, S^{-1}, I, I, I, I, I)$ and letting $X = P^{-1}$, $Y = KP^{-1}$, $Z = S^{-1}$ yields the LMI in (7.25), and this proves the theorem.
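For completeness, the Schur complement fact invoked above, in passing from the negative definiteness of $\Theta$ to (7.32), can be stated as follows (a standard result, restated here for convenience): for a symmetric block matrix with $D \prec 0$,
$$\begin{bmatrix} E & F\\ F^T & D \end{bmatrix} \prec 0 \quad\Longleftrightarrow\quad D \prec 0 \ \text{ and } \ E - FD^{-1}F^T \prec 0.$$
Applying this with $D = \mathrm{diag}(-S^{-1}, -Q^{-1}, -R^{-1}, -\varepsilon I, -\delta I)$ in (7.32) recovers exactly the matrix $\Theta$ of (7.31).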
7.5 RHC with online delay identification
In receding horizon control, the process model plays a decisive role in the design of the controller. The model must be capable of capturing the process dynamics so as to precisely predict the future outputs. For uncertain time delayed systems, robust control has so far been employed to compensate for delay uncertainties, but the associated robust designs tend to be very conservative; see, for example, [70]. To reduce such conservativeness, in this chapter, an adaptive receding horizon control technique for time delayed systems will be proposed.
A fundamental issue for RHC with on-line identification is the adaptation of the identified model to changes in the actual process dynamics. The delay identification approach proposed in the previous chapter is employed here in an on-line fashion. The estimated predictive model is obtained by applying on-line delay identification. This updated model is then used as a basis for obtaining the receding horizon control law. The block diagram of the MPC with on-line identification is shown in Figure 7–1.
Figure 7–1: Block diagram of the RHC with on-line delay identification
The predictive model changes at each sampling instant $t$. In the adaptive system, the system model provides an estimate of the system output at the current instant of time using the current estimate of the delay. In the predictive controller, the estimated model is used to formulate the predictive model at instant $t$ and also to derive the control law. The following procedure may be used to implement the RHC scheme with on-line delay identification; a minimal code sketch of this loop is given after the steps.
Step 1) Initialization: Set up the initial conditions of the system, the prediction horizon N, the measurement frequency, the step size of the numerical integration, and the step size α for the delay identifier algorithm.
Step 2) Model prediction: Based on the known values up to time t (past inputs and outputs), the delays in the model are updated by using the delay identifier algorithm.
Step 3) Prediction correction: For the adopted horizon N, the predicted outputs y(t + k|t), k = 1, ..., N, and future control signals u(t + k|t), k = 0, ..., N − 1, are obtained by optimizing the given performance criterion.
Step 4) RHC control: Only the control signal u(t|t) is applied to the process and the model over the interval [t, t + 1].
Step 5) Repeat: Go to Step 2.
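To make the loop above concrete, here is a minimal sketch (not the thesis' own code), assuming the scalar example (7.35) treated later in this chapter: the fixed gain obtained from Theorem 7.4.1 stands in for the finite-horizon optimization of Step 3, and a crude one-step finite-difference gradient update stands in for the delay identifier of Step 2. The step size α, the clipping interval, the sampling rate and the finite-difference increment are choices made only for this sketch.

```python
# Minimal sketch of the Step 1-5 loop (assumptions noted above): the plant runs
# with the true delay tau_true, the model runs with the current estimate tau_hat,
# and tau_hat is updated by a crude finite-difference gradient step on the squared
# prediction error (a stand-in for the thesis' steepest descent / Newton identifiers).
import numpy as np

dt, t_end = 0.01, 6.0
tau_true, tau_hat = 1.0, 0.1
alpha, K = 0.1, -7.5865                        # identifier step size, feedback gain
sample_every = int(round(0.05 / dt))           # delay update every 0.05 s (assumed)

def delayed(hist, tau):
    """Stored trajectory value tau seconds in the past (zero-order hold)."""
    k = int(round(tau / dt))
    return hist[-1 - k] if k < len(hist) else hist[0]

def f(x, x_del, u):
    return x * np.sin(x) + x_del + u           # right-hand side of (7.35)

x_hist = [10.0] * (int(round(tau_true / dt)) + 1)   # plant history, phi(s) = 10
xm_hist = list(x_hist)                               # model history

for step in range(int(round(t_end / dt))):
    x, xm = x_hist[-1], xm_hist[-1]
    u = K * xm                                  # control based on the current model
    if step % sample_every == 0:                # Steps 2-3: update tau_hat
        e = xm - x                              # prediction error
        d = 1e-3                                # finite-difference increment
        de = (delayed(xm_hist, tau_hat + d) - delayed(xm_hist, tau_hat)) / d
        tau_hat = float(np.clip(tau_hat - alpha * e * dt * de, 0.0, 2.0))
    # Step 4: apply u over [t, t+dt] to the plant and to the model
    x_hist.append(x + dt * f(x, delayed(x_hist, tau_true), u))
    xm_hist.append(xm + dt * f(xm, delayed(xm_hist, tau_hat), u))

print("final delay estimate:", tau_hat, "  final plant state:", x_hist[-1])
```

The structure mirrors Figure 7–1: the identifier closes an outer loop around the model, while only the first control value is applied to the process over each interval.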
7.6 Numerical example
In this section, numerical examples are presented to illustrate the efficiency
of the proposed method.
7.6.1 Examples of RHC for a special type of time delayed systems
The example considered here is adopted from [75]. A nonlinear time delayed system is given by
$$\dot{x}(t) = x(t)\sin(x(t)) + x(t-1) + u(t) \qquad (7.33)$$
with initial condition
$$\phi(s) = 10, \quad -1 \le s \le 0$$
This system belongs to the class considered in Section 7.4; in the notation of (7.22)–(7.23), we have $A = 0$, $H = 1$, $p(x) = x\sin(x)$, $N = 0$, $M = 1$, $A_1 = 1$, $B = 1$, and $g = 0$ (so that $H_1$, $N_1$, and $M_1$ play no role). Applying Theorem 7.4.1 with $Q = 1$ and $R = 1$, we obtain
$$P = 7.17, \qquad S = 2.944, \qquad K = -7.5865$$
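As an illustration of how such values can be computed, the following is a minimal sketch (not the thesis' code) that poses the LMI (7.25) in CVXPY for the data of this example. The mapping of the example data onto the matrices of (7.22)–(7.23), the solver choice, and the small positive margin used to enforce strict definiteness are assumptions of this sketch.

```python
# Minimal sketch: solving the LMI (7.25) of Theorem 7.4.1 for the scalar example.
# In the notation of (7.22)-(7.23): A = 0, H = 1, p(x) = x sin(x) with N = 0,
# M = 1, A1 = 1, B = 1, and g = 0 (so H1 = N1 = M1 = 0).
import numpy as np
import cvxpy as cp

n, m = 1, 1
A, A1, B = np.zeros((n, n)), np.ones((n, n)), np.ones((n, m))
H, N, M = np.ones((n, n)), np.zeros((n, n)), np.ones((n, n))
H1, N1, M1 = np.zeros((n, n)), np.zeros((n, n)), np.zeros((n, n))
Q, R = np.eye(n), np.eye(m)

X = cp.Variable((n, n), symmetric=True)   # X = P^{-1}
Z = cp.Variable((n, n), symmetric=True)   # Z = S^{-1}
Y = cp.Variable((m, n))                   # Y = K P^{-1}
eps, dlt = cp.Variable(), cp.Variable()

X11 = (A @ X + B @ Y) + (A @ X + B @ Y).T + eps * (H @ H.T) + dlt * (H1 @ H1.T) \
      + H @ N @ X + X @ N.T @ H.T
Onn, Onm, Omn = np.zeros((n, n)), np.zeros((n, m)), np.zeros((m, n))
LMI = cp.bmat([  # block matrix of (7.25); symmetric by construction
    [X11, (A1 + H1 @ N1) @ Z, X, X, Y.T, X @ M.T, Onn],
    [Z @ (A1 + H1 @ N1).T, -Z, Onn, Onn, Onm, Onn, Z @ M1.T],
    [X, Onn, -Z, Onn, Onm, Onn, Onn],
    [X, Onn, Onn, -np.linalg.inv(Q), Onm, Onn, Onn],
    [Y, Omn, Omn, Omn, -np.linalg.inv(R), Omn, Omn],
    [M @ X, Onn, Onn, Onn, Onm, -eps * np.eye(n), Onn],
    [Onn, M1 @ Z, Onn, Onn, Onm, Onn, -dlt * np.eye(n)],
])
dim, margin = 6 * n + m, 1e-6
constraints = [X >> margin * np.eye(n), Z >> margin * np.eye(n),
               eps >= margin, dlt >= margin,
               LMI << -margin * np.eye(dim)]
cp.Problem(cp.Minimize(0), constraints).solve(solver=cp.SCS)

P = np.linalg.inv(X.value)
S = np.linalg.inv(Z.value)
K = Y.value @ np.linalg.inv(X.value)
print("P =", P, "\nS =", S, "\nK =", K)
```

Since (7.25) is only a feasibility condition, any feasible point is acceptable, so the values returned by such a solver need not coincide with the $P$, $S$, $K$ quoted above.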
The step size for numerical integration is taken to be 0.01 seconds. For the receding horizon implementation, the state measurement is taken at a sampling time of 0.05 seconds and the horizon length $T$ is 1 second. Figure 7–2 compares the state trajectories for the RHC with those for the constant state feedback $u(t) = Kx(t)$ with the above value of $K$; Figure 7–3 compares the control trajectories. The integrated costs are as follows:
$$J_{RHC} = 34.8201, \qquad J_{KX} = 39.8288$$
where $J_{RHC}$ is the cost for the RHC and $J_{KX}$ is the cost for the constant state-feedback controller. Note that $J_{RHC}$ is less than $J_{KX}$ by about 15%. This result is expected, since the RHC has more degrees of freedom than a constant state feedback in minimizing the cost. From this numerical example, it is seen that the proposed RHC is stabilizing in closed loop.
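For comparison purposes, the constant-feedback part of this experiment is easy to reproduce. The sketch below (mine, not the thesis' code) integrates (7.33) under $u = Kx$ with an explicit Euler step and accumulates the running cost with $Q = R = 1$; the simulation length and the left-endpoint quadrature rule are assumptions, so the resulting value only approximates the reported $J_{KX}$, and reproducing $J_{RHC}$ would additionally require solving the finite-horizon optimal control problem at every sampling instant.

```python
# Minimal sketch: closed-loop simulation of (7.33) under u = Kx with explicit
# Euler integration, accumulating the running cost with Q = R = 1.
import numpy as np

dt, t_end, tau = 0.01, 4.0, 1.0
K, Q, R = -7.5865, 1.0, 1.0
n_delay = int(round(tau / dt))

hist = [10.0] * (n_delay + 1)     # initial condition phi(s) = 10 on [-1, 0]
cost = 0.0
for _ in range(int(round(t_end / dt))):
    x, x_del = hist[-1], hist[-1 - n_delay]
    u = K * x
    cost += dt * (Q * x**2 + R * u**2)                   # running cost
    hist.append(x + dt * (x * np.sin(x) + x_del + u))    # Euler step
print("approximate integrated cost under u = Kx:", cost)
```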
Figure 7–2: State trajectories (RHC: solid line, u = Kx: dotted line)
Figure 7–3: Control trajectories (RHC: solid line, u = Kx: dotted line)
The next example considered here is acquired from [43, 74]. A nonlinear second-order time delayed system is given by
$$\begin{bmatrix}\dot{x}_1(t)\\ \dot{x}_2(t)\end{bmatrix} = \begin{bmatrix}0 & 1\\ 0 & 0\end{bmatrix}\begin{bmatrix}x_1(t)\\ x_2(t)\end{bmatrix} + 0.7\begin{bmatrix}\dfrac{x_1(t-\tau)}{\sqrt{1+x_1^2(t-\tau)}}\\ x_2(t-\tau)\end{bmatrix} + \begin{bmatrix}0\\ 1\end{bmatrix}u(t) \qquad (7.34)$$
with initial condition
$$\phi(s) = \begin{bmatrix}100s+3\\ -200s+1\end{bmatrix}, \quad -1 \le s \le 0$$
This system also belongs to the class considered in Section 7.4. Furthermore, note that
$$g(x) = \begin{bmatrix}\dfrac{x_1(t)}{\sqrt{1+x_1^2(t)}}\\ x_2(t)\end{bmatrix},$$
and we select $L_g = \begin{bmatrix}0.5 & 0\\ 0 & 1\end{bmatrix}$ and $M_g = \begin{bmatrix}0.5 & 0\end{bmatrix}$ to satisfy the condition $\|g(x) - L_gx\|_2 \le \|M_gx\|_2$. For this system, in the notation of (7.22)–(7.23), $A_1 = 0$, $p = 0$ (so that $H$, $N$, and $M$ play no role), $H_1 = 0.7\,I$, $N_1 = L_g$, $M_1 = M_g$, and $\tau = 1$.
Applying Theorem 7.4.1 with $Q = I_{2\times 2}$ and $R = 1$, we obtain
$$P = \begin{bmatrix}18.3346 & 11.3716\\ 11.3716 & 10.1328\end{bmatrix}, \qquad S = \begin{bmatrix}3.3948 & 0.7360\\ 0.7360 & 1.4674\end{bmatrix}, \qquad K = \begin{bmatrix}-11.3881 & -10.2128\end{bmatrix}$$
The step size for numerical integration is taken to be 0.01 seconds. For the receding horizon implementation, the state measurement is taken at a sampling time of 0.05 seconds and the horizon length $T$ is 1 second. Figure 7–4 presents the state trajectories for the RHC of the nonlinear time delayed system, and Figure 7–5 shows the control trajectory. The proposed RHC law clearly stabilizes the system.
Figure 7–4: State trajectories of RHC
Figure 7–5: Control trajectory of RHC
7.6.2 RHC of time delayed systems with on-line delay identification
The example considered here is also taken from [75]. The nonlinear time delayed system to be identified is given by
$$\dot{x}(t) = x(t)\sin(x(t)) + x(t-\tau) + u(t) \qquad (7.35)$$
with initial condition
$$\phi(s) = 10, \quad -\tau \le s \le 0$$
In this example, the actual delay is $\hat{\tau} = 1$ and the initial delay guess is taken to be $\tau = 0.1$. In the simulation, the identifier step size $\alpha$ is 0.1, the prediction horizon is $T = 0.1$, the step size for numerical integration is 0.01, and the state measurement is taken at a sampling rate of 0.01 seconds. Figure 7–6 shows a comparison of the state trajectories of the actual and estimated models. The RHC closed-loop state trajectories of the actual and estimated models are presented in Figure 7–7.
Figure 7–6: Comparison of actual and estimated models
Figure 7–7: RHC closed-loop state trajectories of actual and estimated models
CHAPTER 8
Conclusions
The main results of this thesis are brought together here. A brief summary of the research carried out and some concluding remarks are provided in Section 8.1. Future research avenues are proposed in Section 8.2.
8.1 Summary of research
Delay identification techniques and receding horizon control schemes have been developed for time delayed systems. The objective of this research was to improve robustness with respect to delay perturbations in receding horizon control of time delayed systems. To fulfill this objective, delay identification approaches, receding horizon control of time delayed systems with multiple delays, and an adaptive receding horizon control with on-line delay estimation have been investigated.
Two algorithms for the identification of multiple time delays in linear time delayed systems were proposed in Chapter 4. The identification problem was posed
as a minimization problem in Hilbert spaces and necessitated the computation of
system sensitivity to delay perturbations. Delay identifiability conditions were derived in terms of the properties of the associated strong derivative of the adopted
cost function with respect to perturbation in the delays. Delay identifiability is
of a local type and was shown to be related to system controllability. The steepest descent and generalized Newton type algorithms were subsequently developed.
Convergence of the identifier algorithms was rigorously assessed. The identification approach developed in that chapter has several advantages:
• It is independent of system matrix identification, but may be used in
parallel with any such schemes.
• It is robust with respect to errors in the measured trajectory and exogenous input function – in the case when the system trajectory fails to
represent the adopted model, the algorithm still converges.
• The verification of the delay identifiability condition is relatively simple.
Delay identification for nonlinear time delayed systems was developed in
Chapter 5. A generalized Newton type algorithm for the identification of multiple
time delays was presented. Similarly to the previous chapter, the delay identification problem was first posed as a least squares optimization problem in a Hilbert
space. The cost function was defined as the square of the distance between the modelled
and the measured system trajectory. The gradient of the cost involved calculation
of the Fréchet derivative of the mapping of the delay parameter vector into a system trajectory, i.e. the sensitivity of the system’s state to the change in the delay
values. The identifier algorithm was shown to converge locally to the true value of
the delay parameter vector. The identification approach developed there is exact
under the obvious assumption of faithful measurements, and is also robust with
respect to errors in the measured trajectory and exogenous input function. The
method is helpful in reducing conservativeness of existing robust control designs
such as the one of [70].
In Chapter 6, we generalized and enhanced a previous result concerning receding horizon control for linear time delayed systems which accounted only for
the presence of a single delay in the system. Sufficient, computationally feasible,
conditions for the existence of a receding horizon control law were presented for a
general stabilizable system with multiple delays. A simple construction procedure
for the matrices representing the penalties for the terminal state in the open loop
optimal receding horizon cost function was delivered. It was shown that if the
system satisfies the stabilizability condition as stated, the receding horizon control
law is globally, uniformly, asymptotically stabilizing. The sensitivity of the system under the receding horizon controller with respect to uncertainties in the delay values was analyzed, and it was indicated that the closed loop system remains asymptotically stable for small perturbations in the delay values. The contributions of the chapter are:
• A particularly simple constructive procedure for the open loop cost penalties
on the terminal state was offered.
• A clear association was made between the stabilizability of the system and the existence of a stabilizing receding horizon control law.
• The stabilization result was proved rigorously, and all results pertain to time delayed systems with an arbitrary number of system delays.
• The sensitivity of the receding horizon control to perturbations in the delay values was also discussed, which provided additional insight into the receding horizon control design problem.
Receding horizon control for nonlinear time delayed systems was investigated in Chapter 7. The RH control law was obtained by minimizing a finite horizon cost, and closed-loop stability was guaranteed by satisfying an inequality condition on the terminal functional. However, for general nonlinear time delayed systems, it is difficult to find a feasible solution satisfying this inequality condition. A special class of nonlinear time delayed systems was therefore introduced, for which a systematic method to find a terminal weighting functional satisfying the proposed inequality condition was constructed. An RHC approach with on-line delay identification was then presented. Through simulation examples, the closed-loop stability of the proposed RHC was confirmed and the effectiveness of the RHC with on-line delay estimation was demonstrated.
The efficiency and usefulness of each of the above-mentioned contributions were further assessed at the end of each chapter with examples of applications. These include both linear and nonlinear time delayed systems.
8.2 Future research avenues
A few interesting avenues for future research are pointed out below.
8.2.1 State-dependent and time-varying delays and other parameters in time delayed systems
In Chapters 4 and 5, the problems solved focus on constant state delays. This work can be extended to the case of state-dependent delays, and the identification approach can be extended to the identification of other parameters in the system (e.g. the initial condition and the parameters in the model equations). Although some results on this topic have been published, see e.g. [95], they are not directly applicable in practical computational procedures. The approaches proposed in Chapters 4 and 5, however, have been shown to be fully implementable and practical.
8.2.2 Identification of measurement delays and input delays
Measurement delays and input delays also exist in many engineering systems such as transportation, communication, process engineering and networked control systems. Many results have been published on identifying measurement delays or input delays, see [49, 109, 112, 128, 131], but none of them investigates the simultaneous identification of all three types of delays (i.e. state delay, measurement delay, and input delay). Our algorithms may be extended to this case.
8.2.3 Receding horizon control of general nonlinear time delayed systems
For the last few decades, the problems of optimal control for nonlinear time delayed systems have been receiving constant attention, and many solution methods based on different principles have been constructed; see [2, 10, 11, 12, 24, 29, 30, 46, 63, 75, 84]. Since the receding horizon control scheme is based on optimal control, it is important to derive yet more efficient general computational approaches to the solution of open loop optimal control problems for time delayed systems, which may provide hints for implementing receding horizon control for general nonlinear time delayed systems.
8.2.4 Adaptive receding horizon control for time delayed systems
In Chapter 7, receding horizon control with on-line delay identification was
discussed. However, the convergence of the adaptive scheme still needs justification. It would also be useful to evaluate the robustness bounds with respect to
perturbations in the delay values. It follows from our examples that any attempt
to identify or approximate the real system delays, in an on-line mode, will radically
reduce the conservativeness in the design of model predictive control strategies for
uncertain time delayed systems.
References
[1] G. Abdallah, R. Dorato, J. Benitez-Read, and R. Byrne. Delayed positive feedback can stabilize oscillatory systems. American control conference,
pages 3106–3107, San Francisco 1993.
[2] J.K. Aggarwal. Computation of optimal control for time-delay systems.
IEEE Transactions on Automatic Control, pages 683–685, December 1970.
[3] A. Ailon and M.I. Gil. Stability analysis of a rigid robot with output-based
controller and time-delay. Systems and Control Letters, 40(1):31–35, 2000.
[4] C.T.H. Baker. Retarded differential equations. Journal of Computational
and Applied Mathematics, 125:309–335, 2000.
[5] C.T.H. Baker, G. Bocharov, and F.A. Rihan. A report on the use of delay
differential equations in numerical modelling in the biosciences. Technical
Report 343, Manchester centre for computational mathematics, the university of Manchester, 1999.
[6] H.T. Banks. Approximation of nonlinear functional differential equation control systems. Journal of Optimization Theory and Applications, 29(3):383–
408, 1979.
[7] H.T. Banks, J.A. Burns, and E.M. Cliff. Parameter estimation and identification for systems with delays. SIAM J. Control and Optimization, 19(6):791–
828, 1981.
[8] H.T. Banks and P.K. Daniel Lamm. Estimation of delays and other parameters in nonlinear functional differential equations. SIAM Journal on Control
and Optimization, 21:895–915, 1983.
[9] H.T. Banks and F. Kappel. Spline approximations for functional differential
equations. J. Differential Equations, 34:496–522, 1979.
[10] H.T. Banks and G.A. Kent. Control of functional differential equations of
retarded and neutral type to target sets in function space. SIAM Journal on Control, 10(4):567–593, 1972.
[11] M. Basin and J. Rodriguez-Gonzalez. Optimal control for linear systems
with multiple time delays in control input. IEEE Transactions on Automatic
Control, 51(1):91–97, January 2006.
[12] R.R. Bate. The optimal control of systems with transport lag. Advances in
Control Systems, 7:165–224, 1969.
[13] L. Belkoura and Y. Orlov. Identifiability analysis of linear-differential systems. IMA Journal of Mathematical Control and Information, 19:73–81,
2002.
[14] L. Belkoura, J.P. Richard, and M. Fliess. On-line identification of systems
with delayed inputs. 17th Symposium on Mathematical Theory of Networks
and Systems (MTNS), Kyoto, Japon , July 2006.
[15] E. Biberovic, A. Iftar, and H. Ozbay. A solution to the robust flow control problem for networks with multiple bottlenecks. IEEE Conference on
decision and control, pages 2303–2308, Orlando, FL, December 2001.
[16] G.A. Bocharov and F.A. Rihan. Numerical modelling in biosciences using
delay differential equations. Journal of Computational and Applied Mathematics, 125:183–199, 2000.
[17] M. Bohm, M.A. Demetriou, S. Reich, and L.G. Rosen. Model reference
adaptive control of distributed parameter systems. SIAM Journal on Control
and Optimization, 35:678–713, 1997.
[18] S. Boukas and Z. Liu. Deterministic and stochastic time delay systems.
Birkhauser, Boston, 2002.
[19] R.W. Brockett. Finite Dimensional Linear Systems. John Wiley Sons Inc.,
New York, 1970.
[20] J.A. Burns and E.M. Cliff. On the formulation of some distributed system
parameter identification problems. Proc. AIAA Symposium on Dynamics
and Control of Large Flexible Spacecraft, pages 87–105, 1977.
[21] L. Bushnell. Editorial: Networks and control. IEEE Control System Magazine, 21(1):22–99, 2001. special section on networks and control.
[22] E.F. Camacho and C. Bordons. Model Predictive Control. Springer, 2003.
[23] S. Chatterjee, K. Das, and J. Chattopadhyah. Time delay factor can be used
as a key factor for preventing the outbreak of a disease - results drawn from
a mathematical study of a one season eco-epidemiological model. Nonlinear
Analysis: Real World Application, 8:1472–1493, 2007.
[24] C.L. Chen, D.Y. Sun, and C.Y. Chang. Numerical solution of time-delayed
optimal control problems by iterative dynamic programming. Optimal Control Applications and Methods, 21:91–105, 2000.
[25] Tongwen Chen and Bruce Francis. Optimal sampled-data control systems.
Springer, London; New York, 1995.
[26] E.N. Chukwu. Stability and Time-Optimal Control of Hereditary Systems,
volume 188 of Mathematics in Science and Eng. Series. Academic Press,
San Diego, CA, U.S.A., 1992.
[27] E.N. Chukwu. Stability and time-optimal control of hereditary systems: with
application to the economic dynamics of the US. World Scientific, New Jersey,
2001.
[28] D.W. Clarke, C. Mohtadi, and S. Tuffs. Generalized predictive control: Part
1:the basic algorithm. Automatica, 23:137–148, 1987.
[29] M.C. Delfour. The linear quadratic optimal control problem for hereditary
differential systems: Theory and numerical solution. Applied Mathematics
Optimization, 3(2):101–162, 1977.
[30] V. Deshmukh, H. Ma, and E. Butcher. Optimal control of parametrically
excited linear delay differential systems via chebyshev polynomials. Optimal
Control Applications and Methods, 27:123–136, 2006.
[31] O. Diekmann, S. Gils, S. Verduyn Lunel, and H. Walther. Delay equations:
functional-, complex-, and nonlinear analysis. Springer-Verlag, New York,
1995.
[32] S.V. Drakunov, W. Perruquetti, J.P. Richard, and L. Belkoura. Delay identification in time-delay systems using variable structure observers. Annual
Reviews in Control, 30:143–158, 2006.
[33] D.H. Eller, J.K. Aggarwal, and H.T. Banks. Optimal control of linear timedelay systems. IEEE Trans. on Automatic Control, 14(6):678–687, 1969.
[34] L.E. Elsgolts and S.B. Norkin. Introduction to the Theory and Application
of Differential Equations with Deviating Arguments. Academic Press, New
York, 1973.
[35] A.T. Fuller. Optimal nonlinear control of systems with pure delay. International Journal of Control, 8(2):145–168, 1968.
[36] C.E. Garcia, D.M. Prett, and M. Morari. Model predictive control: Theory
and practice - a survey. Automatica, 25:335–348, 1989.
[37] A.N. Godunov. Peano’s theorem in banach spaces. Journal of Functional
Analysis and Its Applications, 9:53–55, 1975.
[38] O. Gomez, Y. Orlov, and I.V. Kolmanovsky. On-line identification of SISO linear time-invariant delay systems from output measurements. Automatica, 2007. Available online August 2007, to appear.
[39] H. Gorecki, S. Fuksa, P. Grabowski, and A. Korytowski. Analysis and synthesis of time delay systems. John Wiley Sons, Warszawa, 1989.
[40] K. Gu, V. Kharitonov, and J. Chen. Stability of time-delay systems. Birkhauser, Boston, 2003.
[41] K. Gu and S.L. Niculescu. Survey on recent results in the stability and
control of time-delay systems. ASME Transaction on Journal of Dynamic
Systems, Measurement, and Control, 125:158–165, June 2003.
[42] W.M. Haddad, V. Chellaboina, and T. Rajpurohit. Dissipativity theory for
nonnegative and compartmental dynamical systems with time delay. IEEE
Transactions on Automatic Control, 49(5):747–751, May 2004.
[43] W.M. Haddad, V. Kapila, and C.T. Abdallah. Stability and control of time-delay systems, volume 228, chapter Stabilization of Linear and Nonlinear Systems with Time Delay, pages 205–217. Springer, Berlin/Heidelberg, 1998.
[44] J. Hale. Theory of functional differential equations. Springer-Verlag, New
York, 1977.
[45] J. Hale and S. Verduyn Lunel. Introduction to functional differential equations. Springer Verlag, New York, 1993.
[46] R.A. Hess. Optimal control approximations for time delay systems. AIAA
Journal, 10(11):1536–1538, 1972.
[47] H.Y. Hu and Z.H. Wang. Dynamics of Controlled Mechanical Systems with
Delayed Feedback. Springer, 2002.
[48] X.B. Hu and W.H. Chen. Model predictive control for constrained systems
with uncertain state-delays. International Journal of Robust and Nonlinear
Control, 14:1421–1432, 2004.
[49] H. Iemura, Z.J. Yang, S. Kanae, and K. Wada. Identification of continuoustime systems with unknown time delays by global nonlinear least-squares
method. IFAC workshop on Adaptation and Learning in Control and Signal
Processing, and IFAC workshop on Periodic Control Systems, pages 191–196,
2004. Yokohama, Japan.
[50] N. Jalili and N. Olgac. Optimum delayed feedback vibration absorber for
mdof mechanical structure. IEEE 37th conference on decision and control,
pages 4734–4739, Tampa, FL, December 1998.
[51] S.C. Jeong and P.G. Park. Constrained mpc algorithm for uncertain timevarying systems with state-delay. IEEE transactions on Automatic Control,
50(2):257–263, 2005.
[52] A. Kim, H.K. Kwon, and L. Volkanin. Time-Delay System Toolbox 0.6. Institute of Mathematics and Mechanics Russian Academy of Sciences and Engineering Research Center for Advanced Control and Instrumentation Seoul
National University, 2001.
[53] K.B. Kim. Implementation of stabilizing receding horizon controls for linear
time-varying systems. Automatica, 38:1705–1711, 2002.
[54] E.D. Kirk. Optimal Control Theory. Prentice-Hall Inc., Englewood Cliffs,
New Jersey, 1970.
[55] V. Kolmanovskii and A. Myshkis. Introduction to the theory and applications
of functional differential equations. Kluwer Academic, Dordrecht, 1999.
[56] V. Kolmanovskii and V. Nosov. Stability of functional differential equations.
Academic Press, London, 1986.
[57] V. Kolmanovskii and L. Shaikhet. Control systems with aftereffect. American
Mathematical Society, Rhode Island, 1996.
[58] V.B. Kolmanovskii and A. Myshkis. Applied theory of functional defferential
equations, volume 85 of Mathematics and Applications. Kluwer Academy,
Dordrecht, 1992.
[59] V.B. Kolmanovskii, S.I. Niculescu, and K. Gu. Delay effects on stability: A
survey. Proceedings of the 38th Conference on Decision and Control, pages
1993–1998, Phoenix, Arizona USA, December 1999.
[60] M.V. Kothare, V. Balakrishnan, and M. Morari. Robust constrained model
predictive control using linear matrix inequalities. Automatica, 32(10):1361–
1379, 1996.
[61] Z. Kowalczuk and P. Suchomski. Continuous-time generalised predictive control of delay systems. IEE proc. Control Theory and Application, 146(1):65–
75, 1999.
[62] Y. Kuang. Delay differential equations with applications in population dynamics. Academic Press, Boston, 1993.
[63] T. Kubo. Guaranteed lqr properties control of uncertain linear systems with
time delay of retarded type. Electrical Engineering in Japan, 152(1):43–49,
2005.
[64] W.H. Kwon and S. Han. Receding Horizon Control: Model Predictive Control
for State Models. Springer, 2005.
[65] W.H. Kwon, S.H. Han, and Y.S. Lee. Receding-horizon predictive control
for nonlinear time-delay systems with and without input constraints. IFAC
symposium on dynamics and control of process systems, pages 277–282, June
2001.
[66] W.H. Kwon, J.W. Kang, Y.S. Lee, and Y.S. Moon. A simple receding horizon
control for state delayed systems and its stability criterion. Journal of Process
Control, 13:539–551, 2003.
[67] W.H. Kwon, J.W. Kang, and Y.S. Moon. Receding horizon control for state
delayed systems with its stability. IEEE Proceedings of the 38th Conference
on Decision Control, pages 1555–1560, December 1999. Phoenix, Arizona
USA.
[68] W.H. Kwon and K.B. Kim. On stabilizing receding horizon controls for
linear continuous time-invariant systems. IEEE Trans. Automat. Contr.,
45(7):1329–1334, 2000.
[69] W.H. Kwon, Y.S. Lee, and S.H. Han. Receding horizon predictive control
for linear time-delay systems. SICE Annual Conference in Fukui, August
2003.
[70] W.H. Kwon, Y.S. Lee, and S.H. Han. General receding horizon control for
linear time-delay systems. Automatica, 40:1603–1611, 2004.
[71] W.H. Kwon, Y.S. Lee, S.H. Han, and C.K. Ahn. Receding horizon predictive control for nonlinear time-delay systems. Private communication with
authors.
[72] W.H. Kwon and A. Pearson. A modified quadratic cost problem and feedback
stabilization of a linear system. IEEE trans. Autom. Control, 22(5):838–842,
1977.
[73] W.H. Kwon and A. Pearson. Feedback stabilization of linear systems with
delayed control. IEEE Trans. Automatic Control, 25(2):266–269, 1980.
[74] W.H. Kwon, L.Y. Sam, and H.S. Hee. Receding horizon predictive control for non-linear time-delay systems. International Conference on Control,
Automation and Systems, pages 107–111, October 2001.
[75] A.Y. Lee. Numerical solution of time-delayed optimal control problems with
terminal inequality constraints. Optimal Control Applications Methods,
14:203–210, 1993.
[76] J.W. Lee, W.H. Kwon, and J.H. Choi. On stability of constrained receding
horizon control with finite terminal weighting matrix. Automatica, 34:1607–
1612, 1998.
[77] J.X. Li and Y. Kuang. Analysis of a model of the glucose-insulin regulatory
system with two delays. submitted to SIAM, 2007.
[78] J.X. Li, Y. Kuang, and C.C. Mason. Modeling the glucose-insulin regulatory
system and ultradian insulin secretory oscillations with two explicit time
delays. Journal of Theoretical Biology, 242:722–735, 2006.
[79] M.C. Lu and H. Michalska. Adaptive receding horizon control of time delayed
systems. Automatica, 2008. to be submitted.
[80] Y. Lu and Y. Arkun. Quasimin-max mpc algorithms for lpv systems. Automatica, 36:527–540, 2000.
[81] T. Luzyanina, D. Roose, and G. Bocharov. Numerical bifurcation analysis
of immunological models with time delays. Journal of Computational and
Applied Mathematics, 184:165–176, 2005.
[82] N. MacDonald. Biological delay systems: linear stability theory. Cambridge
University Press, Cambridge, 1989.
[83] J.M. Maciejowski. Predictive Control with Constraints. Prentice Hall, Harlow, England, 2002.
[84] M. Malek-Zavarei. Suboptimal control of systems with multiple delays. Journal of Optimization Theory and Applications, 30(4):621–633, April 1980.
[85] M. Malek-Zavarei and M. Jamshidi. Time-delay systems analysis, optimization and applications. North-Holland, Amsterdam, 1987.
[86] S. Mascolo. Congestion control in high speed communication networks using
the smith principle. Automatica, 35:1921–1935, 1999.
[87] D.Q. Mayne, J.B. Rawlings, C.V. Rao, and P.O.M. Scokaert. Constrained
model predictive control: stability and optimality. Automatica, 36(6):789–
814, 2000.
[88] H. Michalska. Receding horizon stabilizing control without terminal constraint on the state. Proceeding of the 35th Conference on Decision and
Control, pages 2608–2613, Kobe, Japan, December 1996.
[89] H. Michalska and M.C. Lu. Delay identification in linear differential difference systems. WSEAS International Conference on Automatic Control,
Modeling and Simulation, pages 297–304, March 2006.
[90] H. Michalska and M.C. Lu. Delay identification in nonlinear differential
difference systems. IEEE 45th Conference on Decision and Control, pages
2553–2558, December 2006. San Diego, U.S.A.
[91] H. Michalska and M.C. Lu. Design of the receding horizon stabilizing feedback control for systems with multiple time delays. WSEAS transactions on
Systems, 5(10):2277–2284, October 2006.
[92] H. Michalska and M.C. Lu. Gradient and generalized newton algorithms for
delay identification in linear hereditary systems. WSEAS transactions on
Systems, 5(5):905–912, May 2006.
[93] H. Michalska and M.C. Lu. Receding horizon control of differential difference
systems with multiple delay parameters. WSEAS International Conference
on Systems, pages 277–282, July 2006.
[94] H. Michalska and M.C. Lu. Delay identification in nonlinear time delayed
systems. IEEE Transactions on Automatic Control, 2008. Submitted.
[95] K.A. Murphy. Estimation of time- and state-dependent delays and other
parameters in functional differential equations. SIAM J. Appl. Math.,
50(4):972–1000, August 1990.
[96] S. Nakagiri and M. Yamamoto. Identifiability of linear retarded systems in
banach spaces. Funkcialaj Ekvacioj, 31:315–329, 1988.
[97] S. Nakagiri and M. Yamamoto. Unique identification of coefficient matrices,
time delays and initial functions of functional differential equations. Journal
of Mathematical Systems, Estimation, and Control, 5(3):323–344, 1995.
[98] S. Niculescu, editor. Delay effects on stability: a robust control approach.
Springer-Verlag, Berlin, 2001.
[99] G. Niemeyer and J.J. Slotine. Towards force-reflecting teleoperation over the
internet. IEEE Conference on robotics and automation, pages 1909–1915,
Leuven, Belgium, May 1998.
[100] J. Nilsson, B. Bernhardsson, and B. Wittenmark. Stochastic analysis and
control of real-time systems with random delays. Automatica, 34(1):57–64,
1998.
[101] J.E. Normey-Rico and E.F. Camacho. Multivariable generalised predictive
controller based on the smith predictor. IEE Proc. Control Theory and
Application, 5:538–546, 2000.
[102] J.E. Normey-Rico, J. Gomez-Ortega, and E.F. Camacho. A smith-predictorbased generalised predictive controller for mobile robot path-tracking. Control Engineering Practice, 7:729–740, 1999.
[103] S. Oblak and I. Skrjanc. Continuous-time nonlinear model predictive control of time-delayed wiener-type systems. Proceeding of the 25th IASTED
International Conference Modelling, Identification, and Control, pages 1–6,
February 2006. Lanzarote, Canary Islands, Spain.
[104] Y. Orlov, L. Belkoura, J.P. Richard, and M. Dambrine. Identifiability analysis of linear time-delay systems. IEEE Conference on Decision and Control,
December 2001.
[105] Y. Orlov, L. Belkoura, J.P. Richard, and M. Dambrine. On identifiability
of linear time-delay systems. IEEE Transactions on Automatic Control,
47(8):1319–1324, August 2002.
[106] Y. Orlov, L. Belkoura, J.P. Richard, and M. Dambrine. On-line parameter
identification of linear time-delay systems. IEEE Conference on Decision
and Control, December 2002.
[107] Y. Orlov, L. Belkoura, J.P. Richard, and M. Dambrine. Adaptive identification of linear time-delay systems. International Journal of Robust and
Nonlinear Control, 13:857–872, 2003.
[108] P.G. Park and S.C. Jeong. Constrained mpc for lpv systems with bounded
rates of parameter variations. Automatica, 40:865–872, 2004.
[109] A.P. Rao and L. Sivakumar. Identification of deterministic time-lag systems.
IEEE transaction on Automatic Control, pages 527–529, August 1976.
[110] J.B. Rawlings and K.R. Muske. The stability of constrained receding horizon
control. IEEE Trans. Autom. Control, 38(10):1512–1516, 1993.
[111] B.S. Razumikhin. Application of liapunov’s method to problems in the stability of systems with a delay. [Russian] Automat. i Telemeh., 21:740–749,
1960.
[112] X.M. Ren, A.B. Rad, P.T. Chan, and W.L. Lo. Online identification of
continuous-time systems with unknown time delay. IEEE transaction on
Automatic Control, 50(9):1418–1422, 2005.
[113] J. Richalet, A. Rault, J.L. Testud, and J. Papon. Model predictive heuristic
control: Applications to industrial processes. Automatica, 14:413–428, 1978.
[114] J.P. Richard. Time delay systems: an overview of some recent advances and
open problems. Automatica, 39(9):1667–1694, 2003.
[115] J.P. Richard, A. Goubet, P.A. Tchangani, and M. Dambrine. Nonlinear
delay systems: Tools for a quantitative approach to stabilization, volume 28
of Lecture notes in control and information sciences. Springer, London, 1997.
pp. 218-240.
[116] J.A. Rossiter. Model-Based Predictive Control: A Practical Approach. CRC
Press, Boca Raton, 2003.
[117] A.E. Rundell, R. DeCarlo, V. Balakrishnan, and H. HogenEsch. Systematic
method for determining intravenous drug treatment strategies aiding the
humoral immune response. IEEE Trans. on Biom. Engineering, 45(4):429–
439, 1998.
[118] S. Shakkottai, R.T. Srikant, and S. Meyn. Boundedness of utility function
based congestion controllers in the presence of delay. IEEE 40th Conference
on decision and control, pages 616–621, Orlando, FL, December 2001.
[119] C.L. Su and S.Q. Wang. Robust model predictive control for discrete uncertain nonlinear system with time-delay via fuzzy model. Journal of Zhejiang
University Science A, 7(10):1723–1732, 2006.
[120] K.K. Tan, Q.G. Wang, and T.H. Lee. Finite spectrum assignment control
of unstable time delay processes with relay tuning. Ind. Eng. Chem. Res.,
37:1351–1357, 1998.
[121] I.M. Tolic, E. Mosekilde, and J. Sturis. Modeling the insulin-glucose feedback
system: the significance of pulsatile insulin secretion. Journal of Theor. Biol.,
207:361–375, 2000.
[122] R. Underwood and D. Young. Null controllability of nonlinear functional
differential equations. SIAM Journal of Control and Optimization, 17:753–
772, 1979.
[123] S.M. Verduyn Lunel. Identification problems in functional differential equations. IEEE procedings on the 36th Conference on Decision Control, pages
4409–4413, December 1997.
[124] S.M. Verduyn Lunel. Parameter identifiability of differential delay equations.
Technical Report 1-26, Mathematisch Instituut, Universiteit Leiden, P.O.
Box 9512, 2300 RA, The Netherlands, 2000. www.math.leidenuniv.nl/report.
[125] M. Vidyasagar. Nonlinear Systems Analysis. Prentice Hall, Englewood Cliffs,
2nd edition, 1993.
[126] Q.G. Wang, M. Liu, and C.C. Hang. One-stage identification of continuous
time delay systems with unknown initial conditions and disturbance from
pulse tests. Modern Physics Letters B, 19:1695–1698, 2005.
[127] M. Yamamoto and S. Nakagiri. Identifiability of operators for evolution equations in banach spaces with an application to transport equations. Journal
Math. Appl., 186:161–181, 1994.
[128] Z.J. Yang, H. Iemura, S. Kanae, and K. Wada. Identification of continuoustime systems with multiple unknown time delays by global nonlinear leastsquares and instrumental variable methods. Automatica, 43:1257–1264, 2007.
[129] D.L. Yu and J.B. Gomm. Implementation of neural network predictive
control to a multivariable chemical reactor. Control Engineering Practice,
11:1315–1323, 2003.
[130] E. Zeidler. Nonlinear Functional Analysis and its Applications, volume Vol.
1, Fixed Point Theorems. Springer Verlag, New York, 1986.
[131] H. Zhang and L. Xie. Control and Estimation of Systems with Input-Output
Delays, volume 355. Springer-Verlag, Berlin Hiedelberg, 2007.
[132] F. Zheng, Q.G. Wang, and T.H. Lee. Adaptive robust control of uncertain
time delay systems. Automatica, 41:1375–1383, 2005.