
# agrawal2004

Nonlinear Dynamics 38: 323–337, 2004.
© 2004 Kluwer Academic Publishers. Printed in the Netherlands.
A General Formulation and Solution Scheme for Fractional Optimal
Control Problems
OM PRAKASH AGRAWAL
Department of Mechanical Engineering, Southern Illinois University, Carbondale, IL 62901, U.S.A. (e-mail: [email protected];
fax: +1-618-453-7658)
(Received: 16 November 2003; accepted: 9 April 2004)
Abstract. Accurate modeling of many dynamic systems leads to a set of Fractional Differential Equations (FDEs). This paper
presents a general formulation and a solution scheme for a class of Fractional Optimal Control Problems (FOCPs) for those
systems. The fractional derivative is described in the Riemann–Liouville sense. The performance index of a FOCP is considered
as a function of both the state and the control variables, and the dynamic constraints are expressed by a set of FDEs. The Calculus of
Variations, the Lagrange multiplier, and the formula for fractional integration by parts are used to obtain Euler–Lagrange equations
for the FOCP. The formulation presented and the resulting equations are very similar to those that appear in the classical optimal
control theory. Thus, the present formulation essentially extends the classical control theory to fractional dynamic systems. The
formulation is used to derive the control equations for a linear quadratic fractional control problem. An approach similar to a
variational virtual work coupled with the Lagrange multiplier technique is presented to find the approximate numerical solution
of the resulting equations. Numerical solutions for two fractional systems, a time-invariant and a time-varying, are presented to
demonstrate the feasibility of the method. It is shown that (1) the solutions converge as the number of approximating terms increases,
and (2) the solutions approach the classical solutions as the order of the fractional derivatives approaches 1. The formulation
presented is simple and can be extended to other FOCPs. It is hoped that the simplicity of this formulation will initiate a new
interest in the area of optimal control of fractional systems.
1. Introduction
This paper presents a new formulation and a new solution scheme for a class of fractional optimal
control problems for fractional dynamic systems. A Fractional Dynamic System (FDS) is a system
whose dynamics is described by Fractional Differential Equations (FDEs), and a Fractional Optimal
Control Problem (FOCP) is an optimal control problem for a FDS.
Considerable work has been done in the area of optimal control, and excellent textbooks exist on the
subject (see, e.g., [1–3]). Because of its importance, several journals are devoted solely to this field, and
many other journals regularly publish articles dealing with optimal control problems.
However, the optimal control problems considered in these books and journal articles largely
deal with systems whose dynamics are described by integral-order differential equations.
Recent investigations in engineering, science, and other fields have demonstrated that the dynamics of
many systems are described more accurately using FDEs (see, e.g. [4–8] and the papers and references
therein). As Miller and Ross [9] point out, there is hardly a field of science or engineering that has
remained untouched by this field. However, very little work has been done in the area of FOCPs. As
the demand for efficient, accurate, and high precision systems grows, the demand for optimal control
theories, and the analytical and numerical schemes to solve the resulting equations will also grow.
The formulation, numerical scheme, and numerical results for some FOCPs presented in this paper are
attempts to fill this gap.
Fractional derivatives, or more precisely derivatives of arbitrary orders, have played a significant role
in engineering, science, and mathematics in recent years. Samko et al. [10] provide an encyclopedic
treatment of this subject. Additional background, surveys, and applications of this field in science, engineering, and mathematics can be found in [11, 9, 12–14, 6, 7].
Only limited work has been done in the area of fractional control, or more specifically, in the area
of Fractional Optimal Control (FOC). Early work in the area of FOC is documented in . Bode 
suggested the idea of using FOC to ensure amplifier stability, but he did not implement the idea. Manabe
 introduced fractional differentiation in feedback systems with saturation. Other works by Manabe
in this field appear in . More recently, Skaar et al.  presented a root locus method to assess the
stability of a fractionally controlled distributed viscoelastic structure whose constitutive law is modeled
using fractional-order derivatives. They found stability criteria for the system that are similar to those
for integral-order control systems. Axtell and Bise [19] explored s-domain analysis of a fractional-order
system. Bagley and Calico [4] presented a state space formulation to predict the effects of feedback
to reduce motion of a fractionally damped system. These authors conclude that fractional-order
time derivative control improves the performance of a system exhibiting strong hereditary behavior.
Makroglou et al. [20] presented a time domain method to assess the performance of a fractionally
damped rotating beam and compared the results with a Kelvin–Voigt constitutive model.
Mbodje and Montseny [21] investigated the existence, uniqueness, and asymptotic decay of the wave
equation with fractional derivative feedback, and showed that the method developed can easily be
adapted to a wide class of problems involving fractional derivative or integral operators of the time
variable. Machado [22, 23] developed algorithms for fractional-order discrete-time controllers that are
suited for z-transform analysis and discrete-time implementation, and showed that the classical P, I, and
D actions are special cases of the new fractional control scheme. Podlubny, Dorcak, and Kostial [24]
compared the Letnikov–Riemann–Liouville and Caputo fractional derivatives from the point of view of
their applications in a generalization of the PID-controller called the PI^λD^µ controller. Oustaloup and
coworkers have applied fractional derivatives in system identification, robust control, and other fields
(see, e.g., [25, 26] and references therein). Podlubny [6] presents a complete chapter on fractional-order
systems and controllers and shows that PI-, PD-, and PID-controllers are particular cases of the PI^λD^µ
controller. Hotzel [27] investigated the stability conditions for fractional delay systems and proved that
for stability, it is sufficient that the real parts of the transfer poles have a negative upper bound, and
it is necessary that the real part of every transfer pole is negative. Hartley and Lorenzo [28] present a
general fractional-order system and control theory that includes the time-varying initialization response.
They also present the stability properties of fractional-order systems, and a fractional-order vector space
representation, which is a generalization of the state space concept.
Note that the calculus of variations plays a significant role in the field of classical optimal control. Given
this fact, two recent papers by Riewe [29, 30] must be mentioned here. These papers are important
because they use the fractional calculus of variations to develop Lagrangian and Euler–Lagrange equations, and
other concepts for the mechanics of nonconservative systems. Agrawal [31] extends variational calculus to
fractional variational problems.
From the above and other literature in the field of fractional calculus it is clear that many of the
ideas of the ordinary calculus can be extended to fractional calculus with only minor changes. In
this paper, we present a new formulation and a new numerical scheme for a class of FOCPs, which
can be considered as a direct extension of formulations and numerical schemes for classical optimal
control problems. Our derivation uses Riewe’s results to develop the formulation. However, note that
Riewe develops his formulation for nonconservative mechanics in terms of the left Riemann–Liouville
fractional derivatives. As a result, his formulation includes terms containing fractional powers of (−1).
In contrast, we use both the left and the right fractional derivatives to remove this problem. We further
show that the numerical scheme to solve the fractional optimal control problem converges as the number
of approximating terms is increased, and that the solutions approach the classical optimal control solutions
as the order of the fractional derivative approaches 1.
In contrast to the situation for FOC, many numerical schemes have been developed to solve
classical optimal control problems. Formulations and numerical schemes for classical optimal control
can be found, among others, in [2, 3, 32], and the references listed there. The numerical scheme
for fractional optimal control problems presented here follows the approach presented in Agrawal [32]
for optimal control problems containing derivatives of integral orders only. Note that optimal
control problems inherently lead to two-point boundary value problems. Lorenzo and Hartley [33] have
discussed the problem of finding the correct form of the initial conditions in a more general setting.
Diethelm et al. [34] present a predictor-corrector type algorithm for fractional differential equations and
cite several references for the same. Following these developments, a shooting-method type algorithm
may be possible for FOC. However, this will be considered in the future.
2. Euler–Lagrange equations for FOCPs
In this section, we first define a fractional derivative, and then formulate a FOCP and find the necessary
conditions for optimality.
Several definitions of a fractional derivative have been proposed. These definitions include Riemann–
Liouville, Grünwald–Letnikov, Weyl, Caputo, Marchaud, and Riesz fractional derivatives [11, 9, 6, 35].
Here, we formulate the problem in terms of the Left and the Right Riemann–Liouville fractional
derivatives, which are defined as follows.

The Left Riemann–Liouville Fractional Derivative:

$$ {}_aD_t^{\alpha} f(t) = \frac{1}{\Gamma(n-\alpha)} \left( \frac{d}{dt} \right)^{n} \int_a^t (t-\tau)^{n-\alpha-1} f(\tau)\, d\tau, \qquad (1) $$

and

The Right Riemann–Liouville Fractional Derivative:

$$ {}_tD_b^{\alpha} f(t) = \frac{1}{\Gamma(n-\alpha)} \left( -\frac{d}{dt} \right)^{n} \int_t^b (\tau - t)^{n-\alpha-1} f(\tau)\, d\tau, \qquad (2) $$
where α is the order of the derivative such that n − 1 ≤ α < n. These derivatives will be denoted as the
LRLFD and RRLFD, respectively. Note that in literature the Riemann–Liouville fractional derivative
generally means the LRLFD.
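As a concrete illustration (an editor's sketch, not part of the original paper), the LRLFD of Equation (1) can be approximated numerically via the Grünwald–Letnikov series, which agrees with the Riemann–Liouville derivative for sufficiently smooth functions; the function name and step count below are illustrative:

```python
import math

def gl_left(f, alpha, t, a=0.0, n=1000):
    """Grunwald-Letnikov approximation of the left Riemann-Liouville
    fractional derivative aD_t^alpha f(t) of order 0 < alpha < 1."""
    h = (t - a) / n
    total, w = 0.0, 1.0               # w holds (-1)^k * binom(alpha, k)
    for k in range(n + 1):
        total += w * f(t - k * h)
        w *= (k - alpha) / (k + 1)    # recurrence for the next weight
    return total / h ** alpha

# Known closed form: 0D_t^{1/2} of f(t) = t is 2*sqrt(t/pi)
approx = gl_left(lambda t: t, 0.5, 1.0)
exact = 2.0 / math.sqrt(math.pi)
```

For f(t) = t and α = 1/2 the approximation agrees closely with the closed form at n = 1000, since the Grünwald–Letnikov sum converges at first order in the step size.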
Using the above definitions, the FOCP under consideration can be defined as follows. Find the optimal
control u(t) for a FDS that minimizes the performance index
$$ J(u) = \int_0^1 F(x, u, t)\, dt \qquad (3) $$
subject to the system dynamic constraints
$$ {}_0D_t^{\alpha} x = G(x, u, t), \qquad (4) $$
and the initial condition
$$ x(0) = x_0, \qquad (5) $$
where x(t) is the state variable, t represents the time, and F and G are two arbitrary functions. Note
that Equation (3) may also include additional terms containing the state variables at the end point.
Such a term is not considered here for simplicity. When α = 1, the above problem reduces to a standard
optimal control problem. Here the limits of integration have been taken as 0 and 1. Furthermore, we
consider 0 < α < 1. These are not limitations of the approach; any limits can be considered and
the derivative can be of any order. However, these conditions are taken here for simplicity.
To find the optimal control we follow the traditional approach and define a modified performance
index as
$$ \bar{J}(u) = \int_0^1 \left[ F(x, u, t) + \lambda \left( G(x, u, t) - {}_0D_t^{\alpha} x \right) \right] dt, \qquad (6) $$
where λ is the Lagrange multiplier, also known as the costate or adjoint variable. Taking the variation of
Equation (6), we obtain
$$ \delta \bar{J}(u) = \int_0^1 \left[ \frac{\partial F}{\partial x}\,\delta x + \frac{\partial F}{\partial u}\,\delta u + \delta\lambda \left( G(x, u, t) - {}_0D_t^{\alpha} x \right) + \lambda \left( \frac{\partial G}{\partial x}\,\delta x + \frac{\partial G}{\partial u}\,\delta u - \delta\, {}_0D_t^{\alpha} x \right) \right] dt, \qquad (7) $$
where δx, δu, and δλ are the variations of x, u, and λ, consistent with the specified terminal conditions.
Riewe [30] has demonstrated that for ν > 0 the following identity is satisfied:
$$ \int_a^b \frac{d^{\nu} f(t)}{d(t-a)^{\nu}}\, g(t)\, dt = (-1)^{-\nu} \int_a^b f(t)\, \frac{d^{\nu} g(t)}{d(t-b)^{\nu}}\, dt \qquad (8) $$
provided that d k f (t)/dt k = 0 or d k g(t)/dt k = 0 at t = a and t = b for k = 0 to n − 1, where
$$ \frac{d^{\nu} f(t)}{d(t-a)^{\nu}} = {}_aD_t^{\nu} f(t) \qquad (9) $$
is the LRLFD of f (t) of order ν, and n is the smallest integer greater than ν. In terms of our notations,
Equation (8) is written as
$$ \int_a^b \left( {}_aD_t^{\alpha} f(t) \right) g(t)\, dt = \int_a^b f(t) \left( {}_tD_b^{\alpha} g(t) \right) dt. \qquad (10) $$
Equation (10) – called the formula for fractional integration by parts – can also be found in [10], which
lists additional requirements that the functions f(t) and g(t) must satisfy. Using Equation (10), the last
integral in Equation (7) can be written as
$$ \int_0^1 \lambda\, \delta\!\left( {}_0D_t^{\alpha} x \right) dt = \int_0^1 \delta x \left( {}_tD_1^{\alpha} \lambda \right) dt \qquad (11) $$
provided δx(0) = 0 or λ(0) = 0, and δx(1) = 0 or λ(1) = 0. Because x(0) is specified, we have
δx(0) = 0, and since x(1) is not specified, we require λ(1) to be zero. With these assumptions, the
identity in Equation (11) is satisfied. Note that we have assumed that the order of variation and the
fractional derivative can be interchanged.
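The integration-by-parts identity (10) can also be checked numerically; the following sketch (an editor's illustration, not from the paper) uses a Grünwald–Letnikov approximation for both the left and right derivatives and a midpoint rule, with g chosen to vanish at both end points as the identity requires:

```python
def gl_frac(f, alpha, t, left=True, n=600):
    """Grunwald-Letnikov estimate of 0D_t^alpha f(t) (left=True) or
    tD_1^alpha f(t) (left=False) on the interval [0, 1]."""
    h = (t if left else 1.0 - t) / n
    total, w = 0.0, 1.0               # w holds (-1)^k * binom(alpha, k)
    for k in range(n + 1):
        total += w * f(t - k * h if left else t + k * h)
        w *= (k - alpha) / (k + 1)
    return total / h ** alpha

alpha = 0.5
f = lambda t: t                       # f need not vanish at the ends here
g = lambda t: t * (1.0 - t)           # but g(0) = g(1) = 0, as required
n = 200
ts = [(i + 0.5) / n for i in range(n)]            # midpoint rule on (0, 1)
lhs = sum(gl_frac(f, alpha, t, True) * g(t) for t in ts) / n
rhs = sum(f(t) * gl_frac(g, alpha, t, False) for t in ts) / n
# lhs and rhs agree, as Equation (10) predicts
```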
Using Equations (7) and (11), we obtain
$$ \delta \bar{J}(u) = \int_0^1 \left[ \delta\lambda \left( G(x, u, t) - {}_0D_t^{\alpha} x \right) + \delta x \left( \frac{\partial F}{\partial x} + \lambda \frac{\partial G}{\partial x} - {}_tD_1^{\alpha} \lambda \right) + \delta u \left( \frac{\partial F}{\partial u} + \lambda \frac{\partial G}{\partial u} \right) \right] dt. \qquad (12) $$
Minimization of J̄ (u) (and hence minimization of J (u)) requires that the coefficients of δλ, δx, and δu
in Equation (12) be zero. This leads to
$$ {}_0D_t^{\alpha} x = G(x, u, t), \qquad (13) $$

$$ {}_tD_1^{\alpha} \lambda = \frac{\partial F}{\partial x} + \lambda \frac{\partial G}{\partial x}, \qquad (14) $$

$$ \frac{\partial F}{\partial u} + \lambda \frac{\partial G}{\partial u} = 0, \qquad (15) $$

and

$$ x(0) = x_0 \quad \text{and} \quad \lambda(1) = 0. \qquad (16) $$
Equations (13) to (15) represent the Euler–Lagrange equations for the FOCP. These equations give the
necessary conditions for the optimality of the FOCP considered here. They are very similar to the Euler–Lagrange equations for classical optimal control problems, except that the resulting differential equations
contain both the left and the right fractional derivatives. Furthermore, the derivation of these equations is
very similar to the derivation for an optimal control problem containing integral-order derivatives.
Determination of the optimal control for the fractional system requires solution of Equations (13) to
(16).
Observe that Equation (13) contains the LRLFD, whereas Equation (14) contains the RRLFD. This clearly
indicates that the solution of optimal control problems requires knowledge of not only the forward derivatives but also the backward derivatives in order to account for the end conditions. In classical optimal
control theories, this issue is either not discussed or not clearly stated, largely because the backward
derivative of order 1 turns out to be the negative of the forward derivative of order 1. For α = 1, ${}_0D_t^{\alpha} x$
and ${}_tD_1^{\alpha} \lambda$ are written as $dx/dt$ and $-d\lambda/dt$, and Equations (13) and (14) reduce to

$$ \frac{dx}{dt} = G(x, u, t), $$

and

$$ \frac{d\lambda}{dt} + \frac{\partial F}{\partial x} + \lambda \frac{\partial G}{\partial x} = 0, $$
which are the same as those obtained using classical optimal control theories [2, 3].
As a special case, assume that the performance index is an integral of quadratic forms in the state
and the control,
$$ J(u) = \frac{1}{2} \int_0^1 \left[ q(t)\, x^2(t) + r(t)\, u^2(t) \right] dt, \qquad (17) $$
where q(t) ≥ 0 and r (t) > 0, and the dynamics of the system is described by the following linear
fractional differential equation,
$$ {}_0D_t^{\alpha} x = a(t)\, x + b(t)\, u. \qquad (18) $$
This linear system for α = 1 has been studied extensively, and formulations and solution schemes for
this system are well documented in many textbooks and journal articles (see, e.g., [2, 3]). For 0 < α < 1,
the Euler–Lagrange Equations (13) to (15) lead to Equation (18) and
$$ {}_tD_1^{\alpha} \lambda = q(t)\, x + a(t)\, \lambda, \qquad (19) $$
and
$$ r(t)\, u + b(t)\, \lambda = 0. \qquad (20) $$
From Equations (18) and (20), we get
$$ {}_0D_t^{\alpha} x = a(t)\, x - r^{-1}(t)\, b^2(t)\, \lambda. \qquad (21) $$
The state x(t) and the costate λ(t) are obtained by solving the fractional differential equations (19) and
(21) subject to the terminal conditions given by Equation (16). Once λ(t) is known, the control variable
u(t) can be obtained using Equation (20).
An approximate numerical method to find the state x(t) and the costate λ(t) is presented next.
3. Numerical Scheme to Solve the FOCPs
Solution of a FOCP associated with a linear FDS with a quadratic performance index requires solution of
Equations (19) and (21) subject to the terminal conditions given by Equation (16). Equations (16), (19),
and (21) constitute a two-point Fractional Boundary Value Problem (FBVP). Several methods have
been presented to solve this class of problems for α = 1; for example, see [2, 3, 32] and the references
therein. Many of the techniques cited there can be extended to the fractional optimal control problem
formulated here. In this paper, we use the formulation of [32] for this task.
To find an approximate solution of Equations (16), (19), and (21), assume that δx and δλ are arbitrary
virtual variations of x and λ as defined above, except that they need not be consistent with the terminal
conditions. Using an approach analogous to a variational virtual work approach and the Lagrange multiplier
technique, Equations (16), (19), and (21) can be restated as [32]

$$ \int_0^1 \left[ \delta x \left( {}_0D_t^{\alpha} x - a(t)\, x + r^{-1}(t)\, b^2(t)\, \lambda \right) + \delta\lambda \left( {}_tD_1^{\alpha} \lambda - q(t)\, x - a(t)\, \lambda \right) \right] dt + \delta[\mu_1 (x(0) - x_0)] + \delta[\mu_2\, \lambda(1)] = 0, \qquad (22) $$
where µ1 and µ2 are the Lagrange multipliers associated with the terminal conditions. The last two terms
in Equation (22) ensure that the terminal conditions are satisfied. Equation (22) can also be obtained by
eliminating u(t) from Equations (12) and (20), and adding the variations of the terminal conditions to the
resulting equation. A more general formulation can be obtained by applying weighting coefficients
to each of the variations. Details of this approach for α = 1 can be found in [32].
Equation (22) is the desired variational formulation for numerical solution of the FOCP. For numerical
solution, x, λ, δx, and δλ can be approximated using a set of basis functions, and Equation (22) can
be used to convert the two point FBVP given by Equations (15), (19) and (21) to a set of algebraic
equations. Solution of the algebraic equations will give the coefficients, which can then be substituted
back into the approximating functions to find the desired solution. Since δx and δλ are arbitrary, different
basis functions can be selected for x and λ, and δx, and δλ. Note that approximating functions for x
and λ need not satisfy the terminal conditions a priori. The last two terms in Equation (22) enforce the
terminal conditions.
In this paper, we approximate x, λ, δx, and δλ as

$$ x(t) = \sum_{j=1}^{m} c_j P_j(t), \qquad (23) $$

$$ \lambda(t) = \sum_{j=1}^{m} d_j P_j(t), \qquad (24) $$

$$ \delta x(t) = \sum_{j=1}^{m} \delta c_j P_j(t), \qquad (25) $$

and

$$ \delta\lambda(t) = \sum_{j=1}^{m} \delta d_j P_j(t), \qquad (26) $$
where P_j(t), j = 1, . . . , m, are the shifted Legendre polynomials, which satisfy the following orthonormality condition,

$$ \int_0^1 P_j(t)\, P_k(t)\, dt = \delta_{jk} = \begin{cases} 0, & j \neq k \\ 1, & j = k \end{cases} \qquad (27) $$
c_j and d_j, j = 1, . . . , m, are the polynomial coefficients, m is the number of polynomials selected, and
δc_j and δd_j are the variations of the coefficients c_j and d_j. Here δ_{jk} is the Kronecker delta. The
number of polynomials should be selected such that a certain error is small in some sense. For example,
as the number of polynomials changes, the change in the value of the performance index, or the change
in the state variable measured in some norm, could be taken as a measure of the error. Note that it
is not necessary to select orthonormal polynomials as the basis functions. Orthonormal polynomials are
selected here because they lead to numerically stable sparse matrices, and in many cases the properties of
the polynomials can be used to generate the desired matrices efficiently. Nor is it necessary to select the
shifted Legendre orthonormal polynomials; other orthonormal polynomials can also be selected
for this task. However, this may require some modifications in the formulation so that one can take
advantage of the properties of the chosen orthonormal polynomials. This issue for α = 1 is discussed in [32].
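As an illustration (ours, not the paper's), the orthonormal shifted Legendre basis of Equation (27) can be built from the standard Legendre polynomials as P_j(t) = sqrt(2j − 1)·L_{j−1}(2t − 1), and the orthonormality condition verified by Gauss quadrature:

```python
import numpy as np
from numpy.polynomial import legendre as leg

def P(j, t):
    """Orthonormal shifted Legendre polynomial on [0, 1], j = 1, 2, ..."""
    c = np.zeros(j)
    c[j - 1] = 1.0                    # select the (j-1)-th Legendre polynomial
    return np.sqrt(2 * j - 1) * leg.legval(2.0 * np.asarray(t) - 1.0, c)

# Verify Equation (27): the Gram matrix of the first few P_j is the identity
nodes, weights = leg.leggauss(20)
ts, ws = 0.5 * (nodes + 1.0), 0.5 * weights   # map the rule from [-1, 1] to [0, 1]
m = 5
gram = np.array([[np.dot(ws, P(j, ts) * P(k, ts))
                  for k in range(1, m + 1)] for j in range(1, m + 1)])
```

The 20-point Gauss rule integrates these polynomial products exactly, so `gram` equals the identity to machine precision.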
Substituting Equations (23) and (24) into Equation (17), the performance index may be written as

$$ J = \frac{1}{2} \sum_{j=1}^{m} \sum_{k=1}^{m} \left[ F_{0x}(j,k)\, c_j c_k + F_{0u}(j,k)\, d_j d_k \right], \qquad (28) $$
where F_{0x} and F_{0u} are defined as

$$ F_{0x}(j,k) = \int_0^1 q(t)\, P_j(t)\, P_k(t)\, dt \qquad (29) $$

and

$$ F_{0u}(j,k) = \int_0^1 r(t)\, P_j(t)\, P_k(t)\, dt. \qquad (30) $$
Substituting Equations (23) to (27) into Equation (22), and setting the coefficients of δμ_1, δμ_2, δc_j, and
δd_j, j = 1, . . . , m, to zero, we obtain

$$ \sum_{k=1}^{m} \left[ F_1(j,k) - F_2(j,k) \right] c_k + \sum_{k=1}^{m} F_3(j,k)\, d_k + P_j(0)\, \mu_1 = 0, \qquad j = 1, \ldots, m, \qquad (31) $$

$$ -\sum_{k=1}^{m} F_{0x}(j,k)\, c_k + \sum_{k=1}^{m} \left[ F_4(j,k) - F_2(j,k) \right] d_k + P_j(1)\, \mu_2 = 0, \qquad j = 1, \ldots, m, \qquad (32) $$

$$ \sum_{k=1}^{m} P_k(0)\, c_k = x_0, \qquad (33) $$

and

$$ \sum_{k=1}^{m} P_k(1)\, d_k = 0, \qquad (34) $$
where F_1(j,k) through F_4(j,k) are defined as

$$ F_1(j,k) = \int_0^1 P_j(t) \left( {}_0D_t^{\alpha} P_k(t) \right) dt, \qquad (35) $$

$$ F_2(j,k) = \int_0^1 a(t)\, P_j(t)\, P_k(t)\, dt, \qquad (36) $$

$$ F_3(j,k) = \int_0^1 r^{-1}(t)\, b^2(t)\, P_j(t)\, P_k(t)\, dt, \qquad (37) $$

and

$$ F_4(j,k) = \int_0^1 P_j(t) \left( {}_tD_1^{\alpha} P_k(t) \right) dt. \qquad (38) $$
Note that Equations (35) and (38) contain the fractional derivatives of the basis functions. For Legendre polynomials, these derivatives can be obtained in closed form. For other basis functions, numerical
integration may be necessary.
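For instance, F_1(j, k) of Equation (35) can be evaluated by combining a Grünwald–Letnikov approximation of the fractional derivative with Gauss quadrature; the sketch below (an editor's illustration, with our own naming) checks the α = 1 limit, where 0D_t^1 reduces to d/dt:

```python
import numpy as np
from numpy.polynomial import legendre as leg

def P(j, t):
    """Orthonormal shifted Legendre polynomial on [0, 1]."""
    c = np.zeros(j); c[j - 1] = 1.0
    return np.sqrt(2 * j - 1) * leg.legval(2.0 * np.asarray(t) - 1.0, c)

def gl_left(f, alpha, t, n=2000):
    """Grunwald-Letnikov estimate of 0D_t^alpha f(t)."""
    h = t / n
    total, w = 0.0, 1.0
    for k in range(n + 1):
        total += w * f(t - k * h)
        w *= (k - alpha) / (k + 1)
    return total / h ** alpha

def F1(j, k, alpha, nq=20):
    """Equation (35), evaluated by Gauss quadrature on [0, 1]."""
    nodes, weights = leg.leggauss(nq)
    ts, ws = 0.5 * (nodes + 1.0), 0.5 * weights
    return sum(w * P(j, t) * gl_left(lambda s: P(k, s), alpha, t)
               for t, w in zip(ts, ws))

# Sanity check at alpha = 1: F1(1, 2) = int_0^1 P_1 * P_2' dt = 2*sqrt(3)
val = F1(1, 2, 1.0)
```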
Equations (31) to (34) provide a set of (2m + 2) linear equations in (2m + 2) unknowns, which can
be solved using a standard subroutine. Once the unknowns c_j and d_j, j = 1, . . . , m, are known, the
state and the control variables are obtained using Equations (20), (23), and (24). An approach similar to
the one presented here has been used in conjunction with Hamilton's law of varying action to find the
response of a dynamic system whose dynamics is described using integral-derivative terms only [36].
Thus, the present numerical scheme can be extended to fractional differential equations. This will be
considered in the future.
Applications of the formulations presented in Sections 2 and 3 are presented next.
4. Numerical Examples
To demonstrate the applicability of the formulation and to validate the numerical scheme, we derive
the differential equations and present numerical results for two FOCPs, one time-invariant and the other
time-varying. We also demonstrate that the solutions converge as the number of basis polynomials
is increased, and that the solutions for the fractional control problems approach the solutions for the
standard control problems as the order of the fractional derivative (i.e. α) approaches 1.
4.1. TIME INVARIANT FOCP
As a first example, consider the following time invariant FOCP: Find the control u(t) which minimizes
the quadratic performance index
$$ J(u) = \frac{1}{2} \int_0^1 \left[ x^2(t) + u^2(t) \right] dt \qquad (39) $$
subject to the system dynamics
$$ {}_0D_t^{\alpha} x = -x + u, \qquad (40) $$
and the initial condition
$$ x(0) = 1. \qquad (41) $$
Note that in this example,
$$ q(t) = r(t) = -a(t) = b(t) = x_0 = 1, \qquad (42) $$
and Equations (19) and (20) are given as
$$ {}_tD_1^{\alpha} \lambda = x - \lambda, \qquad (43) $$
and
$$ u + \lambda = 0, \qquad (44) $$
and thus the optimal control function u(t) is the negative of the costate variable λ(t). Analytical and
numerical results for this example for α = 1 can be found in [2, 3, 32] and references therein. Substituting
Equation (42) into Equations (29), (30), (36), and (37), and using Equation (27), we obtain
$$ F_{0x}(j,k) = F_{0u}(j,k) = -F_2(j,k) = F_3(j,k) = \delta_{jk}. \qquad (45) $$
Figure 1. Convergence of the state variable for the time-invariant system for α = 3/4 (: m = 4; ×: m = 6; o: m = 8; +:
m = 10).
Figure 2. Convergence of the control variable for the time-invariant system for α = 3/4 (: m = 4; ×: m = 6; o: m = 8; +:
m = 10).
The problem is solved for different values of m and α. Figures 1 and 2 show the state and the control
variables, respectively, as a function of time for α = 3/4 for different values of m. From these figures it is
clear that both the state and the control variables converge as the number of approximating polynomials
is increased. Figures 3 and 4 show the state and the control variables, respectively, as a function of time
for m = 10 for different values of α. These figures show that as α approaches 1, the numerical
solutions for both the state and the control variables approach the analytical solutions for α = 1, as
expected. In Figures 3 and 4, only the numerical results for α = 1 are presented because, for
α = 1, the analytical and the numerical results overlap. Furthermore, for α = 1, a comparison of analytical
and numerical results appears in [32]. Note that in these figures the amplitudes of both x(t) and u(t) decrease
as α is decreased. For α = 0, Equation (40) essentially represents a linear algebraic equation, and in
that case, we obtain the trivial optimal solution x(t) = u(t) = 0 for t > 0.
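To make the scheme concrete, the following compact Python sketch (an editor's illustration, not the author's code) assembles and solves Equations (31) to (34) for this time-invariant example, using Equation (45) and a Grünwald–Letnikov approximation for the fractional derivatives of the basis functions; the discretization parameters are illustrative:

```python
import numpy as np
from numpy.polynomial import legendre as leg

m, alpha, x0 = 8, 0.75, 1.0                  # basis size, order, initial state

def P(j, t):
    """Orthonormal shifted Legendre polynomial on [0, 1]."""
    c = np.zeros(j); c[j - 1] = 1.0
    return np.sqrt(2 * j - 1) * leg.legval(2.0 * np.asarray(t) - 1.0, c)

def gl(f, t, left, n=400):
    """Grunwald-Letnikov estimate of 0D_t^alpha f (left) or tD_1^alpha f (right)."""
    h = (t if left else 1.0 - t) / n
    total, w = 0.0, 1.0
    for k in range(n + 1):
        total += w * f(t - k * h if left else t + k * h)
        w *= (k - alpha) / (k + 1)
    return total / h ** alpha

nodes, weights = leg.leggauss(20)
ts, ws = 0.5 * (nodes + 1.0), 0.5 * weights  # Gauss rule mapped to [0, 1]

def frac_matrix(left):
    """F1 (left=True) or F4 (left=False) from Equations (35) and (38)."""
    M = np.zeros((m, m))
    for k in range(1, m + 1):
        vals = np.array([gl(lambda s: P(k, s), t, left) for t in ts])
        for j in range(1, m + 1):
            M[j - 1, k - 1] = np.dot(ws, P(j, ts) * vals)
    return M

F1, F4 = frac_matrix(True), frac_matrix(False)
I = np.eye(m)                                # F0x = F0u = F3 = I, F2 = -I by Eq. (45)
P0 = np.array([P(k, 0.0) for k in range(1, m + 1)])
P1 = np.array([P(k, 1.0) for k in range(1, m + 1)])

# Assemble Equations (31)-(34); unknowns z = [c_1..c_m, d_1..d_m, mu1, mu2]
A = np.zeros((2 * m + 2, 2 * m + 2)); b = np.zeros(2 * m + 2)
A[:m, :m], A[:m, m:2 * m], A[:m, 2 * m] = F1 + I, I, P0               # Eq. (31)
A[m:2 * m, :m], A[m:2 * m, m:2 * m], A[m:2 * m, 2 * m + 1] = -I, F4 + I, P1  # Eq. (32)
A[2 * m, :m] = P0; b[2 * m] = x0                                      # Eq. (33)
A[2 * m + 1, m:2 * m] = P1                                            # Eq. (34)
z = np.linalg.solve(A, b)
c, d = z[:m], z[m:2 * m]

x = lambda t: c @ np.array([P(k, t) for k in range(1, m + 1)])   # state, Eq. (23)
u = lambda t: -d @ np.array([P(k, t) for k in range(1, m + 1)])  # control = -costate
```

The constraint rows enforce x(0) = 1 and λ(1) = 0 (and hence u(1) = 0) up to solver precision.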
Figure 3. State variable as a function of time for fractional derivative models of different order for the time-invariant system (:
α=1/2; ×: α = 3/4; o: α = 7/8; +: α = 15/16; −: α = 1).
Figure 4. Control variable as a function of time for fractional derivative models of different order for the time-invariant system
(: α = 1/2; ×: α = 3/4; o: α = 7/8; +: α = 15/16; −: α = 1).
4.2. TIME VARYING FOCP
As a second example, consider a linear time-varying system with the same performance index and the
same initial condition as those considered in Section 4.1, except that in this example the system is subject
to the following dynamic constraint,
$$ {}_0D_t^{\alpha} x = t\, x + u. \qquad (46) $$
Figure 5. Convergence of the state variable for the time-varying system for α = 3/4 (: m = 4; ×: m = 6; o: m = 8; +:
m = 10).
For this case,

$$ q(t) = r(t) = b(t) = x_0 = 1, \qquad a(t) = t, \qquad (47) $$

$$ {}_tD_1^{\alpha} \lambda = x + t\lambda, \qquad (48) $$

and

$$ u + \lambda = 0. \qquad (49) $$
This problem for α = 1 has been considered by several investigators in the past (see, e.g., [32] and the
references therein).
For this example also, F_{0x}(j,k), F_{0u}(j,k), and F_3(j,k) are given by Equation (45), and F_2(j,k) is
obtained by replacing a(t) in Equation (36) by t. Using the properties of the Legendre polynomials, it
can be shown that for this example F_2(j,k) leads to a tri-diagonal matrix. Like the previous problem,
this problem is also solved for different values of m and α. Figures 5 and 6 show the state and the
control variables, respectively, as a function of time for α = 3/4 for different values of m. Figures 7
and 8 show the state and the control variables, respectively, as a function of time for m = 10 for
different values of α. From these figures we see that in this example also both the state and the control
variables converge as the number of approximating polynomials is increased (see Figures 5 and 6).
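The tri-diagonal structure of F_2 can be confirmed numerically; this small check (ours, not the paper's) computes F_2(j, k) = ∫_0^1 t P_j P_k dt by Gauss quadrature and verifies that entries with |j − k| > 1 vanish, a consequence of the Legendre three-term recurrence:

```python
import numpy as np
from numpy.polynomial import legendre as leg

def P(j, t):
    """Orthonormal shifted Legendre polynomial on [0, 1]."""
    c = np.zeros(j); c[j - 1] = 1.0
    return np.sqrt(2 * j - 1) * leg.legval(2.0 * np.asarray(t) - 1.0, c)

nodes, weights = leg.leggauss(30)
ts, ws = 0.5 * (nodes + 1.0), 0.5 * weights
m = 6
F2 = np.array([[np.dot(ws, ts * P(j, ts) * P(k, ts))
                for k in range(1, m + 1)] for j in range(1, m + 1)])
# t*P_k is a combination of P_{k-1}, P_k, P_{k+1}, so F2 is tri-diagonal
mask = np.abs(np.subtract.outer(np.arange(m), np.arange(m))) > 1
off_band = F2[mask]
```

The diagonal entry F_2(1, 1) equals ∫_0^1 t dt = 1/2, since P_1(t) = 1.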
It is clear from Figures 3, 4, 7, and 8 that as α decreases, the solutions for x(t) and u(t) oscillate more.
As a result, for smaller α more polynomials are needed for the convergence of the solutions.
As a final remark, we note that very little progress has been made in the field of FOCP. This is largely
due to the fact that, until recently, the underlying mathematics of fractional derivatives was not well
developed. Recent developments in the field have eliminated this barrier. From the formulation and
the numerical examples presented above, it is clear that many of the concepts of classical control
theory can be directly extended to FOCPs. Although only one class of FOCPs was considered here, the
Figure 6. Convergence of the control variable for the time- varying system for α = 3/4 (: m = 4; ×: m = 6; o: m = 8; +:
m = 10).
Figure 7. State variable as a function of time for fractional derivative models of different order for the time-varying system (:
α = 1/2; ×: α = 3/4; o: α = 7/8; +: α = 15/16; −: α = 1).
formulation can be extended to many other FOCPs. It is hoped that this observation will initiate some
interest in the areas of fractional variational calculus and fractional optimal control.
5. Conclusions
A general formulation has been presented for a class of fractional optimal control problems. The
formulation utilized the calculus of variations, the Lagrange multiplier technique, and the formula for
fractional integration by parts to obtain the Euler–Lagrange equations for the fractional optimal control
problems. The formulation presented and the resulting equations are very similar to those for classical
Figure 8. Control variable as a function of time for fractional derivative models of different order for the time-varying system
(: α = 1/2; ×: α = 3/4; o: α = 7/8; +: α = 15/16; −: α = 1).
optimal control problems. The formulation is specialized for a system with a quadratic performance index
subject to a fractional system dynamic constraint. Two numerical examples, one time-invariant and the
other time-varying, are presented to show the applications of the formulation. Numerical results show
that the approximate solutions converge as the number of approximating polynomials increases, and that as
α approaches 1, the numerical solutions for both the state and the control variables approach the
analytical solutions for α = 1. It is hoped that the simplicity of the formulation and the numerical
scheme presented here will initiate new research in the areas of fractional variational calculus and
fractional optimal control.
Acknowledgements
The author would like to thank the reviewers for their suggestions to improve the quality of the paper.
References
1. Hestenes, M. R., Calculus of Variations and Optimal Control Theory, Wiley, New York, 1966.
2. Bryson, Jr. A. E. and Ho, Y. C., Applied Optimal Control: Optimization, Estimation, and Control, Blaisdell, Waltham,
Massachusetts, 1975.
3. Sage, A. P. and White, III, C. C., Optimum Systems Control, Prentice-Hall, Englewood Cliffs, New Jersey, 1977.
4. Bagley, R. L. and Calico, R. A., ‘Fractional order state equations for the control of viscoelastically damped structures’,
Journal of Guidance, Control, and Dynamics 14, 1991, 304–311.
5. Carpinteri, A. and Mainardi, F., Fractals and Fractional Calculus in Continuum Mechanics, Springer-Verlag, Vienna, 1997.
6. Podlubny, I., Fractional Differential Equations, Academic Press, New York, 1999.
7. Hilfer, R., Applications of Fractional Calculus in Physics, World Scientific, River Edge, New Jersey, 2000.
8. Machado, J. A. T. (guest editor), ‘Special issue on fractional calculus and applications’, Nonlinear Dynamics 29, 2002,
1–386.
9. Miller, K. S. and Ross, B., An Introduction to the Fractional Calculus and Fractional Differential Equations, Wiley, New
York, 1993.
10. Samko, S. G., Kilbas, A. A., and Marichev, O. I., Fractional Integrals and Derivatives – Theory and Applications, Gordon
and Breach, Longhorne, Pennsylvania, 1993.
11. Oldham, K. B. and Spanier, J., The Fractional Calculus, Academic Press, New York, 1974.
12. Gorenflo, R. and Mainardi, F., ‘Fractional calculus: Integral and differential equations of fractional order’, in Fractals and
Fractional Calculus in Continuum Mechanics, A. Carpinteri and F. Mainardi (eds), Springer-Verlag, Vienna, 1997, pp. 291–
348.
13. Mainardi, F., ‘Fractional calculus: Some basic problems in continuum and statistical mechanics’, in Fractals and Fractional
Calculus in Continuum Mechanics, A. Carpinteri and F. Mainardi (eds), Springer-Verlag, Vienna, 1997, pp. 291–348.
14. Rossikhin, Y. A. and Shitikova, M. V., ‘Applications of fractional calculus to dynamic problems of linear and nonlinear
hereditary mechanics of solids’, Applied Mechanics Reviews 50, 1997, 15–67.
15. Manabe, S., ‘Early development of fractional order control’, DETC2003/VIB-48370, in Proceedings of DETC’03, ASME
2003 Design Engineering Technical Conference, Chicago, Illinois, September 2–6, 2003.
16. Bode, H. W., Network Analysis and Feedback Amplifier Design, Van Nostrand, New York, 1945.
17. Manabe, S., ‘The non-integer integral and its application to control’, Japanese Institute of Electrical Engineers 80, 1960,
589–597.
18. Skaar, S. B., Michel, A. N., and Miller, R. K., ‘Stability of viscoelastic control systems’, IEEE Transactions on Automatic
Control 33, 1988, 348–357.
19. Axtell, M. and Bise, M. E., ‘Fractional calculus applications in control systems’, IEEE Proceedings of the National Aerospace
and Electronics Conference, Dayton, OH, USA, May 21–25, 1990, pp. 563–566.
20. Makroglou, A., Miller, R. K., and Skaar, S., ‘Computational results for a feedback control for a rotating viscoelastic beam’,
Journal of Guidance, Control, and Dynamics 17, 1994, 84–90.
21. Mbodje, B. and Montseny, G., ‘Boundary fractional derivative control of the wave equation’, IEEE Transactions on
Automatic Control 40, 1995, 378–382.
22. Machado, J. A. T., ‘Analysis and design of fractional-order digital control systems’, Systems Analysis Modelling Simulation
27, 1997, 107–122.
23. Machado, J. A. T., ‘Fractional-order derivative approximations in discrete-time control systems’, Systems Analysis
Modelling Simulation 34, 1999, 419–434.
24. Podlubny, I., Dorcak, L., and Kostial, I., ‘On fractional derivatives, fractional-order dynamic systems and PI^λD^µ-controllers’,
in Proceedings of the 1997 36th IEEE Conference on Decision and Control, Part 5, San Diego, California, December 10–12,
1997, pp. 4985–4990.
25. Oustaloup, A., Levron, F., Mathieu, B., and Nanot, F. M., ‘Frequency-band complex noninteger differentiator:
characterization and synthesis’, IEEE Transactions on Circuits and Systems – Fundamental Theory and Applications 40,
2000, 25–39.
26. Sabatier, J., Oustaloup, A., Iturricha, A. G., and Lanusse, P., ‘CRONE control: principles and extension to time-variant plants
with asymptotically constant coefficients’, Nonlinear Dynamics 29, 2002, 363–385.
27. Hotzel, R., ‘Some stability conditions for fractional delay systems’, Journal of Mathematical Systems, Estimation, and
Control 8, 1998, 499–502.
28. Hartley, T. and Lorenzo, C. F., ‘Dynamics and control of initialized fractional-order systems’, Nonlinear Dynamics 29,
2002, 201–233.
29. Riewe, F., ‘Nonconservative Lagrangian and Hamiltonian mechanics’, Physical Review E 53, 1996, 1890–1899.
30. Riewe, F., ‘Mechanics with fractional derivatives’, Physical Review E 55, 1997, 3582–3592.
31. Agrawal, O. P., ‘Formulation of Euler–Lagrange equations for fractional variational problems’, Journal of Mathematical Analysis and
Applications 272, 2002, 368–379.
32. Agrawal, O. P., ‘General formulation for the numerical solution of optimal control problems’, International Journal of
Control 50, 1989, 627–638.
33. Lorenzo, C. F. and Hartley, T. T., ‘Initialized fractional calculus’, International Journal of Applied Mathematics 3, 2000,
249–265.
34. Diethelm, K., Ford, N. J., and Freed, A. D., ‘A predictor-corrector approach for the numerical solution of fractional differential
equations’, Nonlinear Dynamics 29, 2002, 3–22.
35. Butzer, P. L. and Westphal, U., ‘An introduction to fractional calculus’, in Applications of Fractional Calculus in Physics, R.
Hilfer (ed), World Scientific, New Jersey, 2000, pp. 1–85.
36. Agrawal, O. P. and Saigal, S., ‘A novel, computationally efficient approach for Hamilton's law of varying action’, International
Journal of Mechanical Sciences 29, 1987, 285–292.