Nonlinear Analysis, Theory, Methods & Applications, Vol. 7, No. 11, pp. 1163-1173, 1983.
Printed in Great Britain.
0362-546X/83 $3.00 + .00
© 1983 Pergamon Press Ltd.

STABILIZATION WITH RELAXED CONTROLS*

ZVI ARTSTEIN†

Department of Theoretical Mathematics, The Weizmann Institute of Science, Rehovot 76100, Israel

(Received in revised form 1 December 1982)

Key words and phrases: Stabilization, relaxed controls, Lyapunov functions.

* The research was supported by the National Science Foundation under Grant MCS8102079.
† Part of this work was done at the Department of Mathematics, Northeastern University, Boston, MA 02115, U.S.A.
1. INTRODUCTION
NONLINEAR systems of the form

    ẋ = f(x, u)    (1.1)
cannot in general be stabilized using a continuous closed loop control u(x), even if each state
separately can be driven asymptotically to the origin. (An example is analyzed in Section 2.)
In this paper we examine the possibility of stabilizing such systems with a continuous closed
loop relaxed control. We find, indeed, that the family of systems stabilizable with relaxed
controls is larger than the family of those stabilizable with ordinary controls. An even larger
class is obtained if the continuity of the closed loop at the origin is not required. The latter
class includes all one dimensional systems for which states can be driven asymptotically to the
origin. This result does not hold in two dimensional systems and we provide a counter-example.
It should be pointed out that relaxed control-type stabilization is used both in theory and in
practice; the method is known as dither. We shall comment on the similarities.
Lyapunov functions for the system (1) help us in the construction of the continuous closed
loop stabilizers. In fact, we find that the existence of a smooth Lyapunov function is equivalent
to the existence of a stabilizing closed loop which is continuous except possibly at the origin;
an additional condition on the Lyapunov function implies the continuity at the origin as well.
We present these results in Section 4, after a brief introduction of closed loop relaxed controls,
notations and terminology in Section 3. Prior to that, in Section 2, we discuss an example
illustrating the power of relaxed controls. In the particular case of systems linear in the
controls, relaxed controls can be replaced by ordinary controls; this is discussed in Section 5.
The role of Lyapunov functions in the stability and stabilization theories is of course well
known. Examples of systems with Lyapunov functions are available in the literature. We
display some in Section 6, along with general comments on the construction, applications and
counterexamples,
including one which cannot be continuously stabilized, yet possesses a
nonsmooth Lyapunov function.
Closed loop stabilization with ordinary controls is analyzed extensively in the literature, see Sontag [8], Sussmann [11] and references therein. Lyapunov function techniques in stabilization and regulation can be found in Aizerman and Gantmacher [1] and Barbashin [2]. The
theory of relaxed control, primarily in connection with optimization, is developed in Warga
[12]. Dither techniques are analyzed in Zames and Shneydor [13].
2. AN EXAMPLE
This section is aimed primarily at those unfamiliar with relaxed controls and their role. We
construct a one dimensional control system which cannot be stabilized with a continuous
closed loop of ordinary controls, yet a relaxed control stabilizer does exist. Systems which
cannot be continuously stabilized with ordinary controls are provided by Sontag and Sussmann [10], where such one dimensional systems are characterized.
Example 2.1. Let x and u be scalars and

    ẋ = g(u)    (2.1)

where g(u) = 1 − u if u ≥ 1, g(u) = −1 − u if u ≤ −1 and g(u) = 0 otherwise. Each state can be driven asymptotically to 0, even by using a closed loop, e.g. u(x) = x + 1 if x > 0 and u(x) = x − 1 if x < 0. The discontinuity at 0 cannot be overcome, since u(x) must be greater than 1 for x > 0 and less than −1 for x < 0.
We allow now the use of the following, seemingly peculiar, behavior of the control. The control is allowed to oscillate, or chatter, very rapidly between the values, say, 2 and −2. Furthermore, the proportion of time spent at each of these two values might vary and depend, in a closed loop form, on the state x. (Such a control law seems terribly discontinuous, but it is not, in a sense which we explain later.) If in a certain instance the control assumes the value 2 with the portion p (0 ≤ p ≤ 1) of its time, and the portion 1 − p is spent at the value −2, the right hand side of (2.1) in that instance becomes

    pg(2) + (1 − p)g(−2) = 1 − 2p.
We let p depend on the state x according to the following rule:

    p(x) = 1/2 + x    if |x| ≤ 1/2
    p(x) = 1          if x ≥ 1/2
    p(x) = 0          if x ≤ −1/2.

The differential equation generated by this closed loop is

    ẋ = −2x        if |x| ≤ 1/2
    ẋ = −x/|x|     if |x| ≥ 1/2
and it is clearly globally asymptotically stable.
Notice that only two values of the controls participated in the process. In particular, no use
was made of the fact that g(u₀) = 0 for a certain ordinary control u₀.
We see that the stabilizing closed loop suggested above is continuous, even Lipschitz
continuous, in the parameter p(x). The point is that in many practical cases it is not difficult
to generate a highly oscillatory control signal; then the continuity of the parameter p(x)
reflects actual continuity of the closed loop. As was mentioned before, such techniques, known
as dither, are used in practice, see Zames and Shneydor [13], [14] and references therein.
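To make the chattering interpretation concrete, here is a small numerical sketch (an added illustration, not part of the original text; the step size, the dither period and the initial states are arbitrary choices). It integrates Example 2.1 twice: once with the averaged, relaxed right hand side p(x)g(2) + (1 − p(x))g(−2) = 1 − 2p(x), and once with an ordinary control that switches between 2 and −2, spending the fraction p(x) of each short period at the value 2.

    def g(u):
        # Example 2.1: g(u) = 1 - u for u >= 1, g(u) = -1 - u for u <= -1, 0 otherwise
        if u >= 1:
            return 1.0 - u
        if u <= -1:
            return -1.0 - u
        return 0.0

    def p(x):
        # the closed loop weight put on the control value 2
        if x >= 0.5:
            return 1.0
        if x <= -0.5:
            return 0.0
        return 0.5 + x

    def simulate_relaxed(x0, dt=1e-3, T=10.0):
        # Euler integration of x' = p(x) g(2) + (1 - p(x)) g(-2) = 1 - 2 p(x)
        x = x0
        for _ in range(int(T / dt)):
            x += dt * (p(x) * g(2.0) + (1.0 - p(x)) * g(-2.0))
        return x

    def simulate_dither(x0, dt=1e-3, period=1e-2, T=10.0):
        # an ordinary control chattering between 2 and -2 with duty cycle p(x)
        x, t = x0, 0.0
        for _ in range(int(T / dt)):
            u = 2.0 if (t % period) / period < p(x) else -2.0
            x += dt * g(u)
            t += dt
        return x

    for x0 in (-3.0, 0.3, 2.0):
        print(x0, simulate_relaxed(x0), simulate_dither(x0))
    # both runs approach 0 and stay close to one another, which is the sense in
    # which the chattering control law is "continuous" in the parameter p(x)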
3. CLOSED LOOPS WITH RELAXED CONTROLS
Relaxed controls were developed primarily within the framework of optimal control theory,
see Warga [12]. Here we use them in stabilization. In this section we briefly review the
concepts and display the notations and terminology.
The state variable x in (1) belongs to the n-dimensional euclidean space Rⁿ. Its norm is denoted |x| and x · y denotes the scalar product of x and y. A dot above a variable denotes differentiation with respect to time. The gradient of a function V is denoted grad V. The control variable u in (1) belongs to a metric space U.
We assume throughout that f(x, u): Rⁿ × U → Rⁿ is a continuous function.
A relaxed control is a probability measure, say ν, on the space U. The family of relaxed controls is denoted by U_R. Two topologies will be considered on U_R. The first is associated with weak convergence of measures, i.e. ν_k → ν₀ if ∫h dν_k → ∫h dν₀ for any continuous and bounded h: U → R. The second is the norm topology, generated by the variational norm ||ν|| = sup{Σ_i |ν(E_i)| : the E_i form a finite partition of U} (the sets E_i ⊂ U are always assumed Borel). For details see Billingsley [4] or Warga [12].
If at a certain state x a relaxed control ν is used, the right hand side of (1) assumes the value

    f(x, ν) = ∫_U f(x, u) dν(u).    (3.1)

The ordinary control u can then be identified as the relaxed control having full measure assigned to the singleton {u}. We do not distinguish between the two in the sequel.
Unfortunately the mapping f(x, ν) might not be defined for every pair (x, ν), and when defined it might not be continuous in either of the variables, even with respect to the norm topology. Joint continuity in (x, ν) is actually the desired property, and additional conditions (e.g. compactness of U, or local compactness of U plus boundedness of f on sets B × U with B bounded) would result in a well defined (3.1) and joint continuity of f with respect to weak convergence on U_R.
We are interested in closed loop schemes of the form

    u(x): Rⁿ → U_R    (3.2)

and the resulting differential equation

    ẋ = f(x, u(x)).    (3.3)
As was indicated, even if u(x) is smooth the equation (3.3) might have a discontinuous right hand side, unless further conditions are imposed. In this paper, rather than imposing one of the aforementioned conditions, we will consider only those closed loops u(x) which are continuous (with respect to one of the topologies on U_R) and which produce a continuous right hand side f(x, u(x)). However, in general, the continuity of both u(x) and f(x, u(x)) will not be required at x = 0, for a reason which we now explain.
An ordinary differential equation ẋ = h(x) is asymptotically stable if there is a neighborhood W of 0, a continuous function b(r): [0, ∞) → [0, ∞) with b(0) = 0 and a continuous nondecreasing function T(ε): (0, ∞) → (0, ∞) such that whenever x(t) is a solution and x(0) ∈ W then |x(t)| ≤ b(|x(0)|) for t ≥ 0 and |x(T(ε))| ≤ ε. The equation is globally asymptotically stable if it is asymptotically stable and the neighborhood W can be chosen to be any bounded set in Rⁿ.
If ẋ = h(x) is asymptotically stable and h(x) is continuous except possibly at x = 0, yet h(0) = 0, then solutions satisfying x(0) ∈ W are defined for all t ≥ 0 and are continuously differentiable except possibly at the first time t for which x(t) = 0. Namely, in case of asymptotic stability, discontinuity of h(x) at x = 0 (with h(0) = 0) produces no problem in the definition, existence and continuation of solutions.
4. STABILIZATION AND LYAPUNOV FUNCTIONS
In what follows and throughout, we say that (1) is stabilizable by a closed loop relaxed control if a mapping u(x) from a neighborhood of 0 into U_R exists, continuous with respect to the weak convergence except possibly at x = 0, such that f(x, u(x)) is continuous except possibly at x = 0 and such that (3.3) is asymptotically stable. Global stabilization is defined similarly. We assume throughout that f(0, ν₀) = 0 for a certain relaxed control ν₀.
THEOREM 4.1. The system (1) is stabilizable by a closed loop relaxed control if and only if there is a C¹ function V: W → R, defined on a neighborhood W of 0, such that V(0) = 0, V(x) > 0 if x ≠ 0 and for x ≠ 0

    inf_{u ∈ U} grad V(x) · f(x, u) < 0.    (4.1)

The system is globally stabilizable by a closed loop relaxed control if and only if W in the previous statement can be chosen Rⁿ with V(x) → ∞ as |x| → ∞.
Furthermore, in either case, V can be chosen C∞ and the closed loop u(x) can be chosen piecewise linear on bounded sets bounded away from 0.
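As an indication of what condition (4.1) asks for, the following sketch (added for illustration; the grids of states and controls are arbitrary choices) approximates inf_u grad V(x) · f(x, u) by a minimum over a finite grid of controls, for the system of Example 2.1 with the candidate Lyapunov function V(x) = x².

    import numpy as np

    def f(x, u):
        # the scalar system of Example 2.1: f(x, u) = g(u)
        if u >= 1:
            return 1.0 - u
        if u <= -1:
            return -1.0 - u
        return 0.0

    controls = np.linspace(-3.0, 3.0, 121)
    for x in (-1.0, -0.1, 0.05, 0.5, 2.0):
        # grad V(x) = 2x for V(x) = x**2; (4.1) asks that the infimum be negative
        best = min(2.0 * x * f(x, u) for u in controls)
        print(f"x = {x:5.2f}   min over the control grid = {best:.4f}")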
Proof. Suppose first that (3.3) is asymptotically stable or globally asymptotically stable, with f(x, u(x)) continuous except at x = 0. If global asymptotic stability holds then f(x, u(x)) ≠ 0 for all x ≠ 0; if asymptotic stability holds then f(x, u(x)) ≠ 0 for x ≠ 0 near 0; otherwise there is a rest point of (3.3) within the basin of attraction of 0. Let α(x) be a scalar function defined for x ≠ 0, positive and continuous, and such that α(x)f(x, u(x)) → 0 as x → 0; e.g. α(x) = |x| |f(x, u(x))|⁻¹. The differential equation

    ẋ = α(x)f(x, u(x))    (4.2)

has now a continuous right hand side. Since α(x) > 0 it follows that (4.2) and (3.3) share the same phase portrait; in particular (4.2) is asymptotically stable or globally asymptotically stable in accordance with (3.3). The converse Lyapunov function as constructed in Kurzweil [7] provides a C∞ function V: W → R (with W = Rⁿ in case of global asymptotic stability) with V(0) = 0, V(x) > 0 if x ≠ 0 (V(x) → ∞ as |x| → ∞ in case of global asymptotic stability) and such that

    grad V(x) · α(x)f(x, u(x)) < 0

for x ≠ 0. (Unfortunately, from the introduction to Kurzweil [7] one can get the impression that the solutions of the equation are assumed to be uniquely determined by initial conditions. This is not correct and in fact, as indicated in the paper itself, one of the goals of Kurzweil's beautiful construction is to avoid such an assumption.) Since α(x) is positive it follows from the displayed inequality that already

    grad V(x) · f(x, u(x)) < 0.
Since f(x, u(x)) is defined as an integral, see (3.1), it follows that at least for one u ∈ U

    grad V(x) · f(x, u) < 0

and (4.1) is checked. This completes one direction of the statement.
Suppose now that a C¹ function V: W → R satisfying the conditions of the statement is given. For x ≠ 0 in W we denote by F(x) the collection of all relaxed controls ν for which f(x, ν) is defined and grad V(x) · f(x, ν) < 0. Then F(x) is convex and contains at least one ordinary control, which we denote by u(x). Since V is C¹ and f(x, u) is continuous it follows that for x fixed u(x) ∈ F(y) if y is close enough to x, say if y is in the open ball B(x) around x. Since W\{0} is locally compact it follows that a sequence B₁, B₂, . . . of such balls exists which covers the set W\{0} such that each point x belongs to only a finite number of them. An elementary consideration would then show that a continuous positive function d(x) exists on W\{0} such that the ball {y : |y − x| ≤ d(x)} is included in one of the balls B_i. Let z₁, z₂, . . . be a denumerable number of points in W\{0} which generate a triangulation of W\{0} (see e.g. Hocking and Young [5]) with the property that each simplex in the triangulation which contains z_j has a diameter less than d(z_j)/2. It is not hard to construct such a triangulation. Recall that each B_i is the ball B(x_i), such that u_i = u(x_i) belongs to F(y) if y ∈ B_i. We assign now to each vertex z_j one of the ordinary controls u_i for which a ball of radius d(z_j) around z_j is included in B_i. The definition of d(x) implies that such a u_i exists. Suppose now that a point x belongs to a simplex in the triangulation generated by z_{j_1}, . . . , z_{j_k}. The function u(x) is defined to be the measure with weight p_{j_l} assigned to u_{j_l}, where p_{j_1}, . . . , p_{j_k} are the barycentric coordinates of x in the simplex. Then u(x) is piecewise linear on compact sets bounded away from zero. The continuity of f(x, u) implies that f(x, u(x)) is continuous except possibly at x = 0. Also, since |x − z_{j_l}| ≤ d(z_{j_l})/2 it follows that u_{j_l} ∈ F(x), and the convexity of the latter implies that u(x) ∈ F(x). In particular grad V(x) · f(x, u(x)) < 0, i.e. V is a Lyapunov function for (3.3). A standard argument shows that if f(0, ν₀) = 0 for a certain ν₀ then (3.3) is asymptotically stable, with region of attraction that contains at least the set {x : V(x) < a₀} with a₀ = min{V(y) : y in the boundary of W}; and (3.3) is globally asymptotically stable if W = Rⁿ. This completes the proof.
Continuity of the closed loop at the origin might not hold even for seemingly simple systems.
Here is an example.
Example 4.2. Let x be scalar and U = {0, 1}. Let f(x, 0) = −x if x ≥ 0 and f(x, 0) = 3x if x ≤ 0. Let f(x, 1) = −f(−x, 0). The system is stabilizable with a closed loop relaxed control, e.g. u(x) = 0 if x ≥ 0 and u(x) = 1 if x < 0. The discontinuity at 0 cannot be overcome, since unless the measure given by u(x) to the control 0 is at least 3/4 for x > 0 and at most 1/4 for x < 0, the system would not be stabilized. Extending f(x, u) to u ∈ [0, 1] by f(x, α) = αf(x, 1) + (1 − α)f(x, 0) yields the same consequences.
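A short computation (added here for illustration; the sample states and weights are arbitrary) makes the jump explicit: writing q for the weight that the relaxed control puts on the ordinary control 0, the relaxed right hand side is q f(x, 0) + (1 − q) f(x, 1), and it points toward the origin only when q > 3/4 for x > 0 and q < 1/4 for x < 0.

    def f(x, u):
        # Example 4.2 with U = {0, 1}
        if u == 0:
            return -x if x >= 0 else 3.0 * x
        return -f(-x, 0)                     # f(x, 1) = -f(-x, 0)

    for x in (0.5, -0.5):
        for q in (0.1, 0.25, 0.5, 0.75, 0.9):
            rhs = q * f(x, 0) + (1.0 - q) * f(x, 1)   # relaxed right hand side
            toward_origin = rhs < 0 if x > 0 else rhs > 0
            print(f"x={x:+.1f}  weight on 0: q={q:.2f}  rhs={rhs:+.2f}  toward origin: {toward_origin}")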
We shall provide conditions for the existence of a stabilizing closed loop which is continuous at x = 0 with respect to a given metric on U_R. We say that a metric or a semi-metric ρ(·, ·) on U_R is convex if the sets {ν : ρ(ν, ν₀) < ε}, for ν₀ fixed, are convex. An example is the norm metric, which, as it is easy to see, is finer than any convex semi-metric. Another example is the Prohorov metric ρ(ν₀, ν₁), defined as the minimal ε ≥ 0 for which ν₀(A) ≤ ν₁(A^ε) + ε and ν₁(A) ≤ ν₀(A^ε) + ε for all measurable A ⊂ U, where A^ε denotes the ε-neighborhood of A in U. The Prohorov metric is equivalent to the metric of weak convergence whenever the
latter is metrizable, e.g. if U is separable. For details see Billingsley [4, appendix III]. Other
metrics or semi-metrics might have relevance to the system.
THEOREM 4.3. Suppose that U is separable and complete and let ρ(·, ·) be a convex semi-metric on U_R. There exists a stabilizing (or globally stabilizing) closed loop u(x) which is ρ-continuous at x = 0 with f(x, u(x)) continuous if and only if there is a C¹ function V satisfying the conditions of theorem 4.1 and in addition the following property holds: a ν₀ ∈ U_R exists with f(0, ν₀) = 0 and such that for every ε > 0 there is a δ > 0 such that |x| < δ implies the existence of ν₁ with ρ(ν₀, ν₁) < ε and |f(x, ν₁)| < ε and grad V(x) · f(x, ν₁) < 0.
Furthermore V can be chosen C∞ and u(x) can be chosen piecewise linear at x ≠ 0.
Proof. The construction of the Lyapunov function in the proof of theorem 4.1 covers the present case, with the additional conditions on V being implied by the ρ-continuity of u(x) and the continuity of f(x, u(x)).
Suppose now that a function V is provided satisfying the conditions of the theorem. As in the proof of theorem 4.1 let F(x) denote the set of relaxed controls ν for which grad V(x) · f(x, ν) < 0. Then F(x) is convex, nonempty and contains an element ν₁(x) such that if x → 0 then ρ(ν₁(x), ν₀) → 0 and f(x, ν₁(x)) → 0. This follows from the additional condition on V. Furthermore, ν₁(x) can be chosen to have a compact support. This follows from the separability and completeness of U, which makes every measure tight (see Billingsley [4, appendix III]), so that even if ν₁(x) does not have, originally, a compact support, its restriction to a large enough compact set will satisfy the requirements. From now on we proceed exactly as in the proof of theorem 4.1, with ν₁(x) replacing the ordinary controls u(x). The compact support of each ν₁(x) implies indeed that ν₁(x) ∈ F(y) if y is close enough to x, so the balls B(x) are well defined. The piecewise linear closed loop u(x) then stabilizes (3.3), and the choice of ν₁(x) together with the convexity of the semi-metric ρ(·, ·) imply that ρ(u(x), ν₀) → 0 and f(x, u(x)) → 0 as x → 0. This completes the proof.
Remark 4.4. If the existence of ν₁ with ρ(ν₁, ν₀) < ε in the preceding result is dropped, yet |f(x, ν₁)| < ε is maintained, then the closed loop u(x) might be discontinuous but the resulting differential equation f(x, u(x)) is continuous. Similarly, if ρ(ν₁, ν₀) < ε can be obtained but without |f(x, ν₁)| < ε, then the closed loop can be chosen continuous while f(x, u(x)) might not be.
Remark 4.5. The stabilizing rule that we have constructed has the form u(x) = Σ p_i(x)u_i where the p_i(x) form a partition of unity with respect to the balls B_i. Note that any partition of unity would suit, in particular one which replaces the piecewise linear p_i(x) by C∞ functions, which might be a desired property. (I owe this comment to W. Fleming.)
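To indicate what such a rule looks like in the simplest case, here is a one dimensional sketch (an added illustration; the interval [0.2, 2], the grid and the particular choices u_i = x_i + 1 are arbitrary) for the system of Example 2.1 with V(x) = x². The weights p_i(x) are hat functions on the grid, so on the covered interval u(x) is supported on at most two points and f(x, u(x)) = Σ p_i(x) g(u_i) is negative.

    import numpy as np

    def g(u):
        # the system of Example 2.1 once more
        if u >= 1:
            return 1.0 - u
        if u <= -1:
            return -1.0 - u
        return 0.0

    # grid points x_i on an interval bounded away from 0, each with an ordinary
    # control u_i = x_i + 1 satisfying grad V(x_i) * g(u_i) = 2 x_i * (-x_i) < 0
    centers = np.linspace(0.2, 2.0, 10)
    controls = centers + 1.0
    h = centers[1] - centers[0]

    def weights(x):
        # piecewise linear "hat" functions: a partition of unity on [0.2, 2], so
        # u(x) = sum_i p_i(x) * (point mass at u_i) is a probability measure
        p = np.maximum(0.0, 1.0 - np.abs(x - centers) / h)
        return p / p.sum()

    for x in (0.25, 0.7, 1.3, 1.9):
        p = weights(x)
        rhs = float(np.dot(p, [g(u) for u in controls]))   # f(x, u(x))
        print(f"x={x:.2f}  points in the support: {np.count_nonzero(p)}  f(x, u(x)) = {rhs:+.3f}")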
5. SYSTEMS LINEAR IN THE CONTROL
A major role in the previous results was played by the linearity of f(x, ν) in the relaxed control ν. If the system is linear in the control to begin with, then there is no need to refer to relaxed controls, as the results of this section show. (Notice, however, that these results are not included in the previous case, since the closed loops u(x) of relaxed controls were norm continuous, while an ordinary control closed loop is, unless constant, never continuous with respect to the variational norm.)
In the following, U is a convex set in a Banach space (e.g. in R^m) and the system has the form

    ẋ = f₁(x) + f₂(x)u    (5.1)

with f₁: Rⁿ → Rⁿ continuous and f₂: Rⁿ → B(U), where B(U) denotes the linear maps from U to Rⁿ (if U ⊂ R^m then f₂(x) is an n × m matrix). We only require that (x, u) → f₂(x)u be continuous. We assume that f₁(0) + f₂(0)u₀ = 0 for a certain u₀ ∈ U. All bilinear systems are, clearly, included.
THEOREM 5.1. There exists a closed loop u(x): Rⁿ → U, continuous except possibly at x = 0, and which makes (5.1) asymptotically stable, if and only if a neighborhood W of 0 exists and a C¹ function V: W → R with V(0) = 0, V(x) > 0 if x ≠ 0 and grad V(x) · f(x, u) < 0 for at least one u ∈ U if x ≠ 0. Such a global stabilizer exists if and only if W in the previous statement can be chosen Rⁿ and V(x) → ∞ as |x| → ∞. In either of the cases V can be chosen C∞ and u(x) can be chosen piecewise linear on compact sets bounded away from 0.
Proof. We can either follow the proof of theorem 4.1 when the linear set U replaces the (always) linear set U_R, or note that if ν(x) is the closed loop relaxed control guaranteed by theorem 4.1 then u(x) = ∫ u dν(x)(u) is the desired ordinary control loop. This takes care of the "if" part. The "only if" direction is actually included in theorem 4.1, since u(x) continuous as an ordinary control implies u(x) continuous with respect to the weak convergence as a relaxed control; also, then, f₂(x)u(x) is continuous when u(x) is.
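The averaging device used in the proof can be shown on assumed data (an added sketch, not taken from the paper; the planar drift f₁ and the input direction f₂ are arbitrary choices): for a system linear in the control, replacing a finitely supported relaxed control by its mean u(x) = ∫ u dν(x)(u) does not change the right hand side.

    import numpy as np

    def f1(x):
        # an assumed drift term for x' = f1(x) + f2(x) u, with x in R^2, u scalar
        return np.array([x[1], -x[0]])

    def f2(x):
        # an assumed (constant) input direction
        return np.array([0.0, 1.0])

    x = np.array([0.3, -0.7])
    support, weights = [1.0, -1.0], [0.3, 0.7]      # a two-point relaxed control

    # relaxed right hand side: sum_i w_i (f1(x) + f2(x) u_i)
    rhs_relaxed = sum(w * (f1(x) + f2(x) * u) for u, w in zip(support, weights))

    # the averaged ordinary control gives the same vector field, by linearity in u
    u_mean = sum(w * u for u, w in zip(support, weights))
    rhs_ordinary = f1(x) + f2(x) * u_mean

    print(rhs_relaxed, rhs_ordinary)                 # identical vectors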
THEOREM 5.2. The closed loop u(x) in theorem 5.1 can be made continuous at x = 0 if and only if the Lyapunov function V can be chosen with the following additional property: a u₀ ∈ U exists with f(0, u₀) = 0 and with the property that for every ε > 0 a δ > 0 exists such that whenever |x| < δ the inequality grad V(x) · f(x, u₁) < 0 holds for a certain u₁ with |u₁ − u₀| < ε. (Here |u₁ − u₀| stands for the distance in U.)
Proof. Again we can either imitate the proof of theorem 4.3 or note that if ν(x) is the relaxed closed loop guaranteed by it, with ν₀ = u₀, then u(x) = ∫ u dν(x)(u) is the desired ordinary closed loop.
Remark. The referee brought to my attention that results similar to those of this section are provided in Jacobson [6, theorems 2.5.1 and 2.5.2]. Jacobson works with equations of the form ẋ = B(x)u, with u ∈ R^m unrestricted. The sufficient condition is then the existence of a C¹ function V with V(0) = 0, V(x) > 0 if x ≠ 0 and such that grad V(x) · B(x) does not vanish for x ≠ 0. In this setting a stabilizing control is simply u(x) = −grad V(x) · B(x). Such a control might not be allowed in our framework. (Also, Jacobson presents the necessary condition only for smooth equations, but this can be easily corrected by appealing to Kurzweil's results on the existence of Lyapunov functions rather than to the results referred to by Jacobson.)
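For concreteness, here is a minimal sketch of the feedback suggested by Jacobson's condition (the matrix B(x) and the quadratic V are assumptions chosen for illustration, not data from [6]). With ẋ = B(x)u and V(x) = ½|x|², the choice u(x) = −B(x)ᵀ grad V(x) gives dV/dt = −|B(x)ᵀ grad V(x)|² along solutions, which is negative wherever grad V(x) · B(x) does not vanish.

    import numpy as np

    def B(x):
        # an assumed input matrix for x' = B(x) u, with x and u in R^2
        return np.array([[1.0, 0.0],
                         [x[0], 1.0]])

    def u_jacobson(x):
        # the feedback suggested by the remark: u(x) = -(grad V(x) . B(x)),
        # written as a column vector; grad V(x) = x for V(x) = 0.5 |x|^2
        return -B(x).T @ x

    x = np.array([2.0, -1.5])
    dt = 1e-2
    for _ in range(2000):
        x = x + dt * B(x) @ u_jacobson(x)            # Euler step of the closed loop
    print(x)                                         # close to the origin
    # along solutions dV/dt = -|B(x)^T x|^2 <= 0, strictly negative away from 0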
6. COMMENTS, EXAMPLES AND COUNTEREXAMPLES

Remark 6.1. The closed loop controls u(x) in the constructions of the theorems are piecewise
linear (in U_R or U), therefore locally Lipschitz with respect to the norm (on U_R or, respectively, U). This does not mean that the resulting differential equation (3.3) has a Lipschitz right hand side. As a counterexample consider an equation ẋ = f(x, u₀) with a singleton U = {u₀} and a non-Lipschitz function f. However, if f(x, u) is Lipschitz then the closed loops of our theorems provide a locally Lipschitz f(x, u(x)).
Remark 6.2. Notice that theorem 4.1 provides a closed loop u(x) such that every measure is concentrated on at most n + 1 points, n being the dimensionality of the state space. The same phenomenon occurs in applications of relaxed controls to optimization (Berkovitz [3] develops the entire theory of relaxed controls with only such controls). This is not the case in theorem 4.3, unless the particular control ν₀ is known to be supported at m points; then the approximations ν₁ can also be picked to have this property, and u(x) is supported at (n + 1)m points.
Counterexample 6.3. The system analyzed in Section 2 can be stabilized with a closed loop ordinary control u(x) which is discontinuous only at x = 0. Take u(x) = 2 if x > 0 and u(x) = −2 if x < 0. This is not always possible, even if such stabilizing closed loops with relaxed control exist. Here is the counterexample. Let x and u be scalars, let f(x, u) = −x(sin(x⁻¹))²u if u ≥ 0 and f(x, u) = −x(cos(x⁻¹))²|u| if u ≤ 0. With the aid of theorem 4.3 and the Lyapunov function V(x) = x² it can easily be seen that a continuous closed loop relaxed control which stabilizes the equation exists, with u(0) = 0. Furthermore (see remark 6.2) each relaxed control u(x) would be concentrated at two points. However, any closed loop of ordinary controls u(x) which is continuous at x ≠ 0 would produce rest points (i.e. points x with f(x, u(x)) = 0) arbitrarily close to 0, hence it cannot stabilize.
The constructive argument of the preceding remark can be carried over to all one dimensional
state systems, as follows. (By a piecewise continuous function we mean one for which the one
sided limits exist at points of discontinuity.)
PROPOSITION 6.4. Let x be scalar and suppose that for any state x₀ in a neighborhood W of 0 there is a piecewise continuous control function u(t) such that ẋ = f(x, u(t)) has a unique solution x(t) with x(0) = x₀ and x(t) → 0 as t → ∞. Then there is a stabilizing closed loop with relaxed controls, continuous except possibly at x = 0. Global stabilization occurs if W = R.
Proof. We claim that V(x) = x² is a Lyapunov function as required in theorem 4.1. Indeed, if for x₁ ∈ W with x₁ ≠ 0 the relation 2x₁f(x₁, u) ≥ 0 holds for all u, then x₁f(x₁, u(t)) ≥ 0 for every u(t) which drives some x₀ with |x₀| > |x₁| asymptotically to 0. But then ẋ = f(x, u(t)) has more than one solution, the bifurcation point being at x₁, a contradiction.
The uniqueness demand in the previous result is needed, since it is not hard to construct an example ẋ = f(x) which has rest points arbitrarily close to 0 yet there are solutions x(t) → 0 with x(0) = x₀ arbitrary. The regularity requirement on u(t) is also needed since, in general, the solutions of ẋ = f(x, u(t)) with u(t) measurable are differentiable and satisfy the equation only almost everywhere. It is therefore possible to construct pathological examples where 2x₁f(x₁, u) > 0 for a certain x₁ and all u, yet the solution x(t) (which necessarily does not satisfy the equation at x₁) tends to 0 as t → ∞. The consequence of proposition 6.4 does not hold when x is two dimensional, as shown in the following example.
Counterexample 6.5. Let x = (ξ, η) be in R². Consider in R² the family of circles determined by the equations ξ² + (η − a)² = a² for a ≠ 0 (i.e. with radius |a| and center at (0, a)). (See Fig. 1.) Let T(x) be a unit vector, tangent at x to the circle to which x belongs and such that T(x) is continuous on R² except at x = 0 (at x = (ξ, 0) the vector T(x) is parallel to the ξ-axis). See the arrows in Fig. 1. We define the system for u ∈ [−1, 1] by

    f(x, u) = exp(−|x|⁻²) T(x)u.    (6.1)
Fig. 1.
Then f is continuous, even smooth. Any given state can be driven asymptotically to 0, even by using a discontinuous closed loop control which stabilizes the system. This is obtained for instance by taking u(x) = −1 if ξ > 0 and u(x) = 1 if ξ ≤ 0. A continuous (even except at x = 0) stabilizing closed loop, relaxed or ordinary, does not exist. Indeed, since every circle is invariant under any equation of the form ẋ = f(x, u(x)), the stability of such an equation implies the existence of a rest point other than 0 on any of the circles, hence arbitrarily close to 0.
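The invariance of the circles can be checked numerically with an explicit choice of the tangent field (an added sketch; the formula T(x) = (ξ² − η², 2ξη)/|x|² is one concrete unit tangent field consistent with the description above, and the feedback and step size are arbitrary): along any closed loop the quantity a(x) = (ξ² + η²)/(2η), which labels the circle through x, stays constant up to the integration error.

    import numpy as np

    def T(x):
        # a unit vector tangent to the circle xi^2 + (eta - a)^2 = a^2 through x,
        # continuous away from the origin and parallel to the xi-axis when eta = 0
        xi, eta = x
        return np.array([xi * xi - eta * eta, 2.0 * xi * eta]) / (xi * xi + eta * eta)

    def f(x, u):
        # the field (6.1)
        return np.exp(-1.0 / float(np.dot(x, x))) * T(x) * u

    def circle_parameter(x):
        # a(x) = (xi^2 + eta^2) / (2 eta): constant exactly on each circle
        return float(np.dot(x, x)) / (2.0 * x[1])

    def u_feedback(x):
        # the discontinuous closed loop mentioned above; any closed loop would do
        return -1.0 if x[0] > 0 else 1.0

    x = np.array([0.6, 0.8])
    print("circle parameter, initially:", circle_parameter(x))
    for _ in range(20000):
        x = x + 1e-3 * f(x, u_feedback(x))           # Euler step of x' = f(x, u(x))
    print("circle parameter, after the run:", circle_parameter(x))
    # the parameter is unchanged up to the integration error: every circle is
    # invariant, which is what rules out a stabilizing closed loop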
Our characterization (theorem 4.1) implies that the system (6.1) does not possess a C¹ Lyapunov function satisfying (4.1). Interestingly enough, a Lipschitz-continuous Lyapunov function does exist, whose typical level surface is indicated by the dashed line in Fig. 1. (The condition grad V(x) · f(x, u) < 0 should be replaced by the obvious analog.) We see therefore that in control systems nonsmooth Lyapunov functions might not be smoothened, as they can be in the case of ordinary differential equations; see Kurzweil [7, theorem 3].
Note. E. Sontag has kindly shown me a draft of [9]. There the existence of a Lyapunov function for systems of the form (1) is established under the condition that each initial point can be driven asymptotically to the origin. Sontag does not look for smooth Lyapunov functions, and indeed our counterexample 6.5 shows that a smooth Lyapunov function might not exist. The existence of a Lipschitz Lyapunov function in counterexample 6.5 is, however, not a coincidence, but is guaranteed by Sontag's result.
Example 6.6. The system ẋ = xu² − u³ for x and u scalars can be stabilized by a continuous closed loop ordinary control. For instance, if u(x) = sgn(x)|x|^{1/2} the resulting equation is asymptotically stable. A stabilizing continuous closed loop which is Lipschitz continuous does not exist, as then xu²(x) will dominate u³(x) for u(x) small, and since u(0) = 0 is a necessary condition, the equation would not be stable. (See Sontag and Sussmann [10] for a discussion concerning Lipschitz closed loops.) Within the class of relaxed controls it is rather easy to find a Lipschitz closed loop stabilizer, along the idea of Section 2.
Example 6.7. We wish to illustrate the sufficiency of the Lyapunov technique. Consider the Lienard-type equation ẍ + uẋ − u³ = 0 where both the friction and the restoring forces are partially controlled, with the controls being lumped together. The associated system in R² is

    ẋ = y
    ẏ = −uy + u³.    (6.2)

The goal is to stabilize the system. Consider a typical Lyapunov function for Lienard-type equations, e.g.

    V(x, y) = ½x² + ½(x + y)².    (6.3)
Then, it is easy to check,

    grad V(x, y) · f(x, y, u) = x(2y − uy + u³) + y(y − uy + u³).    (6.4)
It is immediate therefore that if x and y have the same sign, i.e. xy ≥ 0, then (6.4) can be made negative with the proper choice of u (large |u| with u having the opposite sign of x or y). If x > 0 and y < 0 the control u for which 2y − uy + u³ = 0 will make (6.4) negative, and if x < 0 and y > 0 the solution of y − uy + u³ = 0 will make (6.4) negative. In view of theorem 4.1 the system (6.2) is stabilizable by a piecewise linear (except possibly at 0) closed loop relaxed control. In order to achieve continuity at the origin consider the relaxed control ν₀ equally concentrated on 1 and −1. Then f(0, 0, ν₀) = 0. If ν₀ is perturbed and the weights of 1 and −1 are p and (1 − p) respectively, then the expression in (6.4) becomes

    2p((x + y) − y(x + y)) − ((x + y) − y(2y + 3x)).    (6.5)
If x and y have the same sign, or if |x| ≤ ½|y|, then the leading term in (6.5) is 2p(x + y) − (x + y), and for x and y small it can be made negative by a choice of p close to 1/2. If xy < 0 and |x| ≥ ½|y| then a direct computation shows that (6.5) is negative for p = 1/2. We can conclude from theorem 4.3 that the stabilizing closed loop of relaxed controls can be made continuous at x = 0 with u(0) = ν₀, the continuity being with respect to the variational norm. Moreover, the controls will chatter only between the values 1 and −1. Simple considerations show that for x and y small there is an ordinary control u with |u| small which makes (6.4) negative. Therefore a stabilizing closed loop relaxed control exists with u(0) = 0 and which is continuous at 0 with respect to the weak convergence.
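The argument for continuity at the origin can be checked numerically (an added sketch; the grid of small states and the resolution in p are arbitrary choices): the script evaluates (6.4) under the two-point relaxed control with weights p and 1 − p on u = 1 and u = −1, i.e. expression (6.5), and records how far from 1/2 the weight p has to move in order to make it negative.

    import numpy as np

    def expr(x, y, p):
        # (6.4) averaged over the relaxed control with weight p on u = 1 and
        # weight 1 - p on u = -1; after simplification this is exactly (6.5)
        val = 0.0
        for u, w in ((1.0, p), (-1.0, 1.0 - p)):
            val += w * (x * (2 * y - u * y + u ** 3) + y * (y - u * y + u ** 3))
        return val

    ps = np.linspace(0.0, 1.0, 2001)
    worst = 0.0
    for x in np.linspace(-0.05, 0.05, 21):
        for y in np.linspace(-0.05, 0.05, 21):
            if abs(x) + abs(y) < 1e-12:
                continue
            good = [p for p in ps if expr(x, y, p) < 0]
            assert good, (x, y)               # some weight in [0, 1] always works
            worst = max(worst, min(abs(p - 0.5) for p in good))
    print("largest deviation of p from 1/2 needed on this grid:", worst)
    # the deviation shrinks as the states are taken smaller, in line with the
    # claim that near the origin p can be kept close to 1/2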
REFERENCES

1. AIZERMAN M. A. & GANTMACHER F. R., Absolute Stability of Regulator Systems, Holden-Day, San Francisco (1964). (English translation.)
2. BARBASHIN E. A., Introduction to the Theory of Stability, Wolters-Noordhoff, Groningen (1970). (English translation.)
3. BERKOVITZ L. D., Optimal Control Theory, Springer, New York (1974).
4. BILLINGSLEY P., Convergence of Probability Measures, Wiley, New York (1968).
5. HOCKING J. G. & YOUNG G. S., Topology, Addison-Wesley, Reading (1961).
6. JACOBSON D. H., Extensions of Linear-Quadratic Control, Optimization and Matrix Theory, Academic Press, New York (1977).
7. KURZWEIL J., On the inversion of Ljapunov's second theorem on stability of motion, Am. math. Soc. Transl., Ser. 2, 24, 19-77 (1956).
8. SONTAG E., Nonlinear regulation: the piecewise linear approach, IEEE Trans. Automatic Control AC-26, 346-358 (1981).
9. SONTAG E., A Lyapunov-like characterization of asymptotic controllability, SIAM J. Control Optim. 21, 462-471 (1983).
10. SONTAG E. & SUSSMANN H., Remarks on continuous feedback, Proc. 19th IEEE Conf. on Decision and Control, pp. 916-921 (1980).
11. SUSSMANN H., Subanalytic sets and feedback control, J. diff. Eqns 31, 31-52 (1979).
12. WARGA J., Optimal Control of Differential and Functional Equations, Academic Press, New York (1972).
13. ZAMES G. & SHNEYDOR N. A., Dither in nonlinear systems, IEEE Trans. Automatic Control AC-21, 660-667 (1976).
14. ZAMES G. & SHNEYDOR N. A., Structural stabilization and quenching by dither in nonlinear systems, IEEE Trans. Automatic Control AC-22, 352-361 (1977).