A Convex Programming Approach to the Multiobjective $H^2/H^\infty$ Problem

Seddik M. Djouadi (1), C.D. Charalambous (2), D.W. Repperger (3)

(1) Systems Eng. Dept., University of Arkansas, Little Rock, AR 72204
(2) SITE, University of Ottawa, Ontario K1N 6N5, Canada
(3) AFRL, Wright-Patterson Air Force Base, Dayton, Ohio 45433

msdjouadi@ualr.edu, chadcha@site.uottawa.ca, daniel.repperger@he.wpafb.af.mil
Abstract - In this paper, Banach space duality theory for the multiobjective $H^2/H^\infty$ problem, developed recently by the authors, is used to derive algorithms that solve this problem by approximately reducing the dual and predual representations to finite variable convex optimizations. The Ellipsoid algorithm is then applied to these problems to obtain polynomial-time, nonheuristic programs which find "nearly" optimal control laws. These algorithms have been implemented numerically to compute an example.

1 Introduction

In [1] the authors characterized the solution of the multiobjective $H^2/H^\infty$ problem using Banach space duality theory. Dual and predual spaces were identified, and equivalent maximizations formulated therein. In particular, existence of at least one optimal controller was deduced, and shown to satisfy a flatness property together with an extremal identity. Subsequently, similar duality results were also proposed in [2].

Predual and dual representations of the multiobjective $H^2/H^\infty$ problem comprise maximizations or suprema, in contrast to the infimum of the original form, which corresponds to the Pareto optimal solution for the $H^2/H^\infty$ problem [3, 4, 1]

\[
\mu_o = \inf_{Q \in H^\infty(\mathbb{C}^{p\times q})} \sum_{i=1}^{N} \|H_i - U_i Q V_i\|_2 + \sum_{i=N+1}^{L} \|H_i - U_i Q V_i\|_\infty \tag{1}
\]

where the matrix transfer functions $H_i$, $U_i$, and $V_i$ are stable and of appropriate dimensions, i.e., $H_i \in H^\infty(\mathbb{C}^{n_i\times m_i})$, $U_i \in H^\infty(\mathbb{C}^{n_i\times p})$, and finally $V_i \in H^\infty(\mathbb{C}^{q\times m_i})$. $Q$ is the Youla parameter, $Q \in H^\infty(\mathbb{C}^{p\times q})$. Each term in (1) represents a particular closed loop $n_i$-input/$m_i$-output transmission from particular exogenous disturbances to particular desired signal outputs [5, 4].

The results of [1] lead naturally to a dual pair of numerical solutions, which approach the optimum $\mu_o$ from opposite directions, and have the virtue of producing estimates of $\mu_o$ together with tolerances on these estimates. The first of these solutions, which will be referred to as the "primary", exploits the fact that $\sum_{i=1}^{N}\|H_i - U_iQV_i\|_2 + \sum_{i=N+1}^{L}\|H_i - U_iQV_i\|_\infty$ is a convex function in $Q$. The problem (1) is infinite dimensional in the sense that there is no finite limit to the number of parameters required to specify $Q$. However, (1) can be approximated by a finite variable convex optimization by restricting $Q$ to lie in the space $P_m$ consisting of analytic trigonometric polynomials of degree at most $m$ of the form $a_0 + a_1z + \cdots + a_mz^m$ with real coefficients, and then discretizing the unit circle sufficiently finely with respect to $m$. This yields a convex optimization problem in the variables $a_0, a_1, \ldots, a_m$. For any fixed $m$, these convex problems generate upper bounds for $\mu_o$ and suboptimal control laws, since $Q$ is confined to a proper finite dimensional subspace of $H^\infty$. Such problems are standard applications of convex programming techniques. The technique that will be used here is the Ellipsoid algorithm [6]. This algorithm has the advantages of simplicity, robustness and polynomial time execution, although it can be slower than interior point methods. The Ellipsoid algorithm is chosen primarily for its non-heuristic stopping time [7]: for a specified tolerance $\epsilon$, the algorithm terminates only when the estimate of the optimum is within $\epsilon$ of the true minimum.

The second or "dual" solution exploits the representation of (1) in the predual in terms of the following maximization

\[
\mu_o = \sup_{\Phi \in {}^\perp S,\ \|\Phi\| \le 1} \Phi\begin{pmatrix} H_1 \\ \vdots \\ H_L \end{pmatrix} \tag{2}
\]

where ${}^\perp S$ is a preorthogonal subspace in the predual, and $\Phi(\cdot)$ is a linear functional to be defined in the sequel. The dual solution as represented by (2) is therefore a convex maximization. The subspace ${}^\perp S$ can be restricted to a finite dimensional subspace consisting of trigonometric polynomials; after sufficiently fine discretization of the unit circle, the resulting finite variable constrained convex optimization yields lower bounds for the optimum $\mu_o$. It can be shown that as $m \to \infty$, the upper and lower bounds obtained by this procedure converge.
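The restriction-and-discretization idea above can be illustrated numerically. The sketch below (with toy frequency-response samples standing in for actual closed-loop data; all data here are assumptions of this illustration, not the paper's) restricts $Q$ to a degree-$m$ polynomial, samples the unit circle, and checks that the resulting finite-variable objective is convex along a segment:

```python
import numpy as np

# Illustrative sketch: restrict Q to a degree-m polynomial and discretize the
# unit circle, turning a model-matching objective ||H - U*Q||_2 into a
# finite-variable convex function of the coefficient vector x.
m, Ns = 4, 64
theta = 2 * np.pi * np.arange(1, Ns + 1) / Ns
z = np.exp(1j * theta)

# Toy stable "plant data" sampled on the circle (assumed for illustration).
H = 1.0 / (1.0 - 0.5 * z)
U = 1.0 - 0.3 * z

def Q_on_circle(x):
    # Q(z) = x[0] + x[1] z + ... + x[m] z^m evaluated at the grid points.
    return np.polyval(x[::-1], z)

def objective(x):
    # Discretized H^2-type norm: sqrt( (1/Ns) * sum_k |H - U*Q|^2 ).
    r = H - U * Q_on_circle(x)
    return np.sqrt(np.mean(np.abs(r) ** 2))

x0, x1 = np.zeros(m + 1), np.ones(m + 1)
mid = objective(0.5 * (x0 + x1))
avg = 0.5 * (objective(x0) + objective(x1))
print(mid <= avg + 1e-12)  # midpoint convexity along a segment
```

Since the residual is affine in $x$ and a norm of an affine map is convex, the midpoint test always passes; any finite sampling of the circle preserves this convexity.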
The multiobjective $H^2/H^\infty$ control problem has received considerable attention in the control community, e.g., [8, 5, 9, 4, 10, 11, 12, 13], where several relaxations and approximation methods of bounds have been proposed. Ideas similar to those used here, based on finite dimensional approximations obtained by truncating the Youla parameter $Q$, the system impulse response, and/or infinite horizons, were proposed in [9, 4, 13, 10]. However, these approximations may lead to a controller that is not feasible for the true closed loop system, and to very large optimization problems. An improvement was suggested in [4] by formulating the problem as a semidefinite programming (SDP) problem, using the linear matrix inequality (LMI) formulation of the $H^2$ and $H^\infty$ norms [14, 5, 11, 12]. The drawback of this approach is that there is no analysis of the rate of convergence to the optimum, or of the degree of suboptimality. Moreover, the LMI formulation introduces a large number of auxiliary variables [4, 11, 12].
2 Banach Space Duality Theory

Expression (1) is equivalent to finding the shortest distance from a vector to a Banach subspace, defined as follows. Let $B$ be the Banach space consisting of the Cartesian product $\prod_{i=1}^{N} L^2(\mathbb{C}^{n_i\times m_i}) \times \prod_{i=N+1}^{L} L^\infty(\mathbb{C}^{n_i\times m_i})$ of $L$ matrix-valued $(n_i\times m_i)$ functions on the unit disk, endowed with the following norm:

\[
\|F\|_B := \sum_{i=1}^{N} \|F_i\|_2 + \sum_{i=N+1}^{L} \|F_i\|_\infty, \qquad F = \begin{pmatrix} F_1 \\ \vdots \\ F_L \end{pmatrix} \tag{3}
\]

where $\|\cdot\|_2$ and $\|\cdot\|_\infty$ denote the $H^2$ and $H^\infty$ norms respectively, that is:

\[
\|F_i\|_2 = \sqrt{\int_0^{2\pi} \mathrm{Tr}\big[F_i(e^{i\theta})^* F_i(e^{i\theta})\big]\, dm} \tag{4}
\]

where $dm$ represents the normalized Lebesgue measure defined on the unit circle, and

\[
\|F_i\|_\infty = \operatorname*{ess\,sup}_{\theta\in[0,2\pi)} |F_i(e^{i\theta})| \tag{5}
\]

where $|\cdot|$ denotes the largest singular value. Then (1) is equivalent to:

\[
\mu_o = \inf_{Q\in H^\infty(\mathbb{C}^{p\times q})} \left\| \begin{pmatrix} H_1 \\ \vdots \\ H_L \end{pmatrix} - \begin{pmatrix} U_1 \\ \vdots \\ U_L \end{pmatrix} Q \begin{pmatrix} V_1 \\ \vdots \\ V_L \end{pmatrix} \right\|_B \tag{6}
\]

By pre-multiplying the terms in (6) by $U_{1i}^*, \ldots, U_{Li}^*$, the adjoints of the inner factors of $U_1, \ldots, U_L$ respectively, and post-multiplying by $V_{1ci}^*, \ldots, V_{Lci}^*$, the adjoints of the co-inner factors of $V_1, \ldots, V_L$ respectively, the $H^2$ and $H^\infty$ norms are preserved, and (6) is equivalent to:

\[
\mu_o = \inf_{Q\in H^\infty(\mathbb{C}^{p\times q})} \left\| \begin{pmatrix} U_{1i}^*H_1V_{1ci}^* \\ \vdots \\ U_{Li}^*H_LV_{Lci}^* \end{pmatrix} - \begin{pmatrix} U_{1o} \\ \vdots \\ U_{Lo} \end{pmatrix} Q \begin{pmatrix} V_{1co} \\ \vdots \\ V_{Lco} \end{pmatrix} \right\|_B \tag{7}
\]

Expression (7) is the distance from $(U_{1i}^*H_1V_{1ci}^*, \ldots, U_{Li}^*H_LV_{Lci}^*)^T$ to the subspace

\[
S := \begin{pmatrix} U_{1o} \\ \vdots \\ U_{Lo} \end{pmatrix} H^\infty(\mathbb{C}^{p\times q}) \begin{pmatrix} V_{1co} \\ \vdots \\ V_{Lco} \end{pmatrix}
\]

of $B$. To ensure closedness of $S$ the following assumption will be made throughout:

(A1) there exists $\delta > 0$ such that $\sum_{i=1}^{L} U_{io}^*U_{io} \ge \delta$ and $\sum_{i=1}^{L} V_{ico}V_{ico}^* \ge \delta$, for all $\theta \in [0, 2\pi)$.

Under assumption (A1) there exist an outer spectral factor $\Lambda_1$ and a co-outer spectral factor $\Lambda_2$ for $\sum_{i=1}^{L} U_{io}^*U_{io}$ and $\sum_{i=1}^{L} V_{ico}V_{ico}^*$, respectively, i.e.,

\[
\sum_{i=1}^{L} U_{io}^*U_{io} = \Lambda_1^*\Lambda_1, \qquad \sum_{i=1}^{L} V_{ico}V_{ico}^* = \Lambda_2\Lambda_2^*
\]
The subspace $S$ then has the following equivalent description. Let $R := (R_1, \ldots, R_L)^T \in \prod_{i=1}^{L} H^\infty(\mathbb{C}^{m_i\times p})$ be the outer isometry whose range coincides with the range of $(U_{1o}, \ldots, U_{Lo})^T$ [15]. More explicitly, $R$ has the form

\[
R = \begin{pmatrix} U_{1o}\Lambda_1^{-1} \\ \vdots \\ U_{Lo}\Lambda_1^{-1} \end{pmatrix}
\]

so that $R^*R = I$, $m$-a.e. Likewise, let $T := (T_1, \ldots, T_L) \in \prod_{i=1}^{L} H^\infty(\mathbb{C}^{q\times n_i})$ be the co-outer isometry whose range coincides with the range of $(V_{1co}, \ldots, V_{Lco})$. More explicitly, $T$ has the form

\[
T = \begin{pmatrix} \Lambda_2^{-1}V_{1co} \\ \vdots \\ \Lambda_2^{-1}V_{Lco} \end{pmatrix}
\]

so that $TT^* = I$, $m$-a.e. Then $S = R\, H^\infty(\mathbb{C}^{p\times q})\, T$.
Define the Banach space $L^1(\mathbb{C}^{n_i\times m_i})$ as the space of $\mathbb{C}^{n_i\times m_i}$-valued functions, under the norm:

\[
\|F\|_1 := \int_0^{2\pi} STr\, F(e^{i\theta})\, dm, \qquad F \in L^1(\mathbb{C}^{n_i\times m_i}) \tag{8}
\]

where $STr$ denotes the trace-class norm, i.e.,

\[
STr\big(F(e^{i\theta})\big) := \mathrm{Trace}\big((F^*F)^{\frac12}(e^{i\theta})\big) = \sum_i \sigma_i\big(F(e^{i\theta})\big) \tag{9}
\]

where $\sigma_i(F(e^{i\theta}))$ are the singular values of $F(e^{i\theta})$. Let $B_*$ be the Banach space $\prod_{i=1}^{N} L^2(\mathbb{C}^{n_i\times m_i}) \times \prod_{i=N+1}^{L} L^1(\mathbb{C}^{n_i\times m_i})$ under the norm:

\[
\|F\|_{B_*} := \max\big(\|F_1\|_2, \ldots, \|F_N\|_2, \|F_{N+1}\|_1, \ldots, \|F_L\|_1\big) \tag{10}
\]

where $F = (F_1, \ldots, F_L)^T \in B_*$. Let ${}^\perp S$ be the subspace of $B_*$ defined by:

\[
{}^\perp S := (I - RR^*)B_*T^* \oplus R\,\bar{H}_o^1(\mathbb{C}^{p\times q})\,T \tag{11}
\]

where $\oplus$ denotes the direct sum of subspaces. $H_o^1(\mathbb{C}^{p\times q})$ is the subspace of $L^1(\mathbb{C}^{p\times q})$ consisting of $\mathbb{C}^{p\times q}$-valued functions, analytic in the unit disk, and satisfying:

\[
f \in H_o^1(\mathbb{C}^{p\times q}), \qquad \int_0^{2\pi} f(e^{i\theta})\, dm = 0 \tag{12}
\]

The space $\bar{H}_o^1(\mathbb{C}^{p\times q})$ is simply the subspace obtained by taking the complex conjugate of all functions in $H_o^1(\mathbb{C}^{p\times q})$.
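As a quick numerical illustration of (9), the trace-class norm of a matrix can be computed either from its singular values or as $\mathrm{Trace}((F^*F)^{1/2})$; the random matrix below is assumed for illustration only:

```python
import numpy as np

# The trace-class norm STr(F) in (9) is the sum of the singular values of F,
# which also equals Trace((F^*F)^{1/2}).
rng = np.random.default_rng(0)
F = rng.standard_normal((4, 3)) + 1j * rng.standard_normal((4, 3))

nuclear = np.linalg.svd(F, compute_uv=False).sum()   # sum_i sigma_i(F)
w = np.linalg.eigvalsh(F.conj().T @ F)               # eigenvalues of F^*F
via_trace = np.sqrt(np.clip(w, 0.0, None)).sum()     # Trace((F^*F)^{1/2})
print(np.isclose(nuclear, via_trace))
```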
It turns out that $B$ is the dual space of $B_*$, and ${}^\perp S$ is the pre-orthogonal of $S$. Moreover the following holds [1]:

\[
\mu_o = \inf_{Q\in H^\infty(\mathbb{C}^{p\times q})} \left\| \begin{pmatrix} U_{1i}^*H_1V_{1ci}^* \\ \vdots \\ U_{Li}^*H_LV_{Lci}^* \end{pmatrix} - \begin{pmatrix} U_{1o} \\ \vdots \\ U_{Lo} \end{pmatrix} Q \begin{pmatrix} V_{1co} \\ \vdots \\ V_{Lco} \end{pmatrix} \right\|_B
= \sup_{\substack{F\in{}^\perp S \\ \|F\|_{B_*}\le 1}} \int_0^{2\pi} \mathrm{Trace}\Big\{\big((U_{1i}^*H_1V_{1ci}^*)^*, \ldots, (U_{Li}^*H_LV_{Lci}^*)^*\big)F\Big\}(e^{i\theta})\, dm \tag{13}
\]

For simplicity, the numerical solutions, which are based on convex programming, will be summarized here for the SISO case of the two block $H^2/H^\infty$ optimization, that is, $H_i$, $U_i$, $V_i$, $i = 1, 2$, and $Q$ are scalar valued functions in $H^\infty$. For convenience, $H_i$, $i = 1, 2$, are normalized such that $\|H_1\|_2 \le 1$ and $\|H_2\|_\infty \le 1$.

In the SISO case, by "absorbing" the inner and co-inner factors into common terms, $\mu_o$ reduces to:

\[
\mu_o = \min_{Q\in H^\infty} \left\| \begin{pmatrix} U_{1i}^*H_1 - U_{1o}\Lambda_1^{-1}Q \\ U_{2i}^*H_2 - U_{2o}\Lambda_1^{-1}Q \end{pmatrix} \right\|_B
= \sup_{\substack{\|F\|_{B_*}\le 1 \\ F\in{}^\perp S}} \int_0^{2\pi} \big((U_{1i}^*H_1)^*, (U_{2i}^*H_2)^*\big)F(e^{i\theta})\, dm \tag{14}
\]

3 Approximate Representation by Convex Optimizations

3.1 The Primary Problem

To establish a mixed norm vector problem which approximates the primary optimization in (14) we use the following representation

\[
\mu_o = \min_{Q\in H^\infty} \left\| \begin{pmatrix} U_{1i}^*H_1 \\ U_{2i}^*H_2 \end{pmatrix} - \begin{pmatrix} U_{1o} \\ U_{2o} \end{pmatrix} \Lambda_1^{-1}Q \right\|_B \tag{15}
\]

If $Q$ is restricted to lie in the subspace $P_m$ of analytic trigonometric polynomials of degree $m$, the resulting optimum is defined to be

\[
\mu_o^m = \inf_{Q\in P_m} \left\| \begin{pmatrix} U_{1i}^*H_1 - U_{1o}\Lambda_1^{-1}Q \\ U_{2i}^*H_2 - U_{2o}\Lambda_1^{-1}Q \end{pmatrix} \right\|_B \tag{16}
\]

If $\left\|\binom{H_1}{H_2}\right\|_B \le 1$, then $\mu_o^m \le 1$. Therefore $Q$ can further be restricted to those polynomials in $P_m$ which satisfy the inequality (17) below, without affecting the infimum in (16):

\[
\|U_{1i}^*H_1 - U_{1o}\Lambda_1^{-1}Q\|_2 + \|U_{2i}^*H_2 - U_{2o}\Lambda_1^{-1}Q\|_\infty \le 1 \tag{17}
\]

If (17) holds then we have that

\[
\big|(U_{2i}^*H_2 - U_{2o}\Lambda_1^{-1}Q)(e^{i\theta})\big| \le 1 \ \ m\text{-a.e.}
\ \Longrightarrow\ \|Q\|_\infty \le \|\Lambda_1 U_{2o}^{-1}\|_\infty + \|\Lambda_1 H_2 U_{2o}^{-1}\|_\infty \tag{18}
\]

Since $Q$ is a degree $m$ polynomial satisfying (18), by Bernstein's theorem [16] we conclude that $\|Q'\|_\infty \le m\big(\|\Lambda_1 U_{2o}^{-1}\|_\infty + \|\Lambda_1 H_2 U_{2o}^{-1}\|_\infty\big)$. Thus, to find $\mu_o^m$ it suffices to search over the subset of $P_m$ defined by (18). Moreover, the derivatives of all such $Q$ are uniformly bounded above by $m\big(\|\Lambda_1 U_{2o}^{-1}\|_\infty + \|\Lambda_1 H_2 U_{2o}^{-1}\|_\infty\big)$. This second point is important because the uniform boundedness of this derivative, coupled with the assumption that $H_2$, $U_{2o}$, $U_{2i}^*H_2$ are uniformly Lipschitz, allows computation of (16) based on inspection of a sufficiently fine partition of the unit circle. In particular, for each $Q \in P_m$ which satisfies (18), the function of $\theta$ described by

\[
\sum_{i=1}^{2} \big|(U_{ii}^*H_i - U_{io}\Lambda_1^{-1}Q)(e^{i\theta})\big| \tag{19}
\]

is uniformly Lipschitz in $\theta$, with Lipschitz constant bounded above by

\[
L_t := \sum_{i=1}^{2} \Big[ \mathrm{Lip}(U_{ii}^*H_i) + M_Q\,\mathrm{Lip}(U_{io}\Lambda_1^{-1}) + \|U_{io}\Lambda_1^{-1}\|_\infty\, m M_Q \Big]
\]

where $\mathrm{Lip}(\cdot)$ denotes the least upper bound of all uniform Lipschitz constants of its argument, and

\[
M_Q := \|\Lambda_1 U_{2o}^{-1}\|_\infty + \|\Lambda_1 H_2 U_{2o}^{-1}\|_\infty \tag{20}
\]

For a fixed $\epsilon > 0$, define $N_s := \mathrm{int}\!\left(\frac{2L_t}{\epsilon}\right)$, where $\mathrm{int}(\cdot)$ denotes the smallest integer greater than or equal to its argument, and $\theta_k := \frac{2\pi k}{N_s}$ for integers $k \in [1, N_s]$. Since the function (19) is uniformly Lipschitz continuous, and $Q$ is uniformly bounded, then

\[
\|U_{1i}^*H_1 - U_{1o}\Lambda_1^{-1}Q\|_2 + \|U_{2i}^*H_2 - U_{2o}\Lambda_1^{-1}Q\|_\infty \le
\left[ \left( \frac{1}{N_s} \sum_{k=1}^{N_s} \big|(U_{1i}^*H_1 - U_{1o}\Lambda_1^{-1}Q)(e^{i\theta_k})\big|^2 \right)^{\frac12}
+ \max_{k\in[1,N_s]} \big|(U_{2i}^*H_2 - U_{2o}\Lambda_1^{-1}Q)(e^{i\theta_k})\big| \right] + \epsilon \tag{21}
\]

The uniform partition of the unit circle may be conservative, in the sense that non-uniform partitions with fewer points may be found for which (21) is also satisfied. This would result in fewer computations and less memory usage. The uniform case is considered here for the sake of clarity.
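The grid-size rule behind (21) — choose the number of samples proportional to the Lipschitz constant divided by the tolerance — can be sanity-checked on a toy polynomial. The coefficients below are assumptions of this sketch, and the $2\pi$ factor accounts for parametrizing the circle by $\theta$:

```python
import numpy as np

# For a degree-m polynomial Q with |Q| <= M on the circle, Bernstein's
# theorem gives |Q'| <= m*M, so |Q(e^{i a}) - Q(e^{i b})| <= m*M*|a - b|.
# A uniform grid with Ns = ceil(2*pi*L/eps) points then pins down the sup
# norm to within eps.
m = 6
x = np.array([-0.02, -0.03, -0.02, -0.02, -0.015, -0.009, -0.005])  # toy coeffs
M = np.abs(x).sum()                      # |Q| <= M on the unit circle
L = m * M                                # Lipschitz bound via Bernstein
eps = 1e-3
Ns = int(np.ceil(2 * np.pi * L / eps))   # grid fine enough for accuracy eps

def sup_on_grid(n):
    th = 2 * np.pi * np.arange(n) / n
    return np.abs(np.polyval(x[::-1], np.exp(1j * th))).max()

coarse, fine = sup_on_grid(Ns), sup_on_grid(8 * Ns)
print(fine - coarse <= eps)  # grid max is eps-close to (a proxy for) the sup
```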
The optimization represented by $\mu_o^m$ can now be reduced to a finite dimensional convex problem, with an error smaller than $\epsilon$, in the following manner. Let $x \in \mathbb{R}^{m+1}$ be the vector of coefficients of $Q$ in increasing powers. Define $S_m$, the subset of $\mathbb{R}^{m+1}$, to be the convex set of all coefficient vectors $x$ of those $Q \in P_m$ which satisfy (18). The computation of $\mu_o^m$ is then, within accuracy $\epsilon$, approximately equivalent to

\[
\mu_1^m := \inf_{x\in S_m} \left[ \left( \frac{1}{N_s} \sum_{k=1}^{N_s} \Big| U_{1i}^*H_1(e^{i\theta_k}) - U_{1o}\Lambda_1^{-1}(e^{i\theta_k}) \sum_{r=0}^{m} x_r e^{ir\theta_k} \Big|^2 \right)^{\frac12}
+ \max_{k\in[1,N_s]} \Big| U_{2i}^*H_2(e^{i\theta_k}) - U_{2o}\Lambda_1^{-1}(e^{i\theta_k}) \sum_{r=0}^{m} x_r e^{ir\theta_k} \Big| \right] \tag{22}
\]

Expression (22) can be expressed in the following vector form

\[
\mu_1^m = \inf_{x\in S_m} \frac{1}{\sqrt{N_s}} \|b^1 - A^1 x\|_{l^2_{N_s}} + \|b^2 - A^2 x\|_{l^\infty_{N_s}} \tag{23}
\]

where $b^j$ is the $N_s$-dimensional column vector defined by $[b^j]_k = U_{ji}^*H_j(e^{i\theta_k})$, $A^j$ is the $N_s\times(m+1)$ matrix defined by $[A^j]_{r,s} = U_{jo}\Lambda_1^{-1}(e^{i\theta_r})\, e^{i(s-1)\theta_r}$, for $j = 1, 2$; $l^2_{N_s}$ is the standard $N_s$-dimensional Euclidean space, and $l^\infty_{N_s}$ is the $N_s$-dimensional vector space equipped with the uniform norm.

3.2 The Dual Problem

As pointed out previously, the dual solution exploits the predual Banach space representation; that is, for SISO closed loops

\[
\mu_o = \sup_{\substack{\|F\|_{B_*}\le 1 \\ F\in{}^\perp S}} \int_0^{2\pi} \big((U_{1i}^*H_1)^*, (U_{2i}^*H_2)^*\big)F(e^{i\theta})\, dm \tag{24}
\]

The preorthogonal ${}^\perp S$ in the SISO case takes the following form

\[
{}^\perp S = \big\{ (I - RR^*)X + RY \ :\ X \in B_*,\ Y \in \bar{H}_o^1 \big\} \tag{25}
\]

where $X = \binom{X_1}{X_2}$ is restricted to lie in the subspace of $B_*$ consisting of trigonometric polynomials of the form $a_{-\frac m2}z^{-\frac m2} + \cdots + a_0 + a_1z + \cdots + a_{\frac m2}z^{\frac m2}$, where each coefficient is an $L\times 1$ column vector; this subspace is denoted by $G_m$. On the other hand, $Y$ is restricted to the subspace of $\bar{H}_o^1$ consisting of $m$-dimensional anti-analytic trigonometric polynomials of the form $a_{-1}z^{-1} + \cdots + a_{-m}z^{-m}$, denoted $H_m$ ($m$ is assumed to be even). After sufficiently fine discretization of the unit circle, the resulting finite variable constrained convex optimization yields lower bounds for $\mu_o$.

The constraint that $F \in {}^\perp S$ must satisfy $\|F\|_{B_*} \le 1$ can be expressed as

\[
\|(I - RR^*)X + RY\|_{B_*} \le 1 \tag{26}
\]

and the linear functional associated with $H = \binom{H_1}{H_2}$, when evaluated at $F$, can be expressed as

\[
\Phi_H(F) = \int_0^{2\pi} \big((U_{1i}^*H_1)^*, (U_{2i}^*H_2)^*\big)\big[(I - RR^*)X + RY\big](e^{i\theta})\, dm \tag{27}
\]

For $X \in G_m$ and $Y \in H_m$, the requirement that the $B_*$ norm of every element of ${}^\perp S$ be less than or equal to unity implies that the derivatives of each $X_i$ and of $Y$ on the unit circle are uniformly bounded. The assumption that $H_i$, $U_{io}$, $U_{ii}^*H_i$, $i = 1, 2$, are uniformly Lipschitz then enables the integrals in (26) and (27) to be approximated by certain weighted sums. Each integral represents the evaluation of the functional $\Phi_H$ and the norm restriction in (26), respectively. Similar arguments as in section 3.1 show that expression (26) can be approximated by the following finite variable convex constraint

\[
\max\left\{ \left( \frac{1}{N_s} \sum_{k=1}^{N_s} \big|(R_2^*X + R_1Y)(e^{i\theta_k})\big|^2 \right)^{\frac12},\ \
\frac{1}{N_s} \sum_{k=1}^{N_s} \big|(-R_1^*X + R_2Y)(e^{i\theta_k})\big| \right\} \le 1 \tag{28}
\]

where $N_s$ is some positive integer, $X = \sum_{r=-m/2}^{m/2} x_r e^{ir\theta_k}$, and $Y = \sum_{l=-m}^{-1} y_l e^{il\theta_k}$. The resulting problem is then a finite variable convex programming problem in the $x_r$, $r = -\frac m2, \ldots, \frac m2$, and the $y_l$, $l = -m, -m+1, \ldots, -1$. Likewise, (27) is approximated by

\[
\frac{1}{N_s} \sum_{k=1}^{N_s} \Big( U_{1i}H_1^*\,(R_2^*X + R_1Y) + U_{2i}H_2^*\,(-R_1^*X + R_2Y) \Big)(e^{i\theta_k}) \tag{29}
\]
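The data assembly for discretized objectives of the type (23) can be sketched as follows; the circle samples g1, g2, u1, u2 below are illustrative stand-ins for $U_{1i}^*H_1$, $U_{2i}^*H_2$ and the outer terms, not the paper's transfer functions:

```python
import numpy as np

# Build [b^j]_k = (toy U*_{ji}H_j)(e^{i th_k}) and
# [A^j]_{k,s} = (toy U_{jo}Lam1^{-1})(e^{i th_k}) e^{i (s-1) th_k},
# then evaluate the mixed objective of (23).
m, Ns = 4, 32
th = 2 * np.pi * np.arange(1, Ns + 1) / Ns
z = np.exp(1j * th)

g1 = 1.0 / (1.0 - 0.5 * z)   # toy samples (assumed)
g2 = 0.3 * z
u1 = 1.0 + 0.2 * z
u2 = 0.8 - 0.1 * z

powers = z[:, None] ** np.arange(m + 1)     # e^{i (s-1) th_k}, s = 1..m+1
b1, b2 = g1, g2
A1, A2 = u1[:, None] * powers, u2[:, None] * powers

def J(x):
    # (1/sqrt(Ns)) ||b1 - A1 x||_2 + ||b2 - A2 x||_inf, as in (23)
    return (np.linalg.norm(b1 - A1 @ x) / np.sqrt(Ns)
            + np.abs(b2 - A2 @ x).max())

print(J(np.zeros(m + 1)) >= 0.0)
```

Since both terms are norms of affine functions of the real coefficient vector, J is convex, and any subgradient-based convex solver applies to it directly.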
If $x \in \mathbb{R}^{m+1}$ and $y \in \mathbb{R}^{m}$ are respectively chosen to be the coefficients of $X$ and $Y$ in ascending powers, then (28) can be written

\[
\max\left( \left\| [B^1\ B^2]\binom{x}{y} \right\|_{l^2_{N_s}},\ \left\| [C^1\ C^2]\binom{x}{y} \right\|_{l^1_{N_s}} \right) \le 1 \tag{30}
\]

where $l^1_{N_s}$ is the $N_s$-dimensional vector space equipped with the standard $l^1$-norm. Likewise (29) can be written

\[
E_{N_s}^T [D^1\ D^2] \binom{x}{y} \tag{31}
\]

where $E_{N_s}^T$ is an $N_s$-dimensional row vector whose entries are all 1, and

\[
[B^1]_{k,r} = \frac{R_2^*(e^{i\theta_k})}{\sqrt{N_s}}\, e^{i\theta_k(r-1-\frac m2)}, \qquad
[B^2]_{k,s} = \frac{R_1(e^{i\theta_k})}{\sqrt{N_s}}\, e^{-i\theta_k s}
\]
\[
[C^1]_{k,r} = -\frac{R_1^*(e^{i\theta_k})}{N_s}\, e^{i\theta_k(r-1-\frac m2)}, \qquad
[C^2]_{k,s} = \frac{R_2(e^{i\theta_k})}{N_s}\, e^{-i\theta_k s}
\]
\[
[D^1]_{k,r} = \frac{(U_{1i}H_1^*R_2^* - U_{2i}H_2^*R_1^*)(e^{i\theta_k})}{N_s}\, e^{i\theta_k(r-1-\frac m2)}, \qquad
[D^2]_{k,s} = \frac{(U_{1i}H_1^*R_1 + U_{2i}H_2^*R_2)(e^{i\theta_k})}{N_s}\, e^{-i\theta_k s}
\]

where $k$ is an integer between 1 and $N_s$, $r$ is an integer between 1 and $m+1$, and $s$ is an integer between 1 and $m$.
Thus the Euclidean vector convex optimization

\[
\sup\left\{ \mathrm{Re}\Big( E_{N_s}^T [D^1\ D^2] \binom{x}{y} \Big) \ :\ \binom{x}{y} \in \mathbb{R}^{2m+1},\
\max\left( \left\|[B^1\ B^2]\binom{x}{y}\right\|_{l^2_{N_s}}, \left\|[C^1\ C^2]\binom{x}{y}\right\|_{l^1_{N_s}} \right) \le 1 \right\} \tag{32}
\]

is a lower bound for $\mu_o$ for the case $N = 1$, $L = 2$.

4 The Ellipsoid Algorithm

Both optimizations of the type (23) and (32) are standard applications of convex programming techniques [7]. We will outline here the Ellipsoid algorithm, whose implementation is discussed in detail in [7]. The only non-standard feature of (23) is that the convex set $S_m$ has no simple explicit description in the coefficient space $\mathbb{R}^{m+1}$. One way to deal with this difficulty is to take the starting ellipsoid in the algorithm to contain $S_m$ and then treat (23) as an unconstrained optimization. This is justifiable as long as the final $x$ obtained for (23) lies in the set $S_m$, which can be tested upon termination of the algorithm. We have found this to hold in the numerical implementations that have been tried. If the final $x$ does not lie in $S_m$, then an alternative to the development here should be used, in which the set $S_m$ is replaced by a Euclidean sphere, and new derivative bounds are established for $Q$. This may result in larger upper bounds for $Q$, and necessitate a finer partition of the unit circle.

The implementation of the Ellipsoid algorithm requires the computation of subgradients of the objective convex functionals (23) and (32), and of the constraint functional (30) for the dual problem.

Let us now summarize the derivation of the appropriate subgradients. In order to find a subgradient of the functional represented by the unconstrained form of (23), that is

\[
\mu_1^m = \inf_{x\in S_m} \frac{1}{\sqrt{N_s}}\|b^1 - A^1x\|_{l^2_{N_s}} + \|b^2 - A^2x\|_{l^\infty_{N_s}}
\]

let $\phi_k$ be the convex functional on $\mathbb{R}^{m+1}$ defined by

\[
\phi_k(x) := |e_k^T(b^2 - A^2x)| = \max_{\theta\in[0,2\pi)} \mathrm{Re}\big[ e^{i\theta}\, e_k^T(b^2 - A^2x) \big] \tag{33}
\]

The subgradient set for $\phi_k$ at $\tilde{x}$ (where $\phi_k(\tilde{x}) \ne 0$) contains the gradient of the convex functional defined by

\[
x \mapsto \mathrm{Re}\left[ \overline{\left( \frac{e_k^T(b^2 - A^2\tilde{x})}{|e_k^T(b^2 - A^2\tilde{x})|} \right)}\, e_k^T(b^2 - A^2x) \right] \tag{34}
\]

Thus a subgradient of the functional $\phi_k$ at $x = \tilde{x}$, when $\phi_k(\tilde{x}) \ne 0$, is

\[
-\mathrm{Re}\left[ \overline{\left( \frac{e_k^T(b^2 - A^2\tilde{x})}{|e_k^T(b^2 - A^2\tilde{x})|} \right)}\, (A^2)^T e_k \right] \tag{35}
\]

On the other hand, if $\phi_k(\tilde{x}) = 0$, then $\tilde{x}$ is a global minimum for $\phi_k$ and 0 is a subgradient. Thus the subgradient set at $\tilde{x}$ for the functional obtained by taking the maximum of $\phi_k$ over all integers $k$ between 1 and $N_s$ contains the element

\[
-\mathrm{Re}\left[ \overline{\left( \frac{e_{k_o}^T(b^2 - A^2\tilde{x})}{|e_{k_o}^T(b^2 - A^2\tilde{x})|} \right)}\, (A^2)^T e_{k_o} \right] \tag{36}
\]

where $k_o := \arg\max_{1\le k\le N_s} |e_k^T(b^2 - A^2\tilde{x})|$. This gives the subgradient associated with $\|b^2 - A^2x\|_{l^\infty_{N_s}}$.
For the subgradient associated with $\frac{1}{\sqrt{N_s}}\|b^1 - A^1x\|_{l^2_{N_s}}$, it suffices to differentiate with respect to $x$; this yields

\[
\frac{\mathrm{Re}\big[ ((A^1)^*A^1)^T\tilde{x} - (A^1)^T\bar{b}^1 \big]}{\sqrt{N_s}\,\|b^1 - A^1\tilde{x}\|_{l^2_{N_s}}} \tag{37}
\]

The subgradient corresponding to expression (23) at $\tilde{x}$ is then the sum of (37) and (36):

\[
\frac{\mathrm{Re}\big[ ((A^1)^*A^1)^T\tilde{x} - (A^1)^T\bar{b}^1 \big]}{\sqrt{N_s}\,\|b^1 - A^1\tilde{x}\|_{l^2_{N_s}}}
- \mathrm{Re}\left[ \overline{\left( \frac{e_{k_o}^T(b^2 - A^2\tilde{x})}{|e_{k_o}^T(b^2 - A^2\tilde{x})|} \right)}\, (A^2)^T e_{k_o} \right] \tag{38}
\]

Define a convex functional $\psi$ on $\mathbb{R}^{2m+1}$ by

\[
\psi(z) := \mathrm{Re}\big( E_{N_s}^T [D^1\ D^2]\, z \big) \tag{39}
\]

The subgradient set for $\psi$ contains the element

\[
\mathrm{Re}\big( [D^1\ D^2]^T E_{N_s} \big) \tag{40}
\]

Thus (40) gives a subgradient for the functional defined by (31). In order to obtain a subgradient for the constraint functional (30), consider its first term

\[
f_1(z) := \big\| [B^1\ B^2] z \big\|_{l^2_{N_s}} \tag{41}
\]

The subgradient of (41) at $z$ is obtained by differentiating with respect to $z$:

\[
\gamma_1 := \mathrm{Re}\big( ([B^1\ B^2]^*[B^1\ B^2])^T z \big)\, \big\| [B^1\ B^2] z \big\|_{l^2_{N_s}}^{-1} \tag{42}
\]

For the second term

\[
f_2(z) = \big\| [C^1\ C^2] z \big\|_{l^1_{N_s}} \tag{43}
\]

define the convex functional $\psi_k$ on $\mathbb{R}^{2m+1}$,

\[
\psi_k(z) := |e_k^T [C^1\ C^2] z| = \max_{\theta\in[0,2\pi)} \mathrm{Re}\big( e^{i\theta}\, e_k^T [C^1\ C^2] z \big) \tag{44}
\]

The subgradient set for $\psi_k$ at $z \in \mathbb{R}^{2m+1}$ which satisfies $\psi_k(z) \ne 0$ contains the vector

\[
g_k := \mathrm{Re}\left( \overline{\left( \frac{e_k^T [C^1\ C^2] z}{|e_k^T [C^1\ C^2] z|} \right)}\, [C^1\ C^2]^{*T} e_k \right) \tag{45}
\]

Thus a subgradient of the convex functional (43) is

\[
\gamma_2 := \sum_{k=1}^{N_s} g_k \tag{46}
\]

Therefore a subgradient for the total constraint functional (30) is obtained from the active term:

\[
n := \arg\max_{j=1,2} f_j(z) \tag{47}
\]
\[
\gamma := \gamma_n \tag{48}
\]
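The ellipsoid iteration with the non-heuristic stopping rule described above can be sketched as follows, applied to a toy $\ell^\infty$ residual problem (the data, sizes, and radius are assumptions of this sketch). The quantity $\sqrt{g^TPg}$ upper-bounds the optimality gap at the ellipsoid center, so termination certifies $\epsilon$-optimality:

```python
import numpy as np

def subgrad(A, b, x):
    # subgradient of f(x) = ||b - A x||_inf at x (real data)
    r = b - A @ x
    k = int(np.argmax(np.abs(r)))
    return -np.sign(r[k]) * A[k]

def ellipsoid(A, b, x0, radius, eps, max_iter=20000):
    # Central-cut ellipsoid method: {x : (x-c)^T P^{-1} (x-c) <= 1} always
    # contains the minimizer; sqrt(g^T P g) bounds f(c) - f*, so stopping
    # when it drops below eps certifies eps-optimality (non-heuristic).
    n = x0.size
    c, P = x0.astype(float).copy(), (radius ** 2) * np.eye(n)
    best = np.abs(b - A @ c).max()
    for _ in range(max_iter):
        g = subgrad(A, b, c)
        h2 = g @ P @ g
        if h2 <= eps * eps:          # f(c) - f* <= sqrt(h2) <= eps
            break
        gn = g / np.sqrt(h2)         # normalize so gn^T P gn = 1
        Pg = P @ gn
        c = c - Pg / (n + 1)
        P = (n * n / (n * n - 1.0)) * (P - (2.0 / (n + 1)) * np.outer(Pg, Pg))
        best = min(best, np.abs(b - A @ c).max())
    return c, best

# Toy instance (assumed): consistent system, so the minimum value is 0.
rng = np.random.default_rng(1)
A = rng.standard_normal((20, 3))
b = A @ np.array([1.0, -2.0, 0.5])
x, val = ellipsoid(A, b, np.zeros(3), radius=10.0, eps=1e-4)
print(val <= 1e-3)
```

Each iteration costs one subgradient evaluation and a rank-one update of P, which matches the simplicity and polynomial running time claimed for the method.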
5 A Numerical Example

In this section the method developed in this paper is applied to the plant

\[
P = \frac{1.9 - 0.1s}{(1+s)(1.9+0.1s)} \tag{49}
\]

with weights

\[
H_1 = 0.17\,\frac{0.8+0.2s}{1+s}, \qquad H_2 = 0
\]
\[
U_{1o} = 0.17\,\frac{0.8+0.2s}{1+s}, \qquad U_{2o} = 0.22\,\frac{0.1+2.1s}{1+s}
\]

The mixed objective for this example is given by

\[
\|H_1 - U_{1o}PQ\|_2 + \|U_{2o}PQ\|_\infty
\]

where the first term represents performance, while the second term represents robust stability.

The lower bound obtained is 0.41, and is generated by a convex program for the finite variable problem ($m = 6$) resulting from the optimization in (32) for an accuracy $\epsilon = 0.01$. The upper bound obtained is 0.44, and is produced by a convex program for the finite variable problem ($m = 6$) resulting from (23) for an accuracy $\epsilon = 0.01$. These results are generated by an Ellipsoid algorithm for the finite variable approximations to the primary and dual problems. The optimal 6th order polynomial $Q$ obtained is

\[
Q(z) = -0.0162 - 0.0256z - 0.0243z^2 - 0.0192z^3 - 0.0153z^4 - 0.0090z^5 - 0.0051z^6
\]

The Ellipsoid algorithm has the advantage of simplicity and polynomial execution time. However, better results can be achieved by faster interior point methods such as the method of analytic centers.
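A rough re-evaluation of the example's mixed objective on a frequency grid can be sketched as below. The bilinear map $z \mapsto s = (1-z)/(1+z)$ linking the discrete-time $Q$ to the continuous-time data is an assumption of this sketch (the paper does not spell out its domain mapping), so the computed value is only indicative and need not reproduce the reported bounds:

```python
import numpy as np

# Evaluate ||H1 - U1o*P*Q||_2 + ||U2o*P*Q||_inf on a circle grid, mapping
# each grid point to the s-domain via an assumed bilinear transform.
th = 2 * np.pi * (np.arange(512) + 0.5) / 512   # offset grid avoids z = -1
z = np.exp(1j * th)
s = (1 - z) / (1 + z)

P = (1.9 - 0.1 * s) / ((1 + s) * (1.9 + 0.1 * s))
H1 = 0.17 * (0.8 + 0.2 * s) / (1 + s)
U1o = 0.17 * (0.8 + 0.2 * s) / (1 + s)
U2o = 0.22 * (0.1 + 2.1 * s) / (1 + s)

q = -np.array([0.0162, 0.0256, 0.0243, 0.0192, 0.0153, 0.0090, 0.0051])
Q = np.polyval(q[::-1], z)                      # Q(z) on the unit circle

J = (np.sqrt(np.mean(np.abs(H1 - U1o * P * Q) ** 2))
     + np.abs(U2o * P * Q).max())
print(np.isfinite(J) and J > 0)
```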
References

[1] S.M. Djouadi, C.D. Charalambous, and D.W. Repperger. On multiobjective $H^2/H^\infty$ optimal control. Proceedings of the American Control Conference, 4:4091-4096, June 2001.
[2] A. Ghulchak. Dual problem in multi-objective $H_p$ control. Proceedings of the European Control Conference, Porto, Portugal, September 2001.
[3] S.P. Boyd and C.H. Barratt. Linear Controller Design: Limits of Performance. Prentice Hall, 1991.
[4] H. Hindi, B. Hassibi, and S.P. Boyd. Multiobjective $H^2/H^\infty$-optimal control via finite dimensional Q-parametrization and linear matrix inequalities. Proceedings of the ACC, 5:3244-3249, 1998.
[5] C.W. Scherer. Multiobjective $H^2/H^\infty$ control. IEEE Transactions on Automatic Control, 40(6):1054-1062, 1995.
[6] N.Z. Shor. Minimization Methods for Non-Differentiable Functions. Springer-Verlag, 1985.
[7] S.P. Boyd and C.H. Barratt. Linear Controller Design: Limits of Performance. Prentice Hall, New Jersey, 1991.
[8] P.P. Khargonekar and M.A. Rotea. Mixed $H^2/H^\infty$ control: a convex optimization approach. IEEE Transactions on Automatic Control, 36(7):824-837, 1991.
[9] S.P. Boyd, V. Balakrishnan, C.H. Barratt, N.M. Khraishi, X. Li, D.G. Meyer, and S.A. Norman. A new CAD method and associated architectures for linear controllers. IEEE Transactions on Automatic Control, 33:268-283, 1988.
[10] M. Sznaier and H. Rotstein. An exact solution to general 4-blocks discrete-time mixed $H^2/H^\infty$ problems via convex optimization. Proceedings of the ACC, 2:2251-2256, 1994.
[11] C.W. Scherer, P. Gahinet, and M. Chilali. Multiobjective output-feedback control via LMI optimization. IEEE Transactions on Automatic Control, 42(7):896-911, 1997.
[12] C.W. Scherer. From mixed to multiobjective control. Proceedings of the IEEE Conference on Decision and Control, pages 3621-3626, 1999.
[13] X. Chen and J. Wen. A linear matrix inequality approach to the general mixed $H^2/H^\infty$ control problem. Proceedings of the ACC, pages 3883-3888, 1995.
[14] S.P. Boyd, L. El Ghaoui, E. Feron, and V. Balakrishnan. Linear Matrix Inequalities in System and Control Theory. SIAM, 1994.
[15] H. Helson. Lectures on Invariant Subspaces. Academic Press, New York and London, 1964.
[16] N.I. Achieser. Theory of Approximation. Dover, New York, 1992.