Institute for Advanced Management Systems Research
Department of Information Technologies
Åbo Akademi University

Fuzzy Approaches to Multiple Objective Programming - Tutorial

Robert Fullér

© 2010
October 1, 2010
rfuller@abo.fi
Table of Contents
1. Multiple objective programs
2. Application functions
3. A simple bi-objective problem
4. The efficiency of compromise solutions
1. Multiple objective programs
Consider a multiple objective program (MOP)

    max {f1(x), . . . , fk(x)},   x ∈ X,

where
• fi : Rn → R, i = 1, . . . , k, are the objective functions,
• Rk is the criterion space,
• x ∈ Rn is the decision variable,
• Rn is the decision space, and X ⊂ Rn is called the set of feasible alternatives.
The image of X in Rk, denoted by ZX, i.e. the set of feasible outcomes, is defined as

    ZX = {z ∈ Rk | zi = fi(x), i = 1, . . . , k, x ∈ X}.

Definition 1.1. An x∗ ∈ X is said to be efficient (or nondominated, or Pareto-optimal) for the MOP iff there exists no y ∈ X such that

    fi(y) ≥ fi(x∗)

for all i, with strict inequality for at least one i. The set of all Pareto-optimal solutions will be denoted by X∗.
In other words, a nondominated point is such that any other
point in ZX which increases the value of one criterion also decreases the value of at least one other criterion.
Example 1.1. Suppose that we are given a three-objective decision problem

    max {f1(x), f2(x), f3(x)},   x ∈ X,

where X = {u, y, z} is a finite set, and let

    (f1(u), f2(u), f3(u)) = (1, 3, 3)
    (f1(y), f2(y), f3(y)) = (2, 2, 3)
    (f1(z), f2(z), f3(z)) = (1, 2, 2)

be the feasible outcomes, i.e.

    ZX = {(1, 3, 3), (2, 2, 3), (1, 2, 2)}.

Then u and y are the efficient solutions: z is dominated by both u and y, while neither u nor y dominates the other.
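The dominance test of Definition 1.1 is easy to mechanize. Below is a small sketch (ours, not from the tutorial; all names are illustrative) that recovers the efficient alternatives of Example 1.1:

```python
# Pareto-dominance check on a finite outcome set (illustrative sketch).

def dominates(a, b):
    """True if outcome vector a dominates b: a >= b in every
    coordinate, with a > b in at least one coordinate."""
    return all(ai >= bi for ai, bi in zip(a, b)) and \
           any(ai > bi for ai, bi in zip(a, b))

def efficient(outcomes):
    """Return the alternatives whose outcome is dominated by no other."""
    return [x for x, zx in outcomes.items()
            if not any(dominates(zy, zx)
                       for y, zy in outcomes.items() if y != x)]

# The outcome set of Example 1.1.
outcomes = {"u": (1, 3, 3), "y": (2, 2, 3), "z": (1, 2, 2)}
print(efficient(outcomes))  # → ['u', 'y']
```

The quadratic scan is fine for a handful of alternatives; for larger sets one would use a sorting-based Pareto-front algorithm.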
Figure 1: Segments AB and BC of ZX are Pareto-optimal.
The simplest bi-objective programming problem is

    max{x, 1 − x};  subject to x ∈ [0, 1].

Figure 2: A simple two-objective problem.

Here we have X∗ = [0, 1], i.e. each feasible alternative is an efficient solution.
Figure 3: The set of Pareto-optimal solutions is [0, 1].
A natural way to obtain initial information about the decision
problem is to optimize each criterion separately. Let x^i be a solution to

    fi(x^i) = max{fi(x) | x ∈ X},

and let

    Mi = fi(x^i)

be the optimal value of the i-th individual objective function over X. The vector

    M = (M1, M2, . . . , Mk)

is called the ideal point. The ideal point of the bi-objective problem is (1, 1), which is not attainable by any point of the decision set [0, 1].
The payoff matrix of a k-objective problem is defined as

    | M1        f1(x^2)   . . .   f1(x^k) |
    | f2(x^1)   M2        . . .   f2(x^k) |
    |   ...       ...     . . .     ...   |
    | fk(x^1)   fk(x^2)   . . .   Mk      |

Here we have used the notation Mi = fi(x^i). The payoff matrix of the bi-objective problem is

    | 1  0 |
    | 0  1 |
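The individual optimizations behind the payoff matrix can be sketched as follows; the grid search is a crude stand-in for a proper optimizer, and the helper names are ours:

```python
# Payoff matrix and ideal point of max{x, 1 - x} on [0, 1],
# via grid search over the feasible set (illustrative sketch).

f = [lambda x: x, lambda x: 1 - x]          # the two objectives
grid = [i / 1000 for i in range(1001)]      # discretized X = [0, 1]

def argmax_on_grid(fi):
    """Grid point maximizing the objective fi."""
    return max(grid, key=fi)

xs = [argmax_on_grid(fi) for fi in f]       # individual optima x^1, x^2
M = [f[i](xs[i]) for i in range(2)]         # ideal point M = (M1, M2)
payoff = [[f[i](xs[j]) for j in range(2)] for i in range(2)]

print(M)       # → [1.0, 1.0]
print(payoff)  # → [[1.0, 0.0], [0.0, 1.0]]
```

The ideal point (1, 1) sits on the diagonal of the payoff matrix, exactly as in the text.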
Let mi denote the value

    mi = min{fi(x) | x ∈ X},

i.e. mi is the worst possible value of the i-th objective. It is clear that the inequalities

    mi ≤ fi(x) ≤ Mi

hold for each x ∈ X. In the case of the bi-objective problem we have

    0 ≤ f1(x), f2(x) ≤ 1

for all x ∈ X = [0, 1].
2. Application functions

An application function hi for the MOP

    max {f1(x), . . . , fk(x)},   x ∈ X,

is a function hi : R → [0, 1], where hi(t) measures the degree of fulfillment of the decision maker's requirements on the i-th objective by the value t. Suppose that the decision maker has some preference parameters, for example
• reference points, which represent desirable levels on each criterion;
• reservation levels, which represent minimal requirements on each criterion.
If the value of an objective function (at the current point) exceeds the desirable level on this objective, then the decision maker is totally satisfied with this alternative. If, however, the value of an objective function (at the current point) is below the reservation level on this objective, then the decision maker is absolutely not satisfied with this alternative.

Consider again the bi-objective problem

    max{x, 1 − x};  subject to x ∈ [0, 1].
For example, we can introduce the following linear application functions:

    h1(t) = h2(t) =  1                    if t ≥ 0.8,
                     1 − (0.8 − t)/0.4    if 0.4 ≤ t ≤ 0.8,
                     0                    if t ≤ 0.4,

where t denotes the value attained by the objective functions. That is,

    h1(f1(x)) =  1                        if f1(x) ≥ 0.8,
                 1 − (0.8 − f1(x))/0.4    if 0.4 ≤ f1(x) ≤ 0.8,
                 0                        if f1(x) ≤ 0.4.
Introducing the notation H1(x) = h1(f1(x)), we get

    H1(x) =  1                    if x ≥ 0.8,
             1 − (0.8 − x)/0.4    if 0.4 ≤ x ≤ 0.8,
             0                    if x ≤ 0.4.

As for the application function of the second objective, we find

    h2(f2(x)) =  1                        if f2(x) ≥ 0.8,
                 1 − (0.8 − f2(x))/0.4    if 0.4 ≤ f2(x) ≤ 0.8,
                 0                        if f2(x) ≤ 0.4.

Introducing the notation

    H2(x) = h2(f2(x)) = h2(1 − x),
we get

    H2(x) =  1                          if 1 − x ≥ 0.8,
             1 − (0.8 − (1 − x))/0.4    if 0.4 ≤ 1 − x ≤ 0.8,
             0                          if 1 − x ≤ 0.4.

That is,

    H2(x) =  1                    if x ≤ 0.2,
             1 − (x − 0.2)/0.4    if 0.2 ≤ x ≤ 0.6,
             0                    if x ≥ 0.6.

These application functions can be interpreted as follows: if one can find a feasible alternative where the values of both objectives exceed 0.8, then the decision maker is completely satisfied with this solution.
0
if x ≥ 0.6
Section 2: Application functions
H2(x) = h2(1-x)
17
H1(x) = h1(x)
1
0.4
0.6
x
Figure 4: A simple two-objective problem.
These application functions can be interOn the other hand, alternatives for which the values of both obpreted as:
jectives are less than 0.4 are not qualified candidates for ’good
solutions’.
In other words, with the notation

    Hi(x) := hi(fi(x)),

the value of Hi(x) may be considered as the degree of membership of x in the fuzzy set of 'good solutions' for the i-th objective. A generally used application function is the following:

    Hi(x) = hi(fi(x)) = 1 − (Mi − fi(x))/(Mi − mi),
where Mi denotes the independent maximum and mi stands for the independent minimum of the i-th objective function over X.

Figure 5: A simple linear application function.
It is clear that if for some alternative x∗,

    fi(x∗) = Mi

(an ideal solution for the i-th objective), then Hi(x∗) = 1, since

    Hi(x∗) = 1 − (Mi − fi(x∗))/(Mi − mi) = 1 − (Mi − Mi)/(Mi − mi) = 1.
Similarly, if for some alternative x∗,

    fi(x∗) = mi

(an anti-ideal solution for the i-th objective), then Hi(x∗) = 0. The bigger the value of the objective, the bigger the satisfaction of the decision maker.

A 'good compromise solution' to the MOP may then be defined as an x ∈ X being 'as good as possible' for the whole set of objectives. Taking into consideration the nature of Hi, it is quite reasonable to look for such a solution by
means of the following auxiliary problem:

    max {H1(x), . . . , Hk(x)},   x ∈ X.

Since max{H1(x), . . . , Hk(x)} may be interpreted as a synthetic notation of the conjunctive statement "maximize jointly all objectives", and Hi(x) ∈ [0, 1], it is reasonable to use a t-norm T to represent the connective AND. In this way the problem turns into the single-objective problem

    max T(H1(x), . . . , Hk(x)),   x ∈ X.
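The t-norm scalarization can be illustrated on the toy problem, taking H1(x) = x and H2(x) = 1 − x (the normalized objectives that the tutorial derives for this problem). This sketch is ours; the grid search is a crude stand-in for an optimizer:

```python
# Aggregating H1, H2 with a t-norm T and maximizing the result.

t_min = min                                    # minimum t-norm
t_luk = lambda a, b: max(a + b - 1.0, 0.0)     # Łukasiewicz t-norm

def H1(x):
    return x          # application function of f1(x) = x

def H2(x):
    return 1.0 - x    # application function of f2(x) = 1 - x

grid = [i / 1000 for i in range(1001)]         # discretized X = [0, 1]
best_min = max(grid, key=lambda x: t_min(H1(x), H2(x)))
print(best_min)  # → 0.5  (the unique max-min compromise)

# Under the Łukasiewicz t-norm, H1(x) + H2(x) - 1 = 0 for every x,
# so the aggregated objective is identically 0 and every x is optimal.
```

The two t-norms thus lead to very different compromise sets, a point the tutorial returns to below.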
There exist several ways to introduce application functions. Usually, the authors consider increasing membership functions of the form

    hi(t) =  1        if t ≥ Ri,
             vi(t)    if ri ≤ t ≤ Ri,
             0        if t ≤ ri,

where

    ri ≥ mi = min{fi(x) | x ∈ X}
denotes the reservation level, representing the minimal requirement, and

    Ri ≤ Mi = max{fi(x) | x ∈ X}

denotes the desirable level (or reference level) on the i-th objective.

Consider again the bi-objective problem

    max{x, 1 − x};  subject to x ∈ [0, 1].
Figure 6: Linear membership function, with parameters mi ≤ ri ≤ Ri ≤ Mi on the fi(x)-axis.

Using the general linear application functions we get

    H1(x) = h1(f1(x)) = 1 − (1 − x)/1 = x,
    H2(x) = h2(f2(x)) = 1 − (1 − (1 − x))/1 = 1 − x,

and choosing the minimum norm to aggregate the values of the objective functions, the resulting single-objective problem,

    max min{x, 1 − x};  subject to x ∈ [0, 1],

has a unique solution x∗ = 1/2, and the optimal values of the objective functions are (0.5, 0.5).

If, however, we used the Łukasiewicz t-norm,

    TL(a, b) = max{a + b − 1, 0},
Figure 7: The optimal value is (1/2, 1/2).

for criteria aggregation, then the solution set of the single-objective problem
    max max{x + 1 − x − 1, 0},  subject to x ∈ [0, 1],

is X∗ = [0, 1].

Alternatively, we might use non-symmetric linear application functions:

    h1(t) =  1                    if t ≥ 0.8,
             1 − (0.8 − t)/0.4    if 0.4 ≤ t ≤ 0.8,
             0                    if t ≤ 0.4,
    h2(t) =  1                    if t ≥ 0.8,
             1 − (0.8 − t)/0.6    if 0.2 ≤ t ≤ 0.8,
             0                    if t ≤ 0.2,

where t denotes the value attained by the objective functions. Choosing the minimum norm for aggregation, the resulting problem,

    max min{H1(x), H2(x)};  subject to x ∈ [0, 1],
has a unique solution x∗ = 0.56, and the attained degrees of satisfaction are

    (H1(x∗), H2(x∗)) = (0.4, 0.4).

Figure 8: The optimal value is (0.4, 0.4); the application functions H1(x) = h1(x) and H2(x) = h2(1 − x) cross at x = 0.56.
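The value x∗ = 0.56 is where the two non-symmetric membership functions cross; here is a quick numeric check (ours, not part of the tutorial):

```python
# Verify that max min{H1(x), H2(x)} over [0, 1] is attained at
# x = 0.56 with common value 0.4, for the non-symmetric functions.

def clamp01(v):
    return min(1.0, max(0.0, v))

def H1(x):
    # h1 applied to f1(x) = x, with breakpoints 0.4 and 0.8
    return clamp01(1.0 - (0.8 - x) / 0.4)

def H2(x):
    # h2 applied to f2(x) = 1 - x, with breakpoints 0.2 and 0.8
    return clamp01(1.0 - (0.8 - (1.0 - x)) / 0.6)

# maximize min(H1, H2) on a fine grid
grid = [i / 10000 for i in range(10001)]
x_star = max(grid, key=lambda x: min(H1(x), H2(x)))
print(round(x_star, 4), round(min(H1(x_star), H2(x_star)), 4))  # → 0.56 0.4
```

Solving H1(x) = H2(x) analytically gives 0.6(0.8 − x) = 0.4(x − 0.2), i.e. x = 0.56, in agreement with the grid search.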
3. A simple bi-objective problem
Consider the following simple bi-objective problem
{x1 + x2 , x1 − x2 } → max
subject to
0 ≤ x1 , x2 ≤ 1.
Its Pareto optimal solutions are
X ∗ = {(1, x2 ), x2 ∈ [0, 1]}.
Figure 9: The decision space.
Figure 10: The criterion space.
Figure 11: Explanation of the image of the decision space.
Let

    r1 = m1 = min{f1(x) = x1 + x2 | 0 ≤ x1, x2 ≤ 1} = 0

and

    r2 = m2 = min{f2(x) = x1 − x2 | 0 ≤ x1, x2 ≤ 1} = −1

be the reservation levels, and let
    R1 = M1 = max{f1(x) = x1 + x2 | 0 ≤ x1, x2 ≤ 1} = 2

and

    R2 = M2 = max{f2(x) = x1 − x2 | 0 ≤ x1, x2 ≤ 1} = 1

be the reference points for the first and the second objectives, respectively.
    {x1 + x2, x1 − x2} → max,  subject to 0 ≤ x1, x2 ≤ 1.

Then we can build the following application functions:

    h1(f1(x)) = h1(x1 + x2) = 1 − (2 − (x1 + x2))/2 = (x1 + x2)/2,
Figure 12: Pareto optimal solutions (1, x2 ), x2 ∈ [0, 1].
    h2(f2(x)) = h2(x1 − x2) = 1 − (1 − (x1 − x2))/2 = (1 + x1 − x2)/2.

Let us suppose the decision maker chooses the minimum operator to represent his evaluation of the connective and in the problem: maximize the first objective and maximize the second objective. Then the original bi-objective problem turns into the
single-objective LP:

    max min{ (x1 + x2)/2, (1 + x1 − x2)/2 }

subject to 0 ≤ x1, x2 ≤ 1.
That is,

    max λ
    subject to  (x1 + x2)/2 ≥ λ,
                (1 + x1 − x2)/2 ≥ λ,
                0 ≤ x1, x2 ≤ 1.

Its unique optimal solution is x∗1 = 1, x∗2 = 1/2 (with λ∗ = 3/4); furthermore,

    (f1(1, 1/2), f2(1, 1/2)) = (1 + 1/2, 1 − 1/2) = (3/2, 1/2)

is a Pareto-optimal solution to the original bi-objective problem.
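The max-λ linear program above is small enough to check numerically. This sketch (ours) searches a grid of the unit square instead of calling an LP solver:

```python
# Grid check of the max-min compromise for
# max min{(x1 + x2)/2, (1 + x1 - x2)/2} over the unit square.

H1 = lambda x1, x2: (x1 + x2) / 2
H2 = lambda x1, x2: (1 + x1 - x2) / 2

# all grid points of [0, 1] x [0, 1] with step 0.01
pts = [(i / 100, j / 100) for i in range(101) for j in range(101)]
x_star = max(pts, key=lambda p: min(H1(*p), H2(*p)))
print(x_star, min(H1(*x_star), H2(*x_star)))  # → (1.0, 0.5) 0.75
```

Summing the two constraints of the LP gives λ ≤ (1 + 2x1)/4 ≤ 3/4, with equality forcing x1 = 1 and x2 = 1/2, so the grid result matches the analytic optimum.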
Suppose the decision maker chooses the Łukasiewicz
t-norm, TL (a, b) = max{a + b − 1, 0}, to represent
his evaluation of the connective and.
Then the original bi-objective problem turns into the
single-objective LP,
That is,

    max TL( (x1 + x2)/2, (1 + x1 − x2)/2 ),  subject to 0 ≤ x1, x2 ≤ 1,

i.e.

    max max{x1 − 1/2, 0},  subject to 0 ≤ x1, x2 ≤ 1.

Its optimal solution set is

    {(1, x2) | x2 ∈ [0, 1]},

which is exactly X∗, the set of Pareto-optimal solutions to the original problem.
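The Łukasiewicz reduction can be checked directly; the following sketch (ours, not from the tutorial) confirms that every point (1, x2) attains the maximal aggregated value 1/2:

```python
# TL(H1, H2) = max{H1 + H2 - 1, 0} collapses to max{x1 - 1/2, 0},
# so each (1, x2) attains the maximum value 1/2.

TL = lambda a, b: max(a + b - 1.0, 0.0)    # Łukasiewicz t-norm
H1 = lambda x1, x2: (x1 + x2) / 2
H2 = lambda x1, x2: (1 + x1 - x2) / 2

for x2 in [0.0, 0.25, 0.5, 0.75, 1.0]:
    # H1 + H2 - 1 = (x1 + x2)/2 + (1 + x1 - x2)/2 - 1 = x1 - 1/2
    assert abs(TL(H1(1.0, x2), H2(1.0, x2)) - 0.5) < 1e-12

print("all points (1, x2) attain TL = 1/2")
```

Because x2 cancels in the sum H1 + H2, the whole efficient segment is optimal, unlike the single point produced by the minimum operator.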
Figure 13: Pareto optimal solutions (1, x2 ), x2 ∈ [0, 1].
4. The efficiency of compromise solutions

One of the most important questions is the efficiency of the obtained compromise solutions.

Theorem 4.1. Let x∗ be an optimal solution to

    max T(H1(x), . . . , Hk(x)),   x ∈ X,

where T is a t-norm and Hi(x) = hi(fi(x)), with hi an increasing application function that is strictly increasing on the interval [ri, Ri], for i = 1, . . . , k. Then x∗ is efficient for the problem
    max {f1(x), . . . , fk(x)},   x ∈ X,

if either
(i) x∗ is unique; or
(ii) T is strict and Hi(x∗) = hi(fi(x∗)) ∈ (0, 1) for i = 1, . . . , k.

Proof. (i) Suppose that x∗ is not efficient. Then there exists an x∗∗ ∈ X such that

    fi(x∗) ≤ fi(x∗∗)

for all i, with strict inequality for at least one i.
Consequently, from the monotonicity of T and hi we get

    T(H1(x∗), . . . , Hk(x∗)) ≤ T(H1(x∗∗), . . . , Hk(x∗∗)),

which means that x∗∗ is also an optimal solution to the auxiliary problem, so x∗ is not unique. This contradicts (i).

(ii) Suppose again that x∗ is not efficient. Then there exists an x∗∗ ∈ X such that

    fi(x∗) ≤ fi(x∗∗)

for all i, with strict inequality for at least one i.
Taking into consideration that

    Hi(x∗) = hi(fi(x∗)) ∈ (0, 1)

for all i, that T is strict, and that hi is monotone increasing, we get

    T(H1(x∗), . . . , Hk(x∗)) < T(H1(x∗∗), . . . , Hk(x∗∗)),

which contradicts the assumption that x∗ is an optimal solution to the auxiliary problem.

If we use linear application functions, then they are strictly increasing on [ri, Ri]; therefore any optimal solution x∗ to the auxiliary problem is an efficient solution to the original MOP if either
(i) x∗ is unique; or
(ii) T is strict and Hi(x∗) ∈ (0, 1), i = 1, . . . , k.
Consider the following linear bi-objective programming problem
max{2x1 + x2 , −x1 − 2x2 }
subject to
x1 + x2 ≤ 4,
3x1 + x2 ≥ 6,
x1 , x2 ≥ 0.
Figure 14: The bi-objective problem; the individual optima are at (4, 0) and (2, 0).
The first objective, 2x1 + x2, attains its maximum at the point (4, 0), with

    M1 = 2 · 4 + 0 = 8,

whereas the second one, −x1 − 2x2, attains its maximum at the point (2, 0), with

    M2 = −2 − 2 · 0 = −2.

It is easy to see that the efficient solutions lie on the segment
Figure 15: The bi-objective problem.
    {(4 − 2γ, 0) | γ ∈ [0, 1]}.

Let

    r1 = 4,   r2 = −5

be the reservation levels, and let

    R1 = 7,   R2 = −3

be the reference points for the first and the second objectives, respectively. Then we can build the following application functions:
    h1(t) =  1                  if t ≥ 7,
             1 − (7 − t)/3      if 4 ≤ t ≤ 7,
             0                  if t ≤ 4,

    h2(t) =  1                  if t ≥ −3,
             1 − (−3 − t)/2     if −5 ≤ t ≤ −3,
             0                  if t ≤ −5.
Let us suppose the decision maker chooses the minimum operator to represent his evaluation of the connective and.
Then the original bi-objective problem turns into
max min{h1 (2x1 + x2 ), h2 (−x1 − 2x2 )}
subject to
x1 + x2 ≤ 4,
3x1 + x2 ≥ 6,
x1 , x2 ≥ 0.
That is
max λ
subject to h1 (2x1 + x2 ) ≥ λ
h2 (−x1 − 2x2 ) ≥ λ
x1 + x2 ≤ 4,
3x1 + x2 ≥ 6,
x1 , x2 ≥ 0.
That is
    max λ
    subject to  1 − (7 − (2x1 + x2))/3 ≥ λ,
                1 − (−3 − (−x1 − 2x2))/2 ≥ λ,
                x1 + x2 ≤ 4,
                3x1 + x2 ≥ 6,
                x1, x2 ≥ 0,

whose unique optimal solution is

    x∗ = (23/7, 0).
This solution is also efficient, because it lies on the segment

    {(4 − 2γ, 0) | γ ∈ [0, 1]}

(with γ = 5/14). The optimal values of the objective functions are

    f1(x∗) = 46/7  and  f2(x∗) = −23/7.

This agrees with the theorem, because x∗ is the unique optimal solution.
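The compromise solution of this example can also be verified numerically. The following sketch (ours, not from the tutorial) evaluates the application functions at x∗ = (23/7, 0) and compares against a feasibility-filtered grid:

```python
# Numeric check of the compromise solution x* = (23/7, 0):
# both application-function values equal 6/7 there, and no feasible
# grid point does better under min-aggregation.

def h1(t):  # reservation level 4, reference level 7
    return min(1.0, max(0.0, 1.0 - (7.0 - t) / 3.0))

def h2(t):  # reservation level -5, reference level -3
    return min(1.0, max(0.0, 1.0 - (-3.0 - t) / 2.0))

def feasible(x1, x2):
    return x1 + x2 <= 4 and 3 * x1 + x2 >= 6 and x1 >= 0 and x2 >= 0

def score(x1, x2):
    """min-aggregated satisfaction of f1 = 2x1 + x2 and f2 = -x1 - 2x2."""
    return min(h1(2 * x1 + x2), h2(-x1 - 2 * x2))

x_star = (23 / 7, 0.0)
best = max(score(i / 50, j / 50)
           for i in range(201) for j in range(201)
           if feasible(i / 50, j / 50))

print(round(score(*x_star), 4))  # → 0.8571  (= 6/7)
assert feasible(*x_star) and best <= score(*x_star) + 1e-9
```

Setting h1(2x1 + x2) = h2(−x1 − 2x2) on the segment x2 = 0 gives 7x1 = 23, so the common satisfaction degree 6/7 at (23/7, 0) is indeed the best achievable.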
Figure 16: The bi-objective problem.