Institute for Advanced Management Systems Research
Department of Information Technologies
Faculty of Technology, Åbo Akademi University
The Past is Crisp, but the Future is Fuzzy - Tutorial
Robert Fullér
© 2010
April 25, 2010
rfuller@abo.fi
Table of Contents
1. Triangular and trapezoidal fuzzy numbers
2. Material implication
3. Fuzzy implications
4. The theory of approximate reasoning
5. Crisp and fuzzy relations
6. Simplified fuzzy reasoning schemes
7. Tsukamoto’s and Sugeno’s fuzzy reasoning scheme
8. Fuzzy programming versus goal programming
9. Multiple Objective Programs
10. Application functions for MOP problems
11. The efficiency of compromise solutions
1. Triangular and trapezoidal fuzzy numbers
A fuzzy set A of the real line R is defined by its membership function (denoted also by A) A : R → [0, 1]. If x ∈ R then A(x) is interpreted as the
degree of membership of x in A.
Figure 1: Possible membership functions for monthly "small salary" and "big salary".
If x0 is the amount of the salary then x0 belongs to fuzzy set A1 = small with degree of membership

\[
A_1(x_0) =
\begin{cases}
1 - \dfrac{x_0 - 2000}{4000} & \text{if } 2000 \le x_0 \le 6000,\\
0 & \text{otherwise,}
\end{cases}
\]

and to A2 = big with degree of membership

\[
A_2(x_0) =
\begin{cases}
1 & \text{if } x_0 \ge 6000,\\
1 - \dfrac{6000 - x_0}{4000} & \text{if } 2000 \le x_0 \le 6000,\\
0 & \text{otherwise.}
\end{cases}
\]
Definition 1.1. A fuzzy set A is called a triangular fuzzy number with peak (or center) a, left width α > 0 and right width β > 0 if its membership function has the following form

\[
A(t) =
\begin{cases}
1 - \dfrac{a - t}{\alpha} & \text{if } a - \alpha \le t \le a,\\
1 - \dfrac{t - a}{\beta} & \text{if } a \le t \le a + \beta,\\
0 & \text{otherwise,}
\end{cases}
\]

and we use the notation A = (a, α, β). It can easily be verified that

[A]γ = [a − (1 − γ)α, a + (1 − γ)β], ∀γ ∈ [0, 1].

The support of A is (a − α, a + β).

A triangular fuzzy number with center a may be seen as a fuzzy quantity "x is close to a" or "x is approximately equal to a".
If A is not a fuzzy number then there exists a γ ∈ [0, 1] such that [A]γ is not a convex subset of R.

Figure 2: A triangular fuzzy number.

Definition 1.2. A fuzzy set A is called a trapezoidal fuzzy number with tolerance interval [a, b], left width α and right width β if its membership function
has the following form

\[
A(t) =
\begin{cases}
1 - \dfrac{a - t}{\alpha} & \text{if } a - \alpha \le t \le a,\\
1 & \text{if } a \le t \le b,\\
1 - \dfrac{t - b}{\beta} & \text{if } b \le t \le b + \beta,\\
0 & \text{otherwise,}
\end{cases}
\]
and we use the notation A = (a, b, α, β). It can easily be shown that

[A]γ = [a − (1 − γ)α, b + (1 − γ)β], ∀γ ∈ [0, 1].

The support of A is (a − α, b + β).

Figure 3: Trapezoidal fuzzy number.

A trapezoidal fuzzy number may be seen as a fuzzy quantity "x is approximately in the interval [a, b]".
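To make these definitions concrete, here is a minimal Python sketch (not part of the original tutorial; the function names are illustrative) that evaluates the membership functions of A = (a, α, β) and A = (a, b, α, β):

```python
def triangular(t, a, alpha, beta):
    """Membership degree of t in the triangular fuzzy number A = (a, alpha, beta)."""
    if a - alpha <= t <= a:
        return 1 - (a - t) / alpha
    if a <= t <= a + beta:
        return 1 - (t - a) / beta
    return 0.0

def trapezoidal(t, a, b, alpha, beta):
    """Membership degree of t in the trapezoidal fuzzy number A = (a, b, alpha, beta)."""
    if a - alpha <= t <= a:
        return 1 - (a - t) / alpha
    if a <= t <= b:
        return 1.0
    if b <= t <= b + beta:
        return 1 - (t - b) / beta
    return 0.0

# "x is close to 5" and "x is approximately in [2, 4]"
print(triangular(6.0, a=5.0, alpha=2.0, beta=3.0))           # 1 - 1/3 ≈ 0.667
print(trapezoidal(4.5, a=2.0, b=4.0, alpha=1.0, beta=2.0))   # 1 - 0.5/2 = 0.75
```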
2. Material implication
Let p = 'x is in A' and q = 'y is in B' be crisp propositions, where A and B are crisp sets for the moment.

The full interpretation of the material implication p → q is that the degree of truth of p → q quantifies to what extent q is at least as true as p, i.e.

\[
\tau(p \to q) =
\begin{cases}
1 & \text{if } \tau(p) \le \tau(q)\\
0 & \text{otherwise.}
\end{cases}
\]
τ(p)   τ(q)   τ(p → q)
1      1      1
0      1      1
0      0      1
1      0      0

Table 1: Truth table for the material implication.
3. Fuzzy implications
Consider the implication statement
if pressure is high then volume is small
Figure 4: Membership function for ”big pressure”.
The membership function of the fuzzy set A, big pressure, can be interpreted
as
• 1 is in the fuzzy set big pressure with grade of membership 0
• 4 is in the fuzzy set big pressure with grade of membership 0.75
• x is in the fuzzy set big pressure with grade of membership 1, x ≥ 5
\[
A(u) =
\begin{cases}
1 & \text{if } u \ge 5,\\
1 - \dfrac{5 - u}{4} & \text{if } 1 \le u \le 5,\\
0 & \text{otherwise.}
\end{cases}
\]
The membership function of the fuzzy set B, small volume, can be interpreted as

\[
B(v) =
\begin{cases}
1 & \text{if } v \le 1,\\
1 - \dfrac{v - 1}{4} & \text{if } 1 \le v \le 5,\\
0 & \text{otherwise.}
\end{cases}
\]
Figure 5: Membership function for ”small volume”.
• 5 is in the fuzzy set small volume with grade of membership 0
• 2 is in the fuzzy set small volume with grade of membership 0.75
• x is in the fuzzy set small volume with grade of membership 1, x ≤ 1
If p is a proposition of the form 'x is A' where A is a fuzzy set, for example big pressure, and q is a proposition of the form 'y is B', for example small volume, then we define the implication p → q as

A(x) → B(y).
For example,

x is big pressure → y is small volume ≡ A(x) → B(y).

Remembering the full interpretation of the material implication

\[
p \to q =
\begin{cases}
1 & \text{if } \tau(p) \le \tau(q)\\
0 & \text{otherwise,}
\end{cases}
\]

we can use the definition

\[
A(x) \to B(y) =
\begin{cases}
1 & \text{if } A(x) \le B(y)\\
0 & \text{otherwise.}
\end{cases}
\]

For instance,

4 is big pressure → 1 is small volume = 0.75 → 1 = 1.

The most often used fuzzy implication operators are listed in the following table.
Name                          Definition
Early Zadeh                   x → y = max{1 − x, min(x, y)}
Łukasiewicz                   x → y = min{1, 1 − x + y}
Mamdani                       x → y = min{x, y}
Larsen                        x → y = xy
Standard Strict               x → y = 1 if x ≤ y, 0 otherwise
Gödel                         x → y = 1 if x ≤ y, y otherwise
Gaines                        x → y = 1 if x ≤ y, y/x otherwise
Kleene-Dienes                 x → y = max{1 − x, y}
Kleene-Dienes-Łukasiewicz     x → y = 1 − x + xy
Yager                         x → y = y^x

Table 2: Fuzzy implication operators.
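The table translates directly into code. The following sketch (illustrative, not from the tutorial) collects the operators as Python functions on truth values x, y ∈ [0, 1] and evaluates them on the pressure/volume example, where A(4) = 0.75 and B(1) = 1:

```python
implications = {
    "Early Zadeh":     lambda x, y: max(1 - x, min(x, y)),
    "Lukasiewicz":     lambda x, y: min(1.0, 1 - x + y),
    "Mamdani":         lambda x, y: min(x, y),
    "Larsen":          lambda x, y: x * y,
    "Standard Strict": lambda x, y: 1.0 if x <= y else 0.0,
    "Godel":           lambda x, y: 1.0 if x <= y else y,
    "Gaines":          lambda x, y: 1.0 if x <= y else y / x,
    "Kleene-Dienes":   lambda x, y: max(1 - x, y),
    "Kleene-Dienes-Lukasiewicz": lambda x, y: 1 - x + x * y,
    "Yager":           lambda x, y: y ** x,
}

# '4 is big pressure -> 1 is small volume', i.e. 0.75 -> 1
for name, imp in implications.items():
    print(f"{name:28s} {imp(0.75, 1.0):.2f}")
```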
4. The theory of approximate reasoning
In 1979 Zadeh introduced the theory of approximate reasoning. This theory
provides a powerful framework for reasoning in the face of imprecise and
uncertain information.
Entailment rule:

  Mary is very young
  very young ⊂ young
  ------------------------------
  Mary is young

Conjunction rule:

  pressure is not very high
  pressure is not very low
  ------------------------------
  pressure is not very high and not very low
Disjunction rule:

  pressure is not very high
  or pressure is not very low
  ------------------------------
  pressure is not very high or not very low

Projection rule:

  (x, y) is close to (3, 2)          (x, y) is close to (3, 2)
  -------------------------          -------------------------
  x is close to 3                    y is close to 2
How do we make inferences in a fuzzy environment?

ℜ1:            if pressure is BIG then volume is SMALL
observation:   pressure is 4
conclusion:    volume is ?
Figure 6: BIG(4) = SMALL(2) = 0.75.

ℜ1:            if pressure is BIG then volume is SMALL
observation:   pressure is 4
conclusion:    volume is 2
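A simple way to reproduce this inference numerically is to compute the firing level BIG(4) = 0.75 and then take as conclusion the volume at which SMALL reaches the same level; this works here because SMALL is strictly decreasing on [1, 5]. A sketch under these assumptions, using the membership functions defined above:

```python
def BIG(u):
    """Membership function of 'big pressure'."""
    if u >= 5:
        return 1.0
    return 1 - (5 - u) / 4 if 1 <= u <= 5 else 0.0

def SMALL_level(level):
    """Volume v in [1, 5] at which SMALL(v) = 1 - (v - 1)/4 equals the given level."""
    return 1 + 4 * (1 - level)

alpha = BIG(4)                     # firing level 0.75
print(alpha, SMALL_level(alpha))   # 0.75 2.0
```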
5. Crisp and fuzzy relations
A classical relation can be considered as a set of tuples, where a tuple is an ordered list of elements. A binary tuple is denoted by (u, v), an example of a ternary tuple is (u, v, w), and an example of an n-ary tuple is (x1, . . . , xn).
Definition 5.1. Let X and Y be nonempty sets. A fuzzy relation R is a fuzzy
subset of X × Y . If X = Y then we say that R is a binary fuzzy relation in
X.
Let R be a binary fuzzy relation on R. Then R(u, v) is interpreted as the
degree of membership of (u, v) in R.
Example 5.1. A simple example of a binary fuzzy relation on U = {1, 2, 3},
called ”approximately equal” can be defined as
R(1, 1) = R(2, 2) = R(3, 3) = 1,
R(1, 2) = R(2, 1) = R(2, 3) = R(3, 2) = 0.8,
R(1, 3) = R(3, 1) = 0.3.
The membership function of R is given by

\[
R(u, v) =
\begin{cases}
1 & \text{if } u = v,\\
0.8 & \text{if } |u - v| = 1,\\
0.3 & \text{if } |u - v| = 2.
\end{cases}
\]

In matrix notation it can be represented as

\[
R = \begin{pmatrix}
1 & 0.8 & 0.3\\
0.8 & 1 & 0.8\\
0.3 & 0.8 & 1
\end{pmatrix}.
\]
Fuzzy relations are very important because they can describe interactions
between variables.
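On a finite universe a binary fuzzy relation is just a matrix of membership degrees, so it can be stored and queried directly. An illustrative NumPy sketch of the "approximately equal" relation above:

```python
import numpy as np

U = [1, 2, 3]
R = np.array([[1.0, 0.8, 0.3],
              [0.8, 1.0, 0.8],
              [0.3, 0.8, 1.0]])

def degree(u, v):
    """Degree to which u and v are 'approximately equal'."""
    return R[U.index(u), U.index(v)]

print(degree(1, 3))   # 0.3
```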
6. Simplified fuzzy reasoning schemes
Suppose that we have the following rule base
ℜ1:  if x is A1 then y is z1
also
ℜ2:  if x is A2 then y is z2
     ...........................
ℜn:  if x is An then y is zn

fact:    x is x0
action:  y is z0

where A1, . . . , An are fuzzy sets.
Suppose further that our data base consists of a single fact x0. The problem is to derive z0 from the initial content of the data base, x0, and from the fuzzy rule base ℜ = {ℜ1, . . . , ℜn}.
ℜ1:  if salary is small then loan is z1
also
ℜ2:  if salary is big then loan is z2

fact:    salary is x0
action:  loan is z0
A deterministic rule base can be formed as follows
ℜ1:  if 2000 ≤ s ≤ 6000 then loan is max 1000
ℜ2:  if s ≥ 6000 then loan is max 2000
ℜ3:  if s ≤ 2000 then no loan at all
Figure 7: Discrete causal link between ”salary” and ”loan”.
The data base contains the actual salary, and then one of the rules is applied to determine the maximal loan the applicant can obtain.
In fuzzy logic everything is a matter of degree.
If x is the amount of the salary then x belongs to fuzzy set
• A1 = small with degree of membership 0 ≤ A1 (x) ≤ 1
• A2 = big with degree of membership 0 ≤ A2 (x) ≤ 1
Figure 8: Membership functions for ”small” and ”big”.
In fuzzy rule-based systems each rule fires.
The degree of match of the input to a rule (which is the firing strength) is the membership degree of the input in the fuzzy set characterizing the antecedent part of the rule.
The overall system output is the weighted average of the individual rule
outputs, where the weight of a rule is its firing strength with respect to the
input.
To illustrate this principle we consider the very simple example mentioned above:

ℜ1:  if salary is small then loan is z1
also
ℜ2:  if salary is big then loan is z2

fact:    salary is x0
action:  loan is z0
Then our reasoning system is the following:

• the input to the system is x0
• the firing level of the first rule is α1 = A1(x0)
• the firing level of the second rule is α2 = A2(x0)
• the overall system output is computed as the weighted average of the individual rule outputs

\[
z_0 = \frac{\alpha_1 z_1 + \alpha_2 z_2}{\alpha_1 + \alpha_2},
\]

that is,

\[
z_0 = \frac{A_1(x_0) z_1 + A_2(x_0) z_2}{A_1(x_0) + A_2(x_0)}.
\]
\[
A_1(x_0) =
\begin{cases}
1 - \dfrac{x_0 - 2000}{4000} & \text{if } 2000 \le x_0 \le 6000,\\
0 & \text{otherwise.}
\end{cases}
\]
Figure 9: Example of simplified fuzzy reasoning.
 1
1 − (6000 − x0 )/4000
A2 (x0 ) =

0
if x0 ≥ 6000
if 2000 ≤ x0 ≤ 6000
otherwise
It is easy to see that the relationship
A1 (x0 ) + A2 (x0 ) = 1
holds for all x0 ≥ 2000.
This means that our system output can be written in the form
z0 = α1 z1 + α2 z2 = A1(x0) z1 + A2(x0) z2, that is,

\[
z_0 = \Bigl(1 - \frac{x_0 - 2000}{4000}\Bigr) z_1 + \Bigl(1 - \frac{6000 - x_0}{4000}\Bigr) z_2
\]

if 2000 ≤ x0 ≤ 6000.
Moreover, z0 = z2 if x0 ≥ 6000, and z0 = 0 (no loan) if x0 ≤ 2000.
Figure 10: Input/output function derived from fuzzy rules.

The (linear) input/output relationship is illustrated in Figure 10.
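The whole simplified reasoning scheme for the salary/loan example fits in a few lines of Python. The sketch below (illustrative, not from the tutorial) uses the membership functions A1 (small) and A2 (big) defined above and takes z1 = 1000 and z2 = 2000 from the deterministic rule base:

```python
def small(x):   # A1, "salary is small"
    return 1 - (x - 2000) / 4000 if 2000 <= x <= 6000 else 0.0

def big(x):     # A2, "salary is big"
    if x >= 6000:
        return 1.0
    return 1 - (6000 - x) / 4000 if 2000 <= x <= 6000 else 0.0

def loan(x0, z1=1000.0, z2=2000.0):
    """Firing-strength-weighted average of the rule outputs
    (valid for x0 >= 2000, where small(x0) + big(x0) = 1)."""
    a1, a2 = small(x0), big(x0)
    return (a1 * z1 + a2 * z2) / (a1 + a2)

print(loan(4000))   # 1500.0: halfway between the two rule outputs
```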
7. Tsukamoto’s and Sugeno’s fuzzy reasoning scheme
Tsukamoto’s reasoning scheme
ℜ1:  if x is A1 and y is B1 then z is C1
also
ℜ2:  if x is A2 and y is B2 then z is C2

fact:   x is x̄0 and y is ȳ0
cons.:  z is z0
Sugeno and Takagi use the following architecture
ℜ1:  if x is A1 and y is B1 then z1 = a1x + b1y
also
ℜ2:  if x is A2 and y is B2 then z2 = a2x + b2y

fact:   x is x̄0 and y is ȳ0
cons.:  z0
Figure 11: An illustration of Tsukamoto's inference mechanism. The firing level of the first rule: α1 = min{A1(x0), B1(y0)} = min{0.7, 0.3} = 0.3. The firing level of the second rule: α2 = min{A2(x0), B2(y0)} = min{0.6, 0.8} = 0.6. The crisp inference: z0 = (8 × 0.3 + 4 × 0.6)/(0.3 + 0.6) = 4.8/0.9 ≈ 5.33.
Figure 12: Example of Sugeno’s inference mechanism. The overall system
output is computed as the firing-level-weighted average of the individual
rule outputs: z0 = (5 × 0.2 + 4 × 0.6)/(0.2 + 0.6) = 4.25.
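Both figures boil down to the same computation: take the firing levels (here obtained with the minimum operator) and form the firing-level-weighted average of the individual rule outputs. A sketch that reproduces the numbers quoted in the two captions (the membership degrees are taken directly from the captions rather than from explicit membership functions):

```python
def weighted_average(alphas, zs):
    """Firing-level-weighted average of the rule outputs."""
    return sum(a * z for a, z in zip(alphas, zs)) / sum(alphas)

# Tsukamoto (Figure 11): firing levels via min, rule outputs z1 = 8, z2 = 4
a1 = min(0.7, 0.3)   # 0.3
a2 = min(0.6, 0.8)   # 0.6
print(weighted_average([a1, a2], [8, 4]))     # 4.8/0.9 ≈ 5.33

# Sugeno (Figure 12): firing levels 0.2 and 0.6, rule outputs 5 and 4
print(weighted_average([0.2, 0.6], [5, 4]))   # 4.25
```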
8. Fuzzy programming versus goal programming
Consider the following simple linear program
x → min,
subject to x ≥ 1, x ∈ R
What if the decision maker’s aspiration level (or goal) is
b0 = 0.5?
The goal is set outside the conceivable values of the objective function under the given constraint. The linear inequality system

x ≤ 0.5
x ≥ 1

does not have any solution.
In goal programming we are searching for a solution from the decision
set, which minimizes the distance between the goal and the decision set.
That is,
|x − 0.5| → min, subject to x ≥ 1, x ∈ R
The unique solution is x∗ = 1.
In fuzzy programming we are searching for a solution that might not even
belong to the decision set, and which simultaneously minimizes the (fuzzy)
distance between the decision set and the goal.
We want to be as close as possible to the goal and to the constraints.
Depending on the definition of closeness, the fuzzy version can have the
form
max min{|x − 0.5|, |x − 1|}, subject to 1/2 ≤ x ≤ 1.
The unique solution is x∗ = 0.75.
The fuzzy problem can be stated as: find an x ∈ R such that
{ 'x is as close as possible to 0.5' and 'x is as close as possible to 1' }.
Figure 13: Illustration of the optimal solution. By using the minimum operator to aggregate the fuzzy statements ’x is close to 0.5’ and ’x is close to 1’
we get that the optimal solution is x∗ = 0.75.
In our case (see Figure 13),
\[
\max\Bigl\{\,1 - \frac{|x - 0.5|}{1/2},\; 1 - \frac{|x - 1|}{1/2}\,\Bigr\}, \quad \text{subject to } 1/2 \le x \le 1.
\]
If we use the minimum operator to aggregate the objective functions then
we have the following single-objective problem

\[
\max \min\Bigl\{\,1 - \frac{|x - 0.5|}{1/2},\; 1 - \frac{|x - 1|}{1/2}\,\Bigr\}, \quad \text{subject to } 1/2 \le x \le 1,
\]

which, in this very special and simple case, can be written in the form

\[
\max \min\{|x - 0.5|,\, |x - 1|\}, \quad \text{subject to } 1/2 \le x \le 1,
\]

which has a unique optimal solution x∗ = 0.75.
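A quick numerical check of this claim, as a sketch using a grid search rather than an LP solver:

```python
import numpy as np

xs = np.linspace(0.5, 1.0, 5001)
closeness = np.minimum(1 - np.abs(xs - 0.5) / 0.5,   # 'x is close to 0.5'
                       1 - np.abs(xs - 1.0) / 0.5)   # 'x is close to 1'
print(xs[np.argmax(closeness)])   # 0.75
```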
As an example, consider the following two-variable linear program

x1 + x2 → max
subject to
x1 ≤ 1
x2 ≤ 1
x1, x2 ∈ R.

What if the decision maker's aspiration level is b0 = 3?
The aspiration level cannot be reached, since the maximal value of the objective function is equal to two.
Figure 14: A simple LP. The unique optimal solution is (1, 1) and the optimal value of the objective function is 2.
Figure 15: The desired value of the objective function is set to 3. This value,
however, is unattainable on the decision set [0, 1] × [0, 1].
Figure 16: An illustration of the fuzzy LP. The optimal solution is outside
of the decision space [0, 1] × [0, 1]. It is - with certain fuzzy coefficients - as
close as possible to the goal and to the constraints.
9. Multiple Objective Programs

Consider a multiple objective program (MOP)

max {f1(x), . . . , fk(x)}
x∈X

where fi : Rn → R, i = 1, . . . , k, are the objective functions, Rk is the criterion space, x ∈ Rn is the decision variable, Rn is the decision space, and X ⊂ Rn is called the set of feasible alternatives. The image of X in Rk, denoted by ZX, i.e. the set of feasible outcomes, is defined as

ZX = {z ∈ Rk | zi = fi(x), i = 1, . . . , k, x ∈ X}.

MOP problems may be interpreted as a synthetic notation of a conjunction statement 'maximize jointly all objectives': maximize the first objective and maximize the second objective, and so on. A 'good compromise solution' to the MOP is defined as an x ∈ X being 'as good as possible' for the whole set of objectives.
Definition 9.1. An x∗ ∈ X is said to be efficient (or nondominated or
Pareto-optimal) for the MOP iff there exists no y ∈ X such that
fi (y) ≥ fi (x∗ )
for all i with strict inequality for at least one i. The set of all Pareto-optimal
solutions will be denoted by X ∗ .
Consider the following Multiple Objective Linear Program (MLP)
{x1 + x2 , x1 − x2 } → max
subject to
x ∈ X = {x ∈ R2 | 0 ≤ x1 , x2 ≤ 1}
The Pareto optimal solutions (the north-east boundary of the image of the
decision space) to this problem are X ∗ = {(1, x2 ) | x2 ∈ [0, 1]}.
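The Pareto set of this small problem can also be confirmed by brute force. The sketch below (illustrative only) checks dominance on a grid of feasible points and verifies that exactly the points with x1 = 1 survive:

```python
import numpy as np

grid = np.linspace(0, 1, 21)
pts = [(x1, x2) for x1 in grid for x2 in grid]
f = lambda p: (p[0] + p[1], p[0] - p[1])   # the two objectives

def dominated(p):
    fp = f(p)
    return any(f(q) != fp and all(a >= b for a, b in zip(f(q), fp)) for q in pts)

efficient = [p for p in pts if not dominated(p)]
print(all(p[0] == 1.0 for p in efficient))   # True: the Pareto set is {(1, x2)}
```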
Figure 17: The decision space is [0, 1] × [0, 1].
10. Application functions for MOP problems
An application function hi for the MOP

max {f1(x), . . . , fk(x)}
x∈X

is defined as hi : R → [0, 1], where hi(t) measures the degree of fulfillment of the decision maker's requirements about the i-th objective by the value t.

Figure 18: The image of the decision space. Sometimes called the criterion space.

Suppose that the decision maker has some preference parameters, for example
• reference points, which represent desirable levels on each criterion
• reservation levels, which represent minimal requirements on each criterion

Figure 19: Explanation of the image of the decision space.
Figure 20: The problem: {x1 + x2 , x1 − x2 } → max; subject to x1 , x2 ∈
[0, 1]. The set of its Pareto optimal solutions is X ∗ = {(1, x2 ), x2 ∈ [0, 1]}.
If the value of an objective function (at the current point) exceeds his desirable level on this objective then he is totally satisfied with this alternative.
If, however, the value of an objective function (at the current point) is below his reservation level on this objective then he is absolutely not satisfied with this alternative.
Let mi denote the value
min{fi (x)|x ∈ X}
i.e. mi is the worst possible value for the i-th objective and let Mi denote
the value
max{fi (x)|x ∈ X}
i.e. Mi is the largest possible value for the i-th objective on X. It is clear
that the inequalities
mi ≤ fi (x) ≤ Mi ,
hold for each x in X. The most commonly used linear application function for the i-th objective can be defined as

\[
h_i(f_i(x)) = 1 - \frac{M_i - f_i(x)}{M_i - m_i}.
\]
It is clear that hi(fi(x)) = 0 ⟺ fi(x) = min{fi(x) | x ∈ X} = mi, and hi(fi(x)) = 1 ⟺ fi(x) = max{fi(x) | x ∈ X} = Mi.
Let
r1 = m1 = min{f1 (x) = x1 + x2 | 0 ≤ x1 , x2 ≤ 1} = 0
and
r2 = m2 = min{f2 (x) = x1 − x2 | 0 ≤ x1 , x2 ≤ 1} = −1
be the reservation levels and let
R1 = M1 = max{f1 (x) = x1 + x2 | 0 ≤ x1 , x2 ≤ 1} = 2
and
R2 = M2 = max{f2 (x) = x1 − x2 | 0 ≤ x1 , x2 ≤ 1} = 1
be the reference points for the first and the second objectives, respectively, in the MLP problem
{x1 + x2 , x1 − x2 } → max
subject to
0 ≤ x1 , x2 ≤ 1.
Then we can build the following application functions:

\[
h_1(f_1(x)) = h_1(x_1 + x_2) = 1 - \frac{2 - (x_1 + x_2)}{2} = \frac{x_1 + x_2}{2},
\]
\[
h_2(f_2(x)) = h_2(x_1 - x_2) = 1 - \frac{1 - (x_1 - x_2)}{2} = \frac{1 + x_1 - x_2}{2}.
\]
Consider now the MLP problem with k objective functions

max {f1(x), f2(x), . . . , fk(x)}.
x∈X

With the notation Hi(x) = hi(fi(x)),
Hi (x) may be considered as the degree of membership of x in the fuzzy set
’good solutions’ for the i-th objective.
Then a ’good compromise solution’ to MLP may be defined as an x ∈ X
being ’as good as possible’ for the whole set of objectives.
Taking into consideration the nature of Hi, it is quite reasonable to look for such a kind of solution by means of the following auxiliary problem

max {H1(x), . . . , Hk(x)}.
x∈X

Since max{H1(x), . . . , Hk(x)} may be interpreted as a synthetic notation of the conjunction statement 'maximize jointly all objectives', and Hi(x) ∈ [0, 1], it is reasonable to use a t-norm T to represent the connective 'and'. In this way

max {H1(x), . . . , Hk(x)}
x∈X

turns into the single-objective problem

max T(H1(x), . . . , Hk(x)).
x∈X
Let us suppose the decision maker chooses the minimum operator to represent his evaluation of the connective and in the problem of: maximize the
first objective and maximize the second objective.
Then the original biobjective problem turns into the single-objective LP

\[
\max \min\Bigl\{\frac{x_1 + x_2}{2},\; \frac{1 + x_1 - x_2}{2}\Bigr\}
\]

subject to 0 ≤ x1, x2 ≤ 1. That is,

max λ
subject to
\[
\frac{x_1 + x_2}{2} \ge \lambda, \qquad \frac{1 + x_1 - x_2}{2} \ge \lambda, \qquad 0 \le x_1, x_2 \le 1.
\]

The unique optimal solution is x1∗ = 1 and x2∗ = 1/2 (with λ = 3/4), and (f1(1, 1/2), f2(1, 1/2)) = (3/2, 1/2) is a Pareto-optimal solution to the original biobjective problem.
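The max-λ formulation is an ordinary linear program, so any LP solver reproduces the compromise solution. A sketch with scipy.optimize.linprog, with the variable vector ordered as (x1, x2, λ) (an illustration, not code from the tutorial):

```python
from scipy.optimize import linprog

# maximize lambda  <=>  minimize -lambda
c = [0, 0, -1]
A_ub = [[-1, -1, 2],    # (x1 + x2)/2      >= lambda
        [-1,  1, 2]]    # (1 + x1 - x2)/2  >= lambda
b_ub = [0, 1]
res = linprog(c, A_ub=A_ub, b_ub=b_ub,
              bounds=[(0, 1), (0, 1), (0, 1)])
x1, x2, lam = res.x
print(x1, x2, lam)   # 1.0, 0.5, 0.75
```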
Consider the following linear biobjective programming problem

max {2x1 + x2, −x1 − 2x2}
subject to
x1 + x2 ≤ 4,
3x1 + x2 ≥ 6,
x1, x2 ≥ 0.

The first objective, 2x1 + x2, attains its maximum at the point (4, 0), whereas the second one, −x1 − 2x2, attains its maximum at the point (2, 0). The Pareto optimal solutions are {(x1, 0), x1 ∈ [2, 4]}.

Let r1 = 4 and r2 = −5 be the reservation levels and let R1 = 7 and R2 = −3 be the reference points for the first and the second objectives, respectively,
in the MLP problem

max {2x1 + x2, −x1 − 2x2}
subject to
x1 + x2 ≤ 4,
3x1 + x2 ≥ 6,
x1, x2 ≥ 0.

Figure 21: Illustration of the decision space and the objectives of the biobjective problem. Pareto optimal solutions are: {(x1, 0), x1 ∈ [2, 4]}.
Then we can build the following application functions:

\[
h_1(f_1(x)) =
\begin{cases}
1 & \text{if } f_1(x) \ge 7,\\
1 - \dfrac{7 - f_1(x)}{3} & \text{if } 4 \le f_1(x) \le 7,\\
0 & \text{if } f_1(x) \le 4,
\end{cases}
\]
\[
h_2(f_2(x)) =
\begin{cases}
1 & \text{if } f_2(x) \ge -3,\\
1 - \dfrac{-3 - f_2(x)}{2} & \text{if } -5 \le f_2(x) \le -3,\\
0 & \text{if } f_2(x) \le -5.
\end{cases}
\]
Let us suppose the decision maker chooses the minimum operator to represent his evaluation of the connective and in the problem of: maximize the
first objective and maximize the second objective.
Then the original biobjective problem turns into the single-objective LP
max min{h1 (f1 (x1 , x2 )), h2 (f2 (x1 , x2 ))}
subject to
x1 + x2 ≤ 4,
3x1 + x2 ≥ 6,
x1 , x2 ≥ 0.
That is,

max min{h1(2x1 + x2), h2(−x1 − 2x2)}
subject to
x1 + x2 ≤ 4,
3x1 + x2 ≥ 6,
x1, x2 ≥ 0.
This can be written in the form
max λ
subject to
h1 (2x1 + x2 ) ≥ λ
h2 (−x1 − 2x2 ) ≥ λ
x1 + x2 ≤ 4,
3x1 + x2 ≥ 6,
x1 , x2 ≥ 0.
That is,

max λ
subject to
\[
1 - \frac{7 - (2x_1 + x_2)}{3} \ge \lambda, \qquad 1 - \frac{-3 - (-x_1 - 2x_2)}{2} \ge \lambda,
\]
x1 + x2 ≤ 4,
3x1 + x2 ≥ 6,
x1, x2 ≥ 0.

Its optimal solution x∗ = (23/7, 0) is also an efficient solution for the original biobjective problem since it lies in the segment {(x1, 0), x1 ∈ [2, 4]}. The optimal values of the objective functions are 46/7 and −23/7.
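This max-λ LP can also be checked with scipy.optimize.linprog. The sketch below uses the linear pieces of h1 and h2, which is sufficient here because both application functions take values strictly inside (0, 1) at the optimum (again an illustration, not code from the tutorial):

```python
from scipy.optimize import linprog

# variables (x1, x2, lambda); maximize lambda  <=>  minimize -lambda
c = [0, 0, -1]
A_ub = [[-2, -1, 3],    # h1(2x1 + x2)  = (2x1 + x2 - 4)/3  >= lambda
        [ 1,  2, 2],    # h2(-x1 - 2x2) = (5 - x1 - 2x2)/2  >= lambda
        [ 1,  1, 0],    # x1 + x2 <= 4
        [-3, -1, 0]]    # 3x1 + x2 >= 6
b_ub = [-4, 5, 4, -6]
res = linprog(c, A_ub=A_ub, b_ub=b_ub,
              bounds=[(0, None), (0, None), (0, 1)])
x1, x2, lam = res.x
print(x1, x2, lam)            # 23/7 ≈ 3.2857, 0.0, 6/7 ≈ 0.857
print(2*x1 + x2, -x1 - 2*x2)  # 46/7 ≈ 6.571 and -23/7 ≈ -3.286
```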
11. The efficiency of compromise solutions
One of the most important questions is the efficiency of the obtained compromise solutions.
Theorem 11.1. Let x∗ be an optimal solution to
max T (H1 (x), . . . , Hk (x))
x∈X
where T is a t-norm, Hi(x) = hi(fi(x)), and hi is an increasing application function, i = 1, . . . , k, which is strictly increasing on the interval [ri, Ri]. Then x∗ is efficient for the problem

max {f1(x), . . . , fk(x)}
x∈X

if either

(i) x∗ is unique; or
(ii) T is strict and Hi(x∗) = hi(fi(x∗)) ∈ (0, 1) for i = 1, . . . , k.
Proof. (i) Suppose that x∗ is not efficient. If x∗ were dominated, then there would exist an x∗∗ ∈ X such that
fi (x∗ ) ≤ fi (x∗∗ )
for all i and with a strict inequality for at least one i.
Consequently, from the monotonicity of T and hi we get
T (H1 (x∗ ), . . . , Hk (x∗ )) ≤ T (H1 (x∗∗ ), . . . , Hk (x∗∗ ))
which means that x∗∗ is also an optimal solution to the auxiliary problem, so x∗ is not unique. This contradicts (i).
(ii) Suppose that x∗ is not efficient. If x∗ were dominated, then there would exist an x∗∗ ∈ X such that
fi (x∗ ) ≤ fi (x∗∗ )
for all i and with a strict inequality for at least one i. Taking into consideration that
Hi (x∗ ) = hi (fi (x∗ )) ∈ (0, 1)
for all i, T is strict, and hi is strictly increasing, we get

T(H1(x∗), . . . , Hk(x∗)) < T(H1(x∗∗), . . . , Hk(x∗∗)),

which contradicts the assumption that x∗ is an optimal solution to the auxiliary problem. So x∗ must be efficient.
If we use linear application functions then they are strictly increasing on [ri, Ri], and therefore any optimal solution x∗ to the auxiliary problem is an efficient solution to the original MOP problem if either

(i) x∗ is unique; or
(ii) T is strict and Hi(x∗) ∈ (0, 1), i = 1, . . . , k.