On obtaining minimal variability OWA operator weights∗
Robert Fullér
rfuller@abo.fi
Péter Majlender
peter.majlender@mail.abo.fi
Abstract
One important issue in the theory of Ordered Weighted Averaging (OWA) operators is the determination of the associated weights. One of the first approaches, suggested by O'Hagan, determines a special class of OWA operators having maximal entropy of the OWA weights for a given level of orness; algorithmically it is based on the solution of a constrained optimization problem. Another consideration that may be of interest to a decision maker involves the variability associated with a weighting vector. In particular, a decision maker may desire low variability associated with a chosen weighting vector. In this paper, using the Kuhn-Tucker second-order sufficiency conditions for optimality, we shall analytically derive the minimal variability weighting vector for any level of orness.
1 Introduction
The process of information aggregation appears in many applications related to
the development of intelligent systems. One sees aggregation in neural networks,
fuzzy logic controllers, vision systems, expert systems and multi-criteria decision
aids. In [6] Yager introduced a new aggregation technique based on the ordered
weighted averaging operators.
Definition 1.1 An OWA operator of dimension n is a mapping F : R^n → R that has an associated weighting vector W = (w_1, . . . , w_n)^T having the properties w_1 + · · · + w_n = 1, 0 ≤ w_i ≤ 1, i = 1, . . . , n, and such that
\[ F(a_1, \dots, a_n) = \sum_{i=1}^{n} w_i b_i, \]
where b_j is the jth largest element of the collection of the aggregated objects {a_1, . . . , a_n}.

∗ The final version of this paper appeared in: Fuzzy Sets and Systems, 136(2003) 203-215.
A fundamental aspect of this operator is the re-ordering step; in particular, an aggregate a_i is not associated with a particular weight w_i, but rather a weight is associated with a particular ordered position of the aggregates. When we view the OWA weights as a column vector we shall find it convenient to refer to the weights with the low indices as weights at the top and those with the higher indices as weights at the bottom. It is noted that different OWA operators are distinguished by their weighting function.
In [6], Yager introduced a measure of orness associated with the weighting
vector W of an OWA operator, defined as
\[ \mathrm{orness}(W) = \sum_{i=1}^{n} \frac{n-i}{n-1} \cdot w_i, \]
and it characterizes the degree to which the aggregation is like an or operation. It
is clear that orness(W ) ∈ [0, 1] holds for any weighting vector.
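To make the definition concrete, the orness measure is straightforward to compute; the short Python sketch below (the helper name orness is ours) implements the formula above and checks the three standard cases.

```python
def orness(w):
    """Orness of an OWA weighting vector W = (w_1, ..., w_n), as defined above."""
    n = len(w)
    return sum((n - i) / (n - 1) * w[i - 1] for i in range(1, n + 1))

print(orness([1, 0, 0, 0, 0]))   # 1.0  (pure "or" aggregation)
print(orness([0, 0, 0, 0, 1]))   # 0.0  (pure "and" aggregation)
print(orness([0.2] * 5))         # 0.5  (arithmetic mean)
```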
It is clear that the actual type of aggregation performed by an OWA operator depends upon the form of the weighting vector [9]. A number of approaches have been suggested for obtaining the associated weights, e.g., quantifier guided aggregation [6, 7], exponential smoothing [3] and learning [10].
Another approach, suggested by O'Hagan [5], determines a special class of OWA operators having maximal entropy of the OWA weights for a given level of orness. This approach is based on the solution of the following mathematical programming problem:
\[
\begin{aligned}
\text{maximize} \quad & \mathrm{disp}(W) = -\sum_{i=1}^{n} w_i \ln w_i \\
\text{subject to} \quad & \mathrm{orness}(W) = \sum_{i=1}^{n} \frac{n-i}{n-1} \cdot w_i = \alpha, \quad 0 \le \alpha \le 1, \\
& w_1 + \cdots + w_n = 1, \quad 0 \le w_i, \ i = 1, \dots, n.
\end{aligned}
\tag{1}
\]
In [4] we transformed problem (1) into a polynomial equation, which was then solved to determine the maximal entropy OWA weights.
Another interesting question is to determine the minimal variability weighting vector under a given level of orness [8]. We shall measure the variance of a given weighting vector as
\[
D^2(W) = \frac{1}{n}\sum_{i=1}^{n} \bigl(w_i - E(W)\bigr)^2
        = \frac{1}{n}\sum_{i=1}^{n} w_i^2 - \Bigl(\frac{1}{n}\sum_{i=1}^{n} w_i\Bigr)^{2}
        = \frac{1}{n}\sum_{i=1}^{n} w_i^2 - \frac{1}{n^2},
\]
where E(W ) = (w1 + · · · + wn )/n stands for the arithmetic mean of weights.
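Since the weights sum to one, the last expression is all that is needed in practice; a minimal sketch (the helper name variance is ours, and it assumes the vector already satisfies w_1 + · · · + w_n = 1):

```python
def variance(w):
    """D^2(W) for a weighting vector with w_1 + ... + w_n = 1,
    computed via the simplified form (1/n) * sum(w_i^2) - 1/n^2."""
    n = len(w)
    return sum(x * x for x in w) / n - 1.0 / n ** 2

print(variance([0.2] * 5))        # 0.0, the flat vector has no variability
print(variance([1, 0, 0, 0, 0]))  # 0.16 = 1/5 - 1/25
```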
To obtain minimal variability OWA weights under a given level of orness we need to solve the following constrained mathematical programming problem:
\[
\begin{aligned}
\text{minimize} \quad & D^2(W) = \frac{1}{n}\sum_{i=1}^{n} w_i^2 - \frac{1}{n^2} \\
\text{subject to} \quad & \mathrm{orness}(W) = \sum_{i=1}^{n} \frac{n-i}{n-1} \cdot w_i = \alpha, \quad 0 \le \alpha \le 1, \\
& w_1 + \cdots + w_n = 1, \quad 0 \le w_i, \ i = 1, \dots, n.
\end{aligned}
\tag{2}
\]
Using the Kuhn-Tucker second-order sufficiency conditions for optimality, we shall solve problem (2) analytically and derive the exact minimal variability OWA weights for any level of orness.
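Problem (2) is a small convex quadratic program, so before turning to the analytic solution it can also be solved numerically, which is convenient for cross-checking the closed-form weights derived in the next section. The sketch below is our illustration only (it is not the method of the paper) and assumes scipy is available; it uses the SLSQP solver of scipy.optimize.minimize.

```python
import numpy as np
from scipy.optimize import minimize

def minimal_variability_weights_numeric(n, alpha):
    """Numerically solve problem (2): minimize D^2(W) subject to
    sum(w) = 1, orness(w) = alpha and w >= 0."""
    c = np.array([(n - i) / (n - 1) for i in range(1, n + 1)])  # orness coefficients
    objective = lambda w: np.sum(w ** 2) / n - 1.0 / n ** 2
    constraints = [
        {"type": "eq", "fun": lambda w: np.sum(w) - 1.0},
        {"type": "eq", "fun": lambda w: c @ w - alpha},
    ]
    w0 = np.full(n, 1.0 / n)  # start from the flat (zero-variance) vector
    res = minimize(objective, w0, method="SLSQP",
                   bounds=[(0.0, 1.0)] * n, constraints=constraints)
    return res.x

print(np.round(minimal_variability_weights_numeric(5, 0.3), 3))
# approximately [0.04 0.12 0.2 0.28 0.36], cf. the alpha = 0.3 case in Section 3
```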
2 Obtaining minimal variability OWA weights
Let us consider the constrained optimization problem (2). First we note that if n = 2 then from orness(w_1, w_2) = α the optimal weights are uniquely defined as w_1^* = α and w_2^* = 1 − α. Furthermore, if α = 0 or α = 1 then the associated weighting vectors are uniquely defined as (0, 0, . . . , 0, 1)^T and (1, 0, . . . , 0, 0)^T, respectively, with variability
\[ D^2(1, 0, \dots, 0, 0) = D^2(0, 0, \dots, 0, 1) = \frac{1}{n} - \frac{1}{n^2}. \]
Suppose now that n ≥ 3 and 0 < α < 1. Let us
\[
L(W, \lambda_1, \lambda_2) = \frac{1}{n}\sum_{i=1}^{n} w_i^2 - \frac{1}{n^2}
  + \lambda_1\Bigl(\sum_{i=1}^{n} w_i - 1\Bigr)
  + \lambda_2\Bigl(\sum_{i=1}^{n} \frac{n-i}{n-1}\, w_i - \alpha\Bigr)
\]
denote the Lagrange function of constrained optimization problem (2), where
λ1 and λ2 are real numbers. Then the partial derivatives of L are computed as
\[
\begin{aligned}
\frac{\partial L}{\partial w_j} &= \frac{2 w_j}{n} + \lambda_1 + \frac{n-j}{n-1}\cdot\lambda_2 = 0, \quad 1 \le j \le n, \\
\frac{\partial L}{\partial \lambda_1} &= \sum_{i=1}^{n} w_i - 1 = 0, \\
\frac{\partial L}{\partial \lambda_2} &= \sum_{i=1}^{n} \frac{n-i}{n-1}\cdot w_i - \alpha = 0.
\end{aligned}
\tag{3}
\]
We shall suppose that the optimal weighting vector has the following form
\[ W = (0, \dots, 0, w_p, \dots, w_q, 0, \dots, 0)^T, \tag{4} \]
where 1 ≤ p < q ≤ n, and use the notation
\[ I_{\{p,q\}} = \{p, p+1, \dots, q-1, q\} \]
for the indexes from p to q. So, w_j = 0 if j ∉ I_{\{p,q\}} and w_j ≥ 0 if j ∈ I_{\{p,q\}}.
For j = p we find that
\[ \frac{\partial L}{\partial w_p} = \frac{2 w_p}{n} + \lambda_1 + \frac{n-p}{n-1}\cdot\lambda_2 = 0, \]
and for j = q we get
\[ \frac{\partial L}{\partial w_q} = \frac{2 w_q}{n} + \lambda_1 + \frac{n-q}{n-1}\cdot\lambda_2 = 0. \]
That is,
\[ \frac{2(w_p - w_q)}{n} + \frac{q-p}{n-1}\cdot\lambda_2 = 0, \]
and therefore, the optimal values of λ_1 and λ_2 (denoted by λ_1^* and λ_2^*) should satisfy the following equations
\[
\lambda_1^* = \frac{2}{n}\Bigl(\frac{n-q}{q-p}\cdot w_p - \frac{n-p}{q-p}\cdot w_q\Bigr)
\quad\text{and}\quad
\lambda_2^* = \frac{n-1}{q-p}\cdot\frac{2}{n}\cdot(w_q - w_p).
\tag{5}
\]
Substituting λ_1^* for λ_1 and λ_2^* for λ_2 in (3) we get
\[
\frac{2}{n}\cdot w_j + \frac{2}{n}\Bigl(\frac{n-q}{q-p}\cdot w_p - \frac{n-p}{q-p}\cdot w_q\Bigr) + \frac{n-j}{n-1}\cdot\frac{n-1}{q-p}\cdot\frac{2}{n}\cdot(w_q - w_p) = 0.
\]
That is, the jth optimal weight should satisfy the equation
\[ w_j^* = \frac{q-j}{q-p}\cdot w_p + \frac{j-p}{q-p}\cdot w_q, \quad j \in I_{\{p,q\}}. \tag{6} \]
From representation (4) we get
\[ \sum_{i=p}^{q} w_i^* = 1, \]
that is,
\[ \sum_{i=p}^{q} \Bigl(\frac{q-i}{q-p}\cdot w_p + \frac{i-p}{q-p}\cdot w_q\Bigr) = 1, \]
i.e.,
\[ w_p + w_q = \frac{2}{q-p+1}. \]
From the constraint orness(W) = α we find
\[ \sum_{i=p}^{q} \frac{n-i}{n-1}\cdot w_i = \sum_{i=p}^{q} \Bigl(\frac{n-i}{n-1}\cdot\frac{q-i}{q-p}\cdot w_p + \frac{n-i}{n-1}\cdot\frac{i-p}{q-p}\cdot w_q\Bigr) = \alpha, \]
that is,
\[ w_p^* = \frac{2(2q + p - 2) - 6(n-1)(1-\alpha)}{(q-p+1)(q-p+2)}, \tag{7} \]
and
\[ w_q^* = \frac{2}{q-p+1} - w_p^* = \frac{6(n-1)(1-\alpha) - 2(q + 2p - 4)}{(q-p+1)(q-p+2)}. \tag{8} \]
The optimal weighting vector
\[ W^* = (0, \dots, 0, w_p^*, \dots, w_q^*, 0, \dots, 0)^T \]
is feasible if and only if w_p^*, w_q^* ∈ [0, 1], because according to (6) any other w_j^*, j ∈ I_{\{p,q\}}, is computed as their convex linear combination. Using formulas (7) and (8) we find
\[
w_p^*, w_q^* \in [0, 1] \iff \alpha \in \Bigl[\,1 - \frac{1}{3}\cdot\frac{2q+p-2}{n-1},\; 1 - \frac{1}{3}\cdot\frac{q+2p-4}{n-1}\,\Bigr].
\]
The following (disjunctive) partition of the unit interval (0, 1) will be crucial in
finding an optimal solution to problem (2):
\[ (0, 1) = \bigcup_{r=2}^{n-1} J_{r,n} \;\cup\; J_{1,n} \;\cup\; \bigcup_{s=2}^{n-1} J_{1,s}, \tag{9} \]
where
\[
\begin{aligned}
J_{r,n} &= \Bigl[\,1 - \frac{1}{3}\cdot\frac{2n+r-2}{n-1},\; 1 - \frac{1}{3}\cdot\frac{2n+r-3}{n-1}\,\Bigr], \quad r = 2, \dots, n-1, \\
J_{1,n} &= \Bigl[\,1 - \frac{1}{3}\cdot\frac{2n-1}{n-1},\; 1 - \frac{1}{3}\cdot\frac{n-2}{n-1}\,\Bigr], \\
J_{1,s} &= \Bigl[\,1 - \frac{1}{3}\cdot\frac{s-1}{n-1},\; 1 - \frac{1}{3}\cdot\frac{s-2}{n-1}\,\Bigr], \quad s = 2, \dots, n-1.
\end{aligned}
\]
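The partition (9) is easy to tabulate for a given dimension. The sketch below (helper names partition_intervals and locate are ours) lists the intervals J_{r,s} with exact rational endpoints and reports in which interval a given orness level lies; it reads the endpoints as closed, which is harmless here because at a shared endpoint the two adjacent configurations produce the same weighting vector.

```python
from fractions import Fraction

def partition_intervals(n):
    """Intervals J_{r,s} of (9) for dimension n, returned as ((r, s), lower, upper),
    ordered by increasing orness level."""
    third = Fraction(1, 3)
    intervals = []
    for r in range(n - 1, 1, -1):                       # J_{r,n}, r = n-1, ..., 2
        intervals.append(((r, n),
                          1 - third * Fraction(2 * n + r - 2, n - 1),
                          1 - third * Fraction(2 * n + r - 3, n - 1)))
    intervals.append(((1, n),                           # J_{1,n}
                      1 - third * Fraction(2 * n - 1, n - 1),
                      1 - third * Fraction(n - 2, n - 1)))
    for s in range(n - 1, 1, -1):                       # J_{1,s}, s = n-1, ..., 2
        intervals.append(((1, s),
                          1 - third * Fraction(s - 1, n - 1),
                          1 - third * Fraction(s - 2, n - 1)))
    return intervals

def locate(alpha, n):
    """Return the pair (r, s) with alpha in J_{r,s}."""
    a = Fraction(alpha).limit_denominator()
    for (r, s), lo, hi in partition_intervals(n):
        if lo <= a <= hi:
            return r, s
    raise ValueError("alpha must lie in (0, 1)")

print(partition_intervals(5))   # reproduces the seven intervals of the n = 5 example in Section 3
print(locate(0.1, 5))           # (3, 5)
print(locate(0.3, 5))           # (1, 5)
```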
Consider again problem (2) and suppose that α ∈ J_{r,s} for some r and s from partition (9). Such r and s always exist for any α ∈ (0, 1); furthermore, r = 1 or s = n must hold.
Then
\[ W^* = (0, \dots, 0, w_r^*, \dots, w_s^*, 0, \dots, 0)^T, \tag{10} \]
where
\[
\begin{aligned}
w_j^* &= 0, \quad \text{if } j \notin I_{\{r,s\}}, \\
w_r^* &= \frac{2(2s + r - 2) - 6(n-1)(1-\alpha)}{(s-r+1)(s-r+2)}, \\
w_s^* &= \frac{6(n-1)(1-\alpha) - 2(s + 2r - 4)}{(s-r+1)(s-r+2)}, \\
w_j^* &= \frac{s-j}{s-r}\cdot w_r^* + \frac{j-r}{s-r}\cdot w_s^*, \quad \text{if } j \in I_{\{r+1,s-1\}},
\end{aligned}
\tag{11}
\]
and I_{\{r+1,s-1\}} = \{r+1, \dots, s-1\}. We note that if r = 1 and s = n then we have
\[ \alpha \in J_{1,n} = \Bigl[\,1 - \frac{1}{3}\cdot\frac{2n-1}{n-1},\; 1 - \frac{1}{3}\cdot\frac{n-2}{n-1}\,\Bigr], \]
and
\[ W^* = (w_1^*, \dots, w_n^*)^T, \]
where
\[
\begin{aligned}
w_1^* &= \frac{2(2n-1) - 6(n-1)(1-\alpha)}{n(n+1)}, \\
w_n^* &= \frac{6(n-1)(1-\alpha) - 2(n-2)}{n(n+1)}, \\
w_j^* &= \frac{n-j}{n-1}\cdot w_1^* + \frac{j-1}{n-1}\cdot w_n^*, \quad \text{if } j \in \{2, \dots, n-1\}.
\end{aligned}
\]
Furthermore, from the construction of W^* it is clear that
\[ \sum_{i=1}^{n} w_i^* = \sum_{i=r}^{s} w_i^* = 1, \qquad w_i^* \ge 0, \ i = 1, 2, \dots, n, \]
and orness(W^*) = α, that is, W^* is feasible for problem (2).
We will now show that W^*, defined by (10), satisfies the Kuhn-Tucker second-order sufficiency conditions for optimality ([2], page 58). Namely,
(i) There exist λ_1^*, λ_2^* ∈ R and μ_1^* ≥ 0, . . . , μ_n^* ≥ 0 such that
\[
\frac{\partial}{\partial w_k}\biggl[ D^2(W) + \lambda_1^*\Bigl(\sum_{i=1}^{n} w_i - 1\Bigr) + \lambda_2^*\Bigl(\sum_{i=1}^{n} \frac{n-i}{n-1}\cdot w_i - \alpha\Bigr) + \sum_{j=1}^{n} \mu_j^*(-w_j) \biggr]\bigg|_{W = W^*} = 0
\]
for 1 ≤ k ≤ n, and μ_j^* w_j^* = 0, j = 1, . . . , n.
(ii) W ∗ is a regular point of the constraints,
(iii) The Hessian matrix,
\[
\frac{\partial^2}{\partial W^2}\biggl[ D^2(W) + \lambda_1^*\Bigl(\sum_{i=1}^{n} w_i - 1\Bigr) + \lambda_2^*\Bigl(\sum_{i=1}^{n} \frac{n-i}{n-1}\cdot w_i - \alpha\Bigr) + \sum_{j=1}^{n} \mu_j^*(-w_j) \biggr]\bigg|_{W = W^*},
\]
is positive definite on
\[
\hat{X} = \Bigl\{\, y \;\Big|\; h_1^T y = 0,\ h_2^T y = 0 \text{ and } g_j^T y = 0 \text{ for all } j \text{ with } \mu_j > 0 \,\Bigr\},
\tag{12}
\]
where
\[ h_1 = \Bigl(\frac{n-1}{n-1}, \frac{n-2}{n-1}, \dots, \frac{1}{n-1}, 0\Bigr)^T \tag{13} \]
and
\[ h_2 = (1, 1, \dots, 1, 1)^T \tag{14} \]
are the gradients of the linear equality constraints, and
\[ g_j = (0, 0, \dots, 0, \underset{j\text{th}}{-1}, 0, 0, \dots, 0)^T \tag{15} \]
is the gradient of the jth linear inequality constraint of problem (2).
Proof 1 See Fuzzy Sets and Systems, 136(2003) 203-215.
So, the objective function D^2(W) has a local minimum at the point W = W^* on
\[
X = \Bigl\{\, W \in \mathbb{R}^n \;\Big|\; W \ge 0,\ \sum_{i=1}^{n} w_i = 1,\ \sum_{i=1}^{n} \frac{n-i}{n-1}\cdot w_i = \alpha \,\Bigr\},
\tag{16}
\]
where X is the set of feasible solutions of problem (2). Taking into consideration
that D2 : Rn → R is a strictly convex, bounded and continuous function, and X is
a convex and compact subset of Rn , we can conclude that D2 attains its (unique)
global minimum on X at point W ∗ .
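Formulas (6)-(8) and (10)-(11), together with the partition (9), make the whole procedure only a few lines of code. The sketch below is ours and reuses the orness, variance and locate helpers introduced earlier; the function name minimal_variability_weights is an assumption, not part of the paper.

```python
import numpy as np

def minimal_variability_weights(n, alpha):
    """Analytic minimal variability OWA weights of (10)-(11).
    For alpha in (0, 1) the support indices (r, s) come from the partition (9);
    alpha = 0 and alpha = 1 are the degenerate boundary cases."""
    if alpha == 0.0:
        return np.eye(n)[-1]              # (0, ..., 0, 1)
    if alpha == 1.0:
        return np.eye(n)[0]               # (1, 0, ..., 0)
    r, s = locate(alpha, n)               # partition helper sketched earlier
    w = np.zeros(n)
    denom = (s - r + 1) * (s - r + 2)
    w[r - 1] = (2 * (2 * s + r - 2) - 6 * (n - 1) * (1 - alpha)) / denom   # eq. (7)
    w[s - 1] = (6 * (n - 1) * (1 - alpha) - 2 * (s + 2 * r - 4)) / denom   # eq. (8)
    for j in range(r + 1, s):                                              # eq. (6)
        w[j - 1] = ((s - j) * w[r - 1] + (j - r) * w[s - 1]) / (s - r)
    return w

w = minimal_variability_weights(5, 0.3)
print(np.round(w, 2))                                 # [0.04 0.12 0.2  0.28 0.36]
print(round(orness(w), 3), round(variance(w), 4))     # 0.3 and 0.0128, cf. Section 3
```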
3 Example
In this Section we will determine the minimal variability five-dimensional weighting vector under orness levels α = 0, 0.1, . . . , 0.9 and 1.0. First, we construct the
corresponding partition as
\[ (0, 1) = \bigcup_{r=2}^{4} J_{r,5} \;\cup\; J_{1,5} \;\cup\; \bigcup_{s=2}^{4} J_{1,s}, \]
where
\[ J_{r,5} = \Bigl[\,\frac{1}{3}\cdot\frac{5-r-1}{5-1},\; \frac{1}{3}\cdot\frac{5-r}{5-1}\,\Bigr] = \Bigl[\,\frac{4-r}{12},\; \frac{5-r}{12}\,\Bigr] \]
for r = 2, 3, 4, and
\[ J_{1,5} = \Bigl[\,\frac{1}{3}\cdot\frac{5-2}{5-1},\; \frac{1}{3}\cdot\frac{10-1}{5-1}\,\Bigr] = \Bigl[\,\frac{3}{12},\; \frac{9}{12}\,\Bigr], \]
and
\[ J_{1,s} = \Bigl[\,1 - \frac{1}{3}\cdot\frac{s-1}{5-1},\; 1 - \frac{1}{3}\cdot\frac{s-2}{5-1}\,\Bigr] = \Bigl[\,\frac{13-s}{12},\; \frac{14-s}{12}\,\Bigr] \]
for s = 2, 3, 4, and, therefore, we get
\[ (0, 1) = \Bigl(0, \frac{1}{12}\Bigr] \cup \Bigl[\frac{1}{12}, \frac{2}{12}\Bigr] \cup \Bigl[\frac{2}{12}, \frac{3}{12}\Bigr] \cup \Bigl[\frac{3}{12}, \frac{9}{12}\Bigr] \cup \Bigl[\frac{9}{12}, \frac{10}{12}\Bigr] \cup \Bigl[\frac{10}{12}, \frac{11}{12}\Bigr] \cup \Bigl[\frac{11}{12}, 1\Bigr). \]
Without loss of generality we can assume that α < 0.5, because if a weighting vector W is optimal for problem (2) under some given degree of orness α < 0.5, then its reverse, denoted by W^R and defined as
\[ w_i^R = w_{n-i+1}, \quad i = 1, \dots, n, \]
is also optimal for problem (2) under degree of orness (1 − α). Indeed, as was shown by Yager [7], we find that
\[ D^2(W^R) = D^2(W) \quad\text{and}\quad \mathrm{orness}(W^R) = 1 - \mathrm{orness}(W). \]
Therefore, for any α > 0.5, we can solve problem (2) by solving it for level of
orness (1 − α) and then taking the reverse of that solution.
Then we obtain the optimal weights from (11) as follows
• if α = 0 then W ∗ (α) = W ∗ (0) = (0, 0, . . . , 0, 1)T and, therefore,
W ∗ (1) = (W ∗ (0))R = (1, 0, . . . , 0, 0)T .
• if α = 0.1 then
\[ \alpha \in J_{3,5} = \Bigl[\frac{1}{12}, \frac{2}{12}\Bigr], \]
and the associated minimal variability weights are
\[
\begin{aligned}
w_1^*(0.1) &= 0, \\
w_2^*(0.1) &= 0, \\
w_3^*(0.1) &= \frac{2(10 + 3 - 2) - 6(5-1)(1-0.1)}{(5-3+1)(5-3+2)} = \frac{0.4}{12} = 0.0333, \\
w_5^*(0.1) &= \frac{2}{5-3+1} - w_3^*(0.1) = 0.6334, \\
w_4^*(0.1) &= \frac{1}{2}\cdot w_3^*(0.1) + \frac{1}{2}\cdot w_5^*(0.1) = 0.3333.
\end{aligned}
\]
So,
\[ W^*(\alpha) = W^*(0.1) = (0, 0, 0.033, 0.333, 0.633)^T, \]
and, consequently,
\[ W^*(0.9) = (W^*(0.1))^R = (0.633, 0.333, 0.033, 0, 0)^T, \]
with variance D^2(W^*(0.1)) = 0.0625.
• if α = 0.2 then
\[ \alpha \in J_{2,5} = \Bigl[\frac{2}{12}, \frac{3}{12}\Bigr], \]
and in a similar manner we find that the associated minimal variability weighting vector is
\[ W^*(0.2) = (0.0, 0.04, 0.18, 0.32, 0.46)^T, \]
and, therefore,
\[ W^*(0.8) = (0.46, 0.32, 0.18, 0.04, 0.0)^T, \]
with variance D^2(W^*(0.2)) = 0.0296.
• if α = 0.3 then
\[ \alpha \in J_{1,5} = \Bigl[\frac{3}{12}, \frac{9}{12}\Bigr], \]
and in a similar manner we find that the associated minimal variability weighting vector is
\[ W^*(0.3) = (0.04, 0.12, 0.20, 0.28, 0.36)^T, \]
and, therefore,
\[ W^*(0.7) = (0.36, 0.28, 0.20, 0.12, 0.04)^T, \]
with variance D^2(W^*(0.3)) = 0.0128.
• if α = 0.4 then
\[ \alpha \in J_{1,5} = \Bigl[\frac{3}{12}, \frac{9}{12}\Bigr], \]
and in a similar manner we find that the associated minimal variability weighting vector is
\[ W^*(0.4) = (0.12, 0.16, 0.20, 0.24, 0.28)^T, \]
and, therefore,
\[ W^*(0.6) = (0.28, 0.24, 0.20, 0.16, 0.12)^T, \]
with variance D^2(W^*(0.4)) = 0.0032.
• if α = 0.5 then
\[ W^*(0.5) = (0.2, 0.2, 0.2, 0.2, 0.2)^T, \]
with variance D^2(W^*(0.5)) = 0.
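The table above is easy to reproduce with the sketches from the previous sections; the short loop below (ours) recomputes the five-dimensional weights and checks the reverse-vector symmetry. The variances can be recomputed with the variance helper as well; small deviations from the rounded figures quoted above (e.g. at α = 0.1) can occur because the weights there are printed to three decimals.

```python
import numpy as np
# reuses orness() and minimal_variability_weights() from the earlier sketches

for alpha in (0.1, 0.2, 0.3, 0.4, 0.5):
    w = minimal_variability_weights(5, alpha)
    print(alpha, np.round(w, 3), round(orness(w[::-1]), 3))
# each row shows alpha, W*(alpha) and the orness of the reversed vector, which equals 1 - alpha
```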
4 Summary
We have extended the power of decision making with OWA operators by introducing considerations of minimizing variability into the process of selecting the
optimal alternative. Of particular significance in this work is the development of a
methodology to calculate the minimal variability weights in an analytic way.
References
[1] R.G. Brown, Smoothing, Forecasting and Prediction of Discrete Time Series (Prentice-Hall, Englewood Cliffs, 1963).
[2] V. Chankong and Y.Y. Haimes, Multiobjective Decision Making: Theory and Methodology (North-Holland, 1983).
[3] D. Filev and R.R. Yager, On the issue of obtaining OWA operator weights, Fuzzy Sets and Systems, 94(1998) 157-169.
[4] R. Fullér and P. Majlender, An analytic approach for obtaining maximal entropy OWA operator weights, Fuzzy Sets and Systems, 2001 (to appear).
[5] M. O'Hagan, Aggregating template or rule antecedents in real-time expert systems with fuzzy set logic, in: Proc. 22nd Annual IEEE Asilomar Conf. on Signals, Systems and Computers, Pacific Grove, CA, 1988, pp. 681-689.
[6] R.R. Yager, Ordered weighted averaging aggregation operators in multicriteria decision making, IEEE Trans. on Systems, Man and Cybernetics, 18(1988) 183-190.
[7] R.R. Yager, Families of OWA operators, Fuzzy Sets and Systems, 59(1993) 125-148.
[8] R.R. Yager, On the inclusion of variance in decision making under uncertainty, International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems, 4(1995) 401-419.
[9] R.R. Yager, On the analytic representation of the Leximin ordering and its application to flexible constraint propagation, European Journal of Operational Research, 102(1997) 176-192.
[10] R.R. Yager and D. Filev, Induced ordered weighted averaging operators, IEEE Trans. on Systems, Man and Cybernetics - Part B: Cybernetics, 29(1999) 141-150.
5 Citations
[A10] Robert Fullér and Péter Majlender, On obtaining minimal variability
OWA operator weights, FUZZY SETS AND SYSTEMS, 136(2003) 203-215.
[MR1980864]
in journals
A10-c36 Xinwang Liu and Shilian Han, Orness and parameterized RIM quantifier aggregation with OWA operators: A summary, International Journal of
Approximate Reasoning, Volume 48, Issue 1, Pages 77-97. 2008
http://dx.doi.org/10.1016/j.ijar.2007.05.006
A10-c35 Ronald R. Yager, A knowledge-based approach to adversarial decision
making, International Journal of Intelligent Systems, Volume 23, Issue 1, pp.
1-21. 2008
http://dx.doi.org/10.1002/int.20254
A10-c34 R.R. Yager, Prioritized aggregation operators, International Journal of
Approximate Reasoning, Volume 48, Issue 1, April 2008, Pages 263-274.
2008
http://dx.doi.org/10.1016/j.ijar.2007.08.009
A10-c33 B.S. Ahn, H. Park, Least-squared ordered weighted averaging operator
weights, International Journal of Intelligent Systems, 23(1), pp. 33-49. 2008
http://dx.doi.org/10.1002/int.20257
A10-c32 B.S. Ahn, Preference relation approach for obtaining OWA operators weights, International Journal of Approximate Reasoning, 47 (2), pp. 166-178. 2008
http://dx.doi.org/10.1016/j.ijar.2007.04.001
Instead of maximizing the degree of dispersion, Fuller and Majlender [A10] presented a method for deriving the minimal variability weighting vector for any level of orness, using Kuhn-Tucker second-order sufficiency conditions for optimality. (page 167)
A10-c31 B.S. Ahn, The OWA aggregation with uncertain descriptions on weights and input arguments, IEEE Transactions on Fuzzy Systems 15 (6), pp. 1130-1134. 2007
http://dx.doi.org/10.1109/TFUZZ.2007.895945
A10-c30 L. Zhenbang, Z. Lihua, A hybrid fuzzy cognitive model based on weighted
OWA operators and single-antecedent rules, International Journal of Intelligent Systems 22 (11), pp. 1189-1196. 2007
http://dx.doi.org/10.1002/int.20243
A10-c29 Xinwang Liu and Hsinyi Lin, Parameterized approximation of fuzzy
number with minimum variance weighting functions, MATHEMATICAL
AND COMPUTER MODELLING, 46 (2007) 1398-1409. 2007
http://dx.doi.org/10.1016/j.mcm.2007.01.011
The problem of getting the minimum variance OWA weighting
vector was proposed by Fullér and Majlender [A10] to get minimum variance OWA weighting vector with a given orness level.
(page 1400)
A10-c28 Llamazares, B., Choosing OWA operator weights in the field of Social
Choice, INFORMATION SCIENCES, 177 (21), pp. 4745-4756. 2007.
http://dx.doi.org/10.1016/j.ins.2007.05.015
To avoid the resolution of a nonlinear optimization problem, Yager
[32] introduces a simpler procedure that tries to keep the spirit of
maximizing the entropy for a given level of orness. Similar approaches (through a fixed orness level) have also been proposed
by Fullér and Majlender [A10], (page 4746)
A10-c27 Leon, T., Zuccarello, P., Ayala, G., de Ves, E., Domingo, J., Applying
logistic regression to relevance feedback in image retrieval systems, PATTERN RECOGNITION, 40 (10), pp. 2621-2632. 2007
http://dx.doi.org/10.1016/j.patcog.2007.02.002
A10-c26 Wang, Y.-M., Luo, Y., Hua, Z., Aggregating preference rankings using OWA operator weights, INFORMATION SCIENCES, 177 (16), pp. 3356-3363. 2007
http://dx.doi.org/10.1016/j.ins.2007.01.008
The minimum variance method proposed by Fullér and Majlender
[A10], which minimizes the variance of OWA operator weights
under a given level of orness and requires the solution of the following mathematical programming model: (page 3358)
A10-c25 Amin, G.R., Notes on properties of the OWA weights determination model, COMPUTERS & INDUSTRIAL ENGINEERING, 52 (4), pp. 533-538. 2007
http://dx.doi.org/10.1016/j.cie.2007.03.002
An important issue related to the theory and application of OWA
operators is the determination of the weights of the operators,
(Amin & Emrouznejad, 2006; Fuller & Majlender, 2003; O’Hagan,
1988; Wang & Parkan, 2005). (page 533)
A10-c24 Liu, X., The solution equivalence of minimax disparity and minimum
variance problems for OWA operators, INTERNATIONAL JOURNAL OF
APPROXIMATE REASONING, 45 (1), pp. 68-81. 2007
http://dx.doi.org/10.1016/j.ijar.2006.06.004
Apart from MEOWA operator, Fullér and Majlender [A10] suggested a minimum variance approach to obtain the minimal variability OWA weighting vector under given orness level with a
quadratic programming problem. They also proposed an analytical method to get the optimal solution. (page 69)
In this section, we will prove the MSEOWA operator is the optimal solution of the minimum variance problem for OWA operator
which was proposed by Fullér and Majlender [A10]. (page 73)
Fullér and Majlender [A10] proposed an analytical approach for
(23) by dividing (0, 1] into 2n − 1 subintervals to decide which
subinterval the given level of orness lies in. The MSEOWA weighting vector generating method can be seen as an alternative expression of the unique optimal solution, with the parameters being
expressed directly. (page 75)
A10-c23 Wang YM, Parkan C, A preemptive goal programming method for aggregating OWA operator weights in group decision making, INFORMATION SCIENCES, 177 (8): 1867-1877 APR 15 2007
http://dx.doi.org/10.1016/j.ins.2006.07.023
Other approaches include Nettleton and Torra’s [7] genetic algorithm (GA), Fullér and Majlender’s [A10] minimum variance
method, which produces minimum variability OWA operator weights,
and Liu and Chen’s [5] parametric geometric method, which could
be used to obtain maximum entropy weights. (page 1868)
Other approaches Fullér and Majlender [A10] suggested a minimum variance approach, which minimizes the variance of OWA
operator weights under a given degree of orness. A set of OWA
operator weights with minimal variability could then be generated. (page 1869)
A10-c22 Wang YM, Luo Y, Liu XW, Two new models for determining OWA
operator weights, COMPUTERS & INDUSTRIAL ENGINEERING 52 (2):
203-209 MAR 2007
http://dx.doi.org/10.1016/j.cie.2006.12.002
Fullér and Majlender (2003) also suggested a minimum variance
method to obtain the minimal variability OWA operator weights.
(page 203)
A10-c21 Wu J, Liang CY, Huang YQ, An argument-dependent approach to determining OWA operator weights based on the rule of maximum entropy, INTERNATIONAL JOURNAL OF INTELLIGENT SYSTEMS, 22 (2): 209-221 FEB 2007
http://doi.wiley.com/10.1002/int.20201
A10-c20 Hong DH, A note on the minimal variability OWA operator weights, INTERNATIONAL JOURNAL OF UNCERTAINTY FUZZINESS AND KNOWLEDGE-BASED SYSTEMS 14 (6): 747-752 DEC 2006
A10-c19 Liu XW An orness measure for quasi-arithmetic means, IEEE TRANSACTIONS ON FUZZY SYSTEMS, 14 (6): 837-848 DEC 2006
http://dx.doi.org/10.1109/TFUZZ.2006.879990
A10-c18 Liu XW, Lou HW Parameterized additive neat OWA operators with
different orness levels, INTERNATIONAL JOURNAL OF INTELLIGENT
SYSTEMS, 21 (10): 1045-1072 OCT 2006
http://doi.wiley.com/10.1002/int.20176
The idea of getting the minimum variance weighting vector was
proposed by Yager. [38] Then it was applied to the OWA operator
by Fullér and Majlender [A10] to get the minimum variance OWA
weighting vector with a given orness level. (pages 1061-1062)
A10-c17 Liu, X. Reply to ”Comments on the paper: On the properties of equidifferent OWA operator”, INTERNATIONAL JOURNAL OF APPROXIMATE
REASONING, 43 (1), pp. 113-115. 2006
http://dx.doi.org/10.1016/j.ijar.2006.02.004
A10-c16 Liu XW On the properties of equidifferent OWA operator, INTERNATIONAL JOURNAL OF APPROXIMATE REASONING, 43 (1): 90-107
SEP 2006
http://dx.doi.org/10.1016/j.ijar.2005.11.003
First, we will prove the equivalence of MSEOWA operator and
the minimum variance OWA operator proposed by Fullér and Majlender [A10]. (page 96)
A10-c15 Amin GR, Emrouznejad A, An extended minimax disparity to determine
the OWA operator weights, COMPUTERS & INDUSTRIAL ENGINEERING 50 (3): 312-316 JUL 2006
http://dx.doi.org/10.1016/j.cie.2006.06.006
An important issue related to the theory and application of OWA
operators is the determination of the weights of the operators
(Fuller & Majlender, 2003; O’Hagan, 1988; Wang & Parkan,
2005). (page 312)
A10-c14 Ahn BS On the properties of OWA operator weights functions with constant level of orness, IEEE TRANSACTIONS ON FUZZY SYSTEMS 14
(4): 511-515 AUG 2006
http://dx.doi.org/10.1109/TFUZZ.2006.876741
It is clear that the actual results of aggregation performed by the
OWA operators depend on the forms of the weighting vectors,
which play a key role in the aggregation process. Filev and Yager
[3] presented a way of obtaining the weights associated with the
OWA aggregation in a situation in which data on the arguments
and the aggregated value have been observed (see [2], [A13],
[A10], and [7] for the methods of determining the OWA weights).
(page 511)
A10-c13 Marchant T Maximal orness weights with a fixed variability for owa operators, INTERNATIONAL JOURNAL OF UNCERTAINTY FUZZINESS
AND KNOWLEDGE-BASED SYSTEMS 14 (3): 271-276 JUN 2006
A10-c12 Liu XW, Some properties of the weighted OWA operator, IEEE TRANSACTIONS ON SYSTEMS MAN AND CYBERNETICS PART B - CYBERNETICS, 36(1): 118-127 FEB 2006
http://dx.doi.org/10.1109/TSMCA.2005.854496
A10-c11 Smith PN, Flexible aggregation in multiple attribute decision making:
Application to the Kuranda Range Road upgrade, CYBERNETICS AND
SYSTEMS, 37(1): 1-22 JAN-FEB 2006
A10-c10 Liu XW, On the properties of equidifferent RIM quantifier with generating function, INTERNATIONAL JOURNAL OF GENERAL SYSTEMS 34
(5): 579-594 OCT 2005
http://dx.doi.org/10.1080/00268970500067880
A10-c9 Ying-Ming Wang, Celik Parkan, A minimax disparity approach for obtaining OWA operator weights, INFORMATION SCIENCES, 175(2005) 20-29. 2005
http://dx.doi.org/10.1016/j.ins.2004.09.003
Fullér and Majlender [A10] suggested a minimum variance approach, which minimizes the variance of OWA operator weights under a given level of orness. A set of OWA operator weights with minimal variability could then be generated. Their approach requires the solution of the following mathematical programming problem:
\[
\begin{aligned}
\text{minimize} \quad & D^2(W) = \frac{1}{n}\sum_{i=1}^{n}\Bigl(w_i - \frac{1}{n}\Bigr)^2 = \frac{1}{n}\sum_{i=1}^{n} w_i^2 - \frac{1}{n^2} \\
\text{subject to} \quad & \mathrm{orness}(w) = \sum_{i=1}^{n} \frac{n-i}{n-1}\cdot w_i = \alpha, \quad 0 \le \alpha \le 1, \\
& w_1 + \cdots + w_n = 1, \quad 0 \le w_i, \ i = 1, \dots, n.
\end{aligned}
\]
(page 23)
A10-c8 Smith, P.N. Andness-directed weighted averaging operators: Possibilities
for environmental project evaluation, Journal of Environmental Systems, 30
(4), pp. 333-348. 2003
in proceedings
A10-c7 B. Llamazares, J.L. Garcia-Lapresta, Extension of some voting systems to
the field of gradual preferences, in: Bustince, Humberto; Herrera, Francisco;
Montero, Javier (Eds.) Fuzzy Sets and Their Extensions: Representation,
Aggregation and Models Intelligent Systems from Decision Making to Data
Mining, Web Intelligence and Computer Vision Series: Studies in Fuzziness
and Soft Computing, Vol. 220, Springer, [ISBN: 978-3-540-73722-3] 2008,
pp. 297-315. 2008
http://dx.doi.org/10.1007/978-3-540-73723-0 15
A10-c6 Benjamin Fonooni, Behzad Moshiri and Caro Lucas, Applying Data Fusion in a Rational Decision Making with Emotional Regulation, in: 50 Years
of Artificial Intelligence, Essays Dedicated to the 50th Anniversary of Artificial Intelligence, Lecture Notes in Computer Science, Volume 4850/2007,
Springer, 2007, pp. 320-331. 2007
http://dx.doi.org/10.1007/978-3-540-77296-5 29
A10-c5 M. Zarghaami, R. Ardakanian, F. Szidarovszky, Sensitivity analysis
of an information fusion tool: OWA operator in: Belur V. Dasarathy ed.,
Proceedings of SPIE - The International Society for Optical Engineering
6571, Multisensor, Multisource Information Fusion: Architectures, Algorithms, and Applications 2007, art. no. 65710P, 2007
http://dx.doi.org/10.1117/12.722323
A10-c4 Felix, R., Efficient decision making with interactions between goals, Proceedings of the 2007 IEEE Symposium on Computational Intelligence in
Multicriteria Decision Making, MCDM 2007, art. no. 4223007, pp. 221-226. 2007
http://dx.doi.org/10.1109/MCDM.2007.369441
A10-c3 M. Zarghaami, R. Ardakanian, F. Szidarovszky, Obtaining robust decisions under uncertainty by sensitivity analysis on OWA operator, Proceedings of the 2007 IEEE Symposium on Computational Intelligence in Multicriteria Decision Making, MCDM 2007, art. no. 4223017, pp. 280-287.
2007
http://dx.doi.org/10.1109/MCDM.2007.369102
A10-c2 Beliakov, G., Pradera, A., Calvo, T., Other types of aggregation and additional properties, in: Aggregation Functions: A Guide for Practitioners,
Studies in Fuzziness and Soft Computing, Vol. 221, [ISBN 978-3-540-73720-9], Springer, pp. 297-304. 2007
http://dx.doi.org/ 10.1007/978-3-540-73721-6 7
A10-c1 Zadrozny S, Kacprzyk J On tuning OWA operators in a flexible querying
interface, In: Flexible Query Answering Systems, 7th International Conference, FQAS 2006, LECTURE NOTES IN COMPUTER SCIENCE, vol.
4027, pp. 97-108. 2006
http://dx.doi.org/10.1007/11766254 9
Fuller and Majlender considered also the variance of an OWA
operator defined as:
[...]
Then they proposed [A10] a class of OWA operators analogous
to MEOWA (14) where the variance instead of dispersion is maximized. Also in this case Fuller and Majlender developed analytical formulae for the weight vector W . (page 101)