Manipulation in Group Argument Evaluation

Proceedings of the Twenty-Second International Joint Conference on Artificial Intelligence
Martin Caminada∗
SnT
Université du Luxembourg
Luxembourg
Gabriella Pigozzi
LAMSADE
Université Paris-Dauphine
Paris, France
Mikołaj Podlaszewski
ILIAS / SnT
Université du Luxembourg
Luxembourg
martin.caminada@uni.lu
gabriella.pigozzi@lamsade.dauphine.fr
mikolaj.podlaszewski@uni.lu
Abstract
Given an argumentation framework and a group of agents, the individuals may have divergent opinions on the status of the arguments. If the group needs to reach a common position on the argumentation framework, the question is how the individual evaluations can be mapped into a collective one. This problem has recently been investigated by Caminada and Pigozzi. In this paper, we investigate the behaviour of two such operators from a social choice-theoretic point of view. In particular, we study under which conditions these operators are Pareto optimal and whether they are manipulable.
1 Introduction
Individuals may draw different conclusions from the same information. Members of a jury may disagree on the verdict
even though each member possesses the same information on
the case under discussion. This happens because individuals can hold different reasonable positions on the information
they share. Hence, the question is how the group can reach a
common stance starting from the positions of each member.
In this paper we are interested in group decisions where
members share the same information. One of the principles
of argumentation theory is that an argumentation framework
can have several extensions/labellings. If the information the
group shares is represented by an argumentation framework,
and each agent’s reasonable position is an extension/labelling
of that argumentation framework, the question is how to aggregate the individual positions into a collective one.
Caminada and Pigozzi [Caminada and Pigozzi, 2011] have
studied this issue in abstract argumentation and provided
three aggregation operators. The key property of these operators is that the collective outcome is ‘compatible’ with each
individual position. That is, an agent who has to defend the
collective position in public will never have to argue directly
against his own private position.
The aim of this paper is to formalise and examine the intuition that, although every social outcome that is compatible with one's own labelling is acceptable, some outcomes are more acceptable than others. That is, a collective outcome is more acceptable than another if it is compatible and more similar to one's own position. In order to capture how much the various possible positions differ from each other, we use the notion of distance among labellings. Distance-based approaches have already been used to tackle aggregation problems, for instance in social choice theory [Brams et al., 2007], belief merging [Konieczny and Perez, 1998] and its application to judgment aggregation [Pigozzi, 2006]. Thus, we say that a collective outcome is more acceptable than another if it is compatible and its distance to one's own labelling is smaller.
The observations above give rise to two new research questions, addressed in the current paper:
(i) Are the social outcomes of the aggregation operators in [Caminada and Pigozzi, 2011] Pareto optimal if preferences between different outcomes are also taken into account?
(ii) Do agents have an incentive to misrepresent their own opinion in order to obtain a more favourable outcome? And if so, what are the effects from the perspective of social welfare?
We focus on the behaviour of two of the three aggregation operators of [Caminada and Pigozzi, 2011]. Pareto optimality is a key principle of welfare economics which intuitively stipulates that a social state cannot be further improved. The first contribution of the paper is therefore to study whether the compatible social outcomes selected by these aggregation operators are Pareto optimal. In order to investigate Pareto optimality, we consider the submitted labelling as the individual's most preferred option. By using a notion of distance, we derive the individual preference ordering over the other permissible labellings. We show that the two aggregation operators are Pareto optimal when a certain distance is used.
The second contribution concerns the manipulability of the aggregation operators. Manipulability is usually considered an undesirable property of social choice rules. If an aggregation rule is manipulable, an individual may, upon learning the preferences of the other agents, misrepresent his input to ensure a social outcome that is better for him than it would have been had he voted sincerely. Our findings show that, while both operators are manipulable, the sceptical aggregation operator guarantees that an agent who lies not only obtains a preferable outcome for himself, but even promotes social welfare; we call this a benevolent lie.
∗ Supported by the National Research Fund, Luxembourg (LAAMI project).
Section 2 outlines the abstract argumentation theory. In Section 3 we define preferences over the individual evaluations. Pareto optimality and manipulability issues are addressed in Sections 4 and 5 respectively. Section 6 concludes.
2 Preliminaries
2.1 Argumentation preliminaries
Definition 1 (Argumentation framework). Let U be the universe of all possible arguments. An argumentation framework is a pair (Ar, def) where Ar is a finite subset of U and def ⊆ Ar × Ar.
We say that an argument A defeats (or attacks) an argument B iff (A, B) ∈ def. For example, in Fig. 1, we have that A attacks B and that B attacks C.
Figure 1: An argumentation framework.
Following [Caminada and Pigozzi, 2011], we use the argument labellings approach of [Caminada, 2006] rather than Dung's original extension approach [Dung, 1995]. The idea of a labelling is to associate with each argument exactly one label, which can be either in, out or undec. The label in indicates that the argument is explicitly accepted, out indicates that the argument is explicitly rejected, and undec indicates that one abstains from an explicit position on the argument.
Definition 2 (Labelling). Let (Ar, def) be an argumentation framework. A labelling is a total function L : Ar → {in, out, undec}.
We write in(L) for {A | L(A) = in}, out(L) for {A | L(A) = out} and undec(L) for {A | L(A) = undec}. Sometimes we write a labelling L as a triple (Args1, Args2, Args3) where Args1 = in(L), Args2 = out(L) and Args3 = undec(L). When it only matters whether an agent has a clear position (in or out) on an argument, we write dec(L) for in(L) ∪ out(L).
Although labellings allow one to express any position on which arguments are accepted, rejected or left undecided, some of these positions are more reasonable than others. Basically, the task of an argumentation semantics is to provide a criterion for determining which positions can be considered reasonable. Several such semantics have been defined in the literature, but we will consider only the following ones.¹
Definition 3 (Illegal arguments). Let L be a labelling of argumentation framework (Ar, def) and let A ∈ Ar. We say:
1. A is illegally in iff A is labelled in but not all its defeaters are labelled out;
2. A is illegally out iff A is labelled out but it does not have a defeater that is labelled in;
3. A is illegally undec iff A is labelled undec but either all its defeaters are labelled out or it has a defeater that is labelled in.
Definition 4 (Admissible labelling). An admissible labelling is a labelling without arguments that are illegally in and without arguments that are illegally out.
Definition 5 (Complete labelling). A complete labelling is a labelling without arguments that are illegally in, without arguments that are illegally out and without arguments that are illegally undec.
For example, argument A of Fig. 1 is in as it has no defeaters. Argument B must be out since it has a defeater (argument A) and this defeater is in. Finally, argument C is in since its only defeater (argument B) is out.
The basic difference between an extension [Dung, 1995] and a labelling is that an extension only represents the arguments that are accepted, whereas a labelling also represents the arguments that are rejected or left undecided. Labellings thus provide a slightly more expressive, though equivalent, way to express Dung's theory of argumentation. For results on how labellings relate to extensions, see [Caminada, 2006].
In essence, a labelling-based semantics can be seen as a function that, given an argumentation framework, yields zero or more labellings, each of which can be seen as a reasonable position that one can take on that argumentation framework.
¹ For an explanation of why this is not a too restrictive assumption, the reader is referred to [Caminada and Pigozzi, 2011].
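To make the legality conditions of Definitions 3-5 concrete, the following Python sketch (our own illustration, not part of the original paper) checks whether a labelling of a small framework is admissible or complete; the encoding of the framework as a set of defeat pairs and the helper names are assumptions made for this example only.

    # A labelling maps every argument to "in", "out" or "undec".
    IN, OUT, UNDEC = "in", "out", "undec"

    def defeaters(A, defeats):
        """All arguments B such that (B, A) is in the defeat relation."""
        return {b for (b, a) in defeats if a == A}

    def illegally_in(A, L, defeats):
        return L[A] == IN and not all(L[b] == OUT for b in defeaters(A, defeats))

    def illegally_out(A, L, defeats):
        return L[A] == OUT and not any(L[b] == IN for b in defeaters(A, defeats))

    def illegally_undec(A, L, defeats):
        ds = defeaters(A, defeats)
        return L[A] == UNDEC and (all(L[b] == OUT for b in ds) or
                                  any(L[b] == IN for b in ds))

    def is_admissible(L, defeats):
        return not any(illegally_in(A, L, defeats) or illegally_out(A, L, defeats)
                       for A in L)

    def is_complete(L, defeats):
        return (is_admissible(L, defeats) and
                not any(illegally_undec(A, L, defeats) for A in L))

    # The framework of Fig. 1: A defeats B, B defeats C.
    defeats = {("A", "B"), ("B", "C")}
    assert is_complete({"A": IN, "B": OUT, "C": IN}, defeats)
    assert is_admissible({"A": IN, "B": UNDEC, "C": UNDEC}, defeats)

The same helpers are reused in the sketches below.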
2.2 Aggregation problem
We are now ready to formally summarise the problem of aggregating individual labellings into a collective position on a given argumentation framework. The definitions and results in this section are from [Caminada and Pigozzi, 2011].
Given a set of individuals N = {1, . . . , n}, we need to define a general labellings aggregation operator OAF that assigns a collective labelling LColl to each profile P = {L1, . . . , Ln} of individual admissible labellings.
Definition 6 (Labelling aggregation operator OAF). Let Labellings be the set of all possible labellings of argumentation framework AF = (Ar, def). A general labellings aggregation operator is a function OAF : 2^Labellings − {∅} → Labellings such that OAF({L1, . . . , Ln}) = LColl.
We are interested in aggregation operators that produce a collective outcome compatible with the individual opinions. The idea is to ensure that each member can publicly defend the common decision without having to directly go against his own position. Two notions of compatibility have been introduced in [Caminada and Pigozzi, 2011].
Definition 7 (Less or equally committed ⊑). Let L1 and L2 be two labellings of argumentation framework AF = (Ar, def). We say that L1 is less or equally committed than L2 (L1 ⊑ L2) iff in(L1) ⊆ in(L2) and out(L1) ⊆ out(L2).
Definition 8 (Compatible labellings ≈). Let L1 and L2 be two labellings of argumentation framework (Ar, def). We say that L1 is compatible with L2 (denoted as L1 ≈ L2) iff in(L1) ∩ out(L2) = ∅ and out(L1) ∩ in(L2) = ∅.
The intuition is that, in order to be compatible, two labellings cannot have in-out conflicts. It can be noted that ⊑ is a partial order on labellings, whereas ≈ is not transitive. It holds that if L1 ⊑ L2, then L1 ≈ L2.
We are now ready to state the sceptical and credulous aggregation operators.
Definition 9 (Sceptical initial aggregation operator sioAF). Let Labellings be the set of all possible labellings of argumentation framework AF = (Ar, def). The sceptical initial aggregation operator is a function sioAF : 2^Labellings − {∅} → Labellings such that sioAF({L1, . . . , Ln}) =
{(A, in) | ∀i ∈ {1, . . . , n} : Li(A) = in} ∪
{(A, out) | ∀i ∈ {1, . . . , n} : Li(A) = out} ∪
{(A, undec) | ∃i ∈ {1, . . . , n} : Li(A) ≠ in ∧ ∃j ∈ {1, . . . , n} : Lj(A) ≠ out}.
The idea is that the group initially labels an argument in (resp. out) if all individual participants agree that the argument is in (resp. out). Otherwise it is undec. This procedure does not preserve admissibility: it may return a labelling with illegally in or illegally out arguments. This is why, after the initial aggregation, a second iterative phase follows, in which all the illegally in or out arguments are relabelled to undec. Formally, this is defined as follows:
Definition 10 (Down-admissible labelling). Let L be a labelling of argumentation framework AF = (Ar, def). The down-admissible labelling of L is the biggest element of the set of all admissible labellings that are less or equally committed than L.
The down-admissible labelling is defined according to the partial order given by ⊑. Such an element always exists and is unique [Caminada and Pigozzi, 2011]. We can now define the sceptical operator, which ensures admissible outcomes.
Definition 11 (Sceptical aggregation operator soAF). Let Labellings be the set of all labellings of argumentation framework AF = (Ar, def). The sceptical aggregation operator is a function soAF : 2^Labellings − {∅} → Labellings such that soAF({L1, . . . , Ln}) is the down-admissible labelling of sioAF({L1, . . . , Ln}).
The aggregation operator above produces social outcomes that are less or equally committed than all the individual labellings. This result is also maximal:
Theorem 1. Let L1, . . . , Ln (n ≥ 1) be labellings of argumentation framework AF = (Ar, def). Let Lso be soAF({L1, . . . , Ln}). It holds that Lso is the biggest admissible labelling such that for every i ∈ {1, . . . , n}: Lso ⊑ Li.
The second aggregation operator is the credulous one.
Definition 12 (Credulous initial aggregation operator cioAF). Let Labellings be the set of all possible labellings of argumentation framework AF = (Ar, def). The credulous initial aggregation operator is a function cioAF : 2^Labellings − {∅} → Labellings such that cioAF({L1, . . . , Ln}) =
{(A, in) | ∃i ∈ {1, . . . , n} : Li(A) = in ∧ ¬∃j ∈ {1, . . . , n} : Lj(A) = out} ∪
{(A, out) | ∃i ∈ {1, . . . , n} : Li(A) = out ∧ ¬∃j ∈ {1, . . . , n} : Lj(A) = in} ∪
{(A, undec) | ∀i ∈ {1, . . . , n} : Li(A) = undec ∨ (∃j ∈ {1, . . . , n} : Lj(A) = in ∧ ∃k ∈ {1, . . . , n} : Lk(A) = out)}.
The idea is that the group initially labels an argument A in (resp. out) if there is someone who believes A is in (resp. out) and nobody thinks A is out (resp. in). A is labelled undec in all other cases. The admissibility problem reappears here and is solved again by an iterative second phase in which all illegally in and out arguments are relabelled undec.
Definition 13 (Credulous aggregation operator coAF). Let AdmLabellings be the set of all admissible labellings of argumentation framework AF = (Ar, def). The credulous aggregation operator is a function coAF : 2^AdmLabellings − {∅} → AdmLabellings such that coAF({L1, . . . , Ln}) is the down-admissible labelling of cioAF({L1, . . . , Ln}).
It holds that the credulous outcome labelling Lco = coAF({L1, . . . , Ln}) is compatible with all the individual labellings, i.e. Lco ≈ Li for each i ∈ {1, . . . , n}.
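The following sketch (our own illustration, continuing the helpers introduced above) implements the initial operators of Definitions 9 and 12 and obtains the sceptical and credulous outcomes via the iterative relabelling described in the text; representing profiles as Python lists of labelling dictionaries is an assumption made for this example.

    def sio(profile):
        """Sceptical initial operator (Definition 9)."""
        result = {}
        for A in profile[0]:
            if all(L[A] == IN for L in profile):
                result[A] = IN
            elif all(L[A] == OUT for L in profile):
                result[A] = OUT
            else:
                result[A] = UNDEC
        return result

    def cio(profile):
        """Credulous initial operator (Definition 12)."""
        result = {}
        for A in profile[0]:
            labels = {L[A] for L in profile}
            if IN in labels and OUT not in labels:
                result[A] = IN
            elif OUT in labels and IN not in labels:
                result[A] = OUT
            else:
                result[A] = UNDEC
        return result

    def down_admissible(labelling, defeats):
        """Iteratively relabel illegally in/out arguments to undec (Definition 10)."""
        L = dict(labelling)
        changed = True
        while changed:
            changed = False
            for A in L:
                if illegally_in(A, L, defeats) or illegally_out(A, L, defeats):
                    L[A] = UNDEC
                    changed = True
        return L

    def sceptical(profile, defeats):    # Definition 11
        return down_admissible(sio(profile), defeats)

    def credulous(profile, defeats):    # Definition 13
        return down_admissible(cio(profile), defeats)

    # Two admissible labellings of Fig. 1 (defeats as in the earlier sketch)
    # and their sceptical outcome.
    L1 = {"A": IN, "B": OUT, "C": IN}
    L2 = {"A": IN, "B": UNDEC, "C": UNDEC}
    print(sceptical([L1, L2], defeats))   # {'A': 'in', 'B': 'undec', 'C': 'undec'}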
3 Preferences
In order to investigate Pareto optimality and strategy-proofness we need to assume that agents have preferences over the possible collective outcomes.
We write L ≥i L′ to denote that agent i prefers labelling L to L′. We write L ∼i L′, and say that i is indifferent between L and L′, iff L ≥i L′ and L′ ≥i L. Finally, we write L >i L′ (agent i strictly prefers L to L′) iff L ≥i L′ and not L ∼i L′.
We assume that the labelling submitted by each agent is his most preferred one and, hence, the one he would like to see adopted by the whole group. The order over the other possible labellings is generated according to the distance from the most preferred one. For this purpose, we now define the Hamming set and the Hamming distance among labellings.
Definition 14 (Hamming set ⊖). Let L1 and L2 be two labellings of argumentation framework (Ar, def). We define the Hamming set between these labellings as L1 ⊖ L2 = {A | L1(A) ≠ L2(A)}.
Definition 15 (Hamming distance ∥). Let L1 and L2 be two labellings of argumentation framework (Ar, def). We define the Hamming distance between these labellings as L1 ∥ L2 = |L1 ⊖ L2|.
The Hamming set is the set of arguments on which two labellings differ, whereas the Hamming distance is the number of arguments on which two labellings differ. Since labellings have only three values, we can use the following lemma.
Lemma 1. Let (Ar, def) be an argumentation framework and L1 and L2 two labellings:
a) L1 ⊖ L2 = (in(L1) ∩ out(L2)) ∪ (in(L1) ∩ undec(L2)) ∪ (out(L1) ∩ in(L2)) ∪ (out(L1) ∩ undec(L2)) ∪ (undec(L1) ∩ in(L2)) ∪ (undec(L1) ∩ out(L2))
b) if L1 ⊑ L2 then L1 ⊖ L2 = undec(L1) ∩ dec(L2)
c) if L1 ≈ L2 then L1 ⊖ L2 = (dec(L1) ∩ undec(L2)) ∪ (undec(L1) ∩ dec(L2))
Proof. a) Follows from the fact that in(L), out(L) and undec(L) partition the domain of any labelling L. b) and c) are obtained by eliminating the empty sets in a) and replacing in(L) ∪ out(L) by dec(L).
We are now ready to define an agent's preference given by the Hamming set and the Hamming distance as follows.
Definition 16 (Hamming set based preference ≥i,⊖). Let (Ar, def) be an argumentation framework, Labellings the set of all its labellings and ≥i the preference of agent i. We say that agent i's preference is Hamming set based (written as ≥i,⊖) iff ∀L, L′ ∈ Labellings, L ≥i L′ ⇔ L ⊖ Li ⊆ L′ ⊖ Li, where Li is the agent's most preferred labelling.
Definition 17 (Hamming distance based preference ≥i,∥). Let (Ar, def) be an argumentation framework, Labellings the set of all its labellings and ≥i the preference of agent i. We say that agent i's preference is Hamming distance based (written as ≥i,∥) iff ∀L, L′ ∈ Labellings, L ≥i L′ ⇔ L ∥ Li ≤ L′ ∥ Li, where Li is the agent's most preferred labelling.
The Hamming set based preference yields a partial order, whereas the Hamming distance based preference yields a total preorder. We now prove two lemmas establishing the relations between less or equally committed labellings and Hamming set/distance based preferences over labellings.
Lemma 2. Let L, L′ and Li be three labellings such that L ⊑ L′ ⊑ Li. If Li is the most preferred labelling of agent i and his preference is Hamming set or Hamming distance based, then L′ ≥i,⊖ L and L′ ≥i,∥ L respectively.
Proof. From L ⊑ L′, we have that dec(L) ⊆ dec(L′), which is equivalent to undec(L′) ⊆ undec(L) because undec is the complement of dec. From this it follows that undec(L′) ∩ dec(Li) ⊆ undec(L) ∩ dec(Li). Since L′ ⊑ Li and L ⊑ Li (by assumption and transitivity of ⊑), we can use Lemma 1b to obtain L′ ⊖ Li ⊆ L ⊖ Li. By definition we then have that L′ ≥i,⊖ L and L′ ≥i,∥ L.
Lemma 3. Let L, L′ and Li be three labellings and let L ⊑ Li. If Li is the most preferred labelling of agent i, his preference is Hamming set based and L′ ≥i,⊖ L, then L ⊑ L′.
Proof. L′ ≥i,⊖ L implies L′ ⊖ Li ⊆ L ⊖ Li, which implies that L(A) = Li(A) ⇒ L′(A) = Li(A) for any argument A (i). L ⊑ Li implies L(A) = Li(A) for any A ∈ dec(L) (ii). From (i) and (ii) it follows that L(A) = L′(A) for any A ∈ dec(L). Hence L ⊑ L′.
We now have the machinery to represent individual preferences over the collective outcomes, and we can turn to the first research question of the paper, i.e., whether the sceptical and credulous aggregation operators are Pareto optimal.
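As a small illustration (ours, continuing the sketches above), the Hamming set, the Hamming distance and the two induced preference tests can be written directly from Definitions 14-17:

    def hamming_set(L1, L2):
        """Arguments on which L1 and L2 differ (Definition 14)."""
        return {A for A in L1 if L1[A] != L2[A]}

    def hamming_distance(L1, L2):
        """Number of arguments on which L1 and L2 differ (Definition 15)."""
        return len(hamming_set(L1, L2))

    def geq_set(L, L_prime, L_i):
        """Hamming set based preference: L is at least as good as L_prime
        for an agent whose most preferred labelling is L_i (Definition 16)."""
        return hamming_set(L, L_i) <= hamming_set(L_prime, L_i)   # subset test

    def geq_dist(L, L_prime, L_i):
        """Hamming distance based preference (Definition 17)."""
        return hamming_distance(L, L_i) <= hamming_distance(L_prime, L_i)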
4 Pareto optimality
Pareto optimality is a fundamental social welfare principle which guarantees that it is not possible to improve a social outcome, i.e. it is not possible to make one individual better off without making at least one other individual worse off. In order to address the question of whether the sceptical and the credulous aggregation operators are Pareto optimal, we first need to define when a labelling Pareto dominates another labelling.
Definition 18 (Pareto dominance). Let N = {1, . . . , n} be a group of agents with preferences ≥i, i ∈ N. L′ Pareto dominates L if ∀i ∈ N, L′ ≥i L and ∃j ∈ N, L′ >j L.
A labelling is Pareto optimal if it is not dominated by any other labelling.
Definition 19 (Pareto optimality). Labelling L is Pareto optimal if there is no L′ ≠ L such that ∀i ∈ N, L′ ≥i L and ∃j ∈ N, L′ >j L.
We say that an aggregation operator is Pareto optimal if all its outcomes are Pareto optimal. In particular, the candidates for dominance are the admissible and less or equally committed labellings in the case of the sceptical operator, and the compatible labellings in the case of the credulous operator.
Theorem 2. If individual preferences are Hamming set based, then the sceptical aggregation operator is Pareto optimal when choosing from the admissible labellings that are smaller or equal (w.r.t. ⊑) to each of the participants' individual labellings.
Proof. Let P be a profile of admissible labellings, LSO = soAF(P) and LX some admissible labelling with the property ∀i ∈ N, LX ⊑ Li. From Theorem 1 we know that LSO is the biggest admissible labelling with this property, so LX ⊑ LSO. So ∀i ∈ N, LX ⊑ LSO ⊑ Li. From Lemma 2 we have LSO ≥i LX for every i. So no agent strictly prefers LX and hence there is no labelling that dominates LSO.
Theorem 3. If individual preferences are Hamming distance based, then the sceptical aggregation operator is Pareto optimal when choosing from the admissible labellings that are smaller or equal (w.r.t. ⊑) to each of the individual labellings.
Proof. In the proof of Theorem 2 the Hamming set may be replaced by the Hamming distance, because the Hamming set is only used via Lemma 2, which holds for the Hamming distance as well.
Theorem 4. If individual preferences are Hamming set based, then the credulous aggregation operator is Pareto optimal when choosing from the admissible labellings that are compatible (≈) with each of the participants' labellings.
Proof. Let P be a profile of admissible labellings, LCO = coAF(P) and LCIO = cioAF(P). Assume, by contradiction, that there exists some admissible labelling LX with the property ∀i ∈ N, LX ≈ Li that dominates LCO.
First notice that compatibility ensures that there are no in/out conflicts between LX and LCO. If there is a conflict between the agents' labellings on some argument, then both LX and LCO must label it undec. If there exists an agent whose labelling decides some argument and the other agents' labellings agree or refrain from deciding, then LCO and LX also agree or refrain from deciding. If all agents refrain from deciding on some argument, LCO by definition also refrains, while LX may label it freely. Let us take A ∈ dec(LX). Then there must be an agent whose labelling agrees with LX on A. Otherwise, all agents' labellings would be undecided on this argument and, according to the definition, LCO would not decide it either. But then all agents' labellings would agree with LCO on this argument and disagree with LX, so no agent would strictly prefer LX, which contradicts dominance. So there exists at least one agent whose labelling agrees with LX on A. The other agents' labellings must also agree on A or label it undec, because of the compatibility of LX. Then, by definition, LCIO(A) = LX(A). This holds for any argument A ∈ dec(LX), so we have LX ⊑ LCIO. But LX is admissible and, by the definition of the down-admissible labelling, LCO is the biggest admissible labelling less or equally committed than LCIO. So we have LX ⊑ LCO ⊑ LCIO. LX must be different from LCO to dominate it. Let A be an argument on which these labellings differ. From the above it follows that A ∈ undec(LX) and A ∈ dec(LCO). LCO decides an argument only if there exists an agent who decides that argument. But then this agent agrees with LCO on A and disagrees with LX, so he does not prefer LX. This contradicts dominance. Hence, such a dominating labelling cannot exist.
Observation 1. The credulous aggregation operator is not Pareto optimal when the preferences are Hamming distance based. In Fig. 2 both labellings LCO and LX are compatible with both L1 and L2, but LX is closer when applying the Hamming distance: L1 ⊖ LCO = L2 ⊖ LCO = {A, B, E, F, G}, so the Hamming distance is 5, whereas L1 ⊖ LX = L2 ⊖ LX = {A, B, C, D}, so the Hamming distance is 4.
Figure 2: The credulous aggregation operator is not Pareto optimal under Hamming distance based preferences.
We summarise our results in Table 1. We are now ready to address the second research question of the paper, i.e., whether the credulous and sceptical operators are manipulable.

              | Sceptical operator | Credulous operator
Hamming set   | Yes (Theorem 2)    | Yes (Theorem 4)
Hamming dist. | Yes (Theorem 3)    | No (Observation 1)

Table 1: Pareto optimality of the aggregation operators depending on the type of preference.
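To connect the table with the formal notions, a brute-force dominance test in the style of Definitions 18 and 19 can be sketched as follows (our own illustration; the preference test passed in is one of the geq_set or geq_dist functions sketched at the end of Section 3):

    def dominates(L_prime, L, profile, geq):
        """True iff L_prime Pareto dominates L, where geq(X, Y, L_i) tests
        whether X is at least as good as Y for an agent with sincere labelling L_i."""
        weakly = all(geq(L_prime, L, L_i) for L_i in profile)
        strictly = any(geq(L_prime, L, L_i) and not geq(L, L_prime, L_i)
                       for L_i in profile)
        return weakly and strictly

    def is_pareto_optimal(L, candidates, profile, geq):
        """True iff no candidate labelling Pareto dominates L (Definition 19)."""
        return not any(dominates(L_prime, L, profile, geq)
                       for L_prime in candidates if L_prime != L)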
5 Strategic manipulation
When an agent knows the positions of the other agents, he may have an incentive to submit an insincere position. If an aggregation rule is manipulable, an agent may obtain a social outcome that is closer to his actual preferences by submitting an insincere input. Hence, an important question to address when dealing with aggregation procedures is whether they are strategy-proof (i.e. non-manipulable).
In order to talk about manipulability, we first need to denote a profile in which a labelling has been changed. We recall that by a profile we refer to a set of individual labellings {L1, . . . , Ln}. Profile PL′k/Lk is profile P in which agent k's labelling Lk has been changed to L′k.
Definition 20 (Strategic lie). Let P be a profile and Lk ∈ P the most preferred labelling of an agent with preference ≥k. Let O be any aggregation operator. A labelling L′k such that O(PL′k/Lk) >k O(P) is called a strategic lie.
Definition 21 (Strategy-proof operator). An aggregation operator O is strategy-proof if strategic lies are not possible.
Figure 3: The credulous operator is not strategy-proof.
Observation 2. The credulous aggregation operator is not strategy-proof. In Fig. 3 the agent with labelling L2 can insincerely report L′2 to obtain his preferred labelling. This makes the agent with labelling L1 worse off. The example is valid for both Hamming set and Hamming distance based preferences.
Figure 4: The sceptical operator is not strategy-proof.
Observation 3. The sceptical aggregation operator is not strategy-proof. Consider the three labellings in Fig. 4. Labelling L1 of agent 1, when aggregated with L2, gives labelling L3, which differs from L1 on all three arguments. But when the agent strategically lies and reports labelling L2 instead, the result of the aggregation is the labelling L2 itself, which differs from L1 only on the two arguments A and B. The example is valid for both Hamming set and Hamming distance based preferences.
Surprisingly, however, this lie does not harm the other agent. On the contrary, it improves the social outcome for both agents. In order to study this kind of situation, we now introduce the distinction between malicious and benevolent lies.
Definition 22 (Malicious lie). Let O be some aggregation operator and P a profile. We say that a strategic lie L′k is malicious iff, for some agent j ≠ k, O(P) >j O(PL′k/Lk).
Definition 23 (Benevolent lie). Let O be some aggregation operator and P a profile. We say that a strategic lie L′k is benevolent iff, for every agent i, O(PL′k/Lk) ≥i O(P) and there exists an agent j ≠ k such that O(PL′k/Lk) >j O(P).
Theorem 5. Consider the sceptical aggregation operator and Hamming set based preferences. For any agent, his strategic lies are benevolent.
Proof. Let P be a profile, and L′k a strategic lie of agent k. Denote LSO = soAF(P) and L′SO = soAF(PL′k/Lk). Agent k's preference is L′SO >k LSO (i). We will show that for any agent i ≠ k we have L′SO >i LSO. Since the sceptical aggregation operator produces social outcomes that are less or equally committed than all the individual labellings, we have that L′SO ⊑ Li for all i ≠ k (ii). Similarly, we have LSO ⊑ Lk (iii). From (i) and (iii), by Lemma 3, we have that LSO ⊑ L′SO (iv). From (iv) and (ii) we have LSO ⊑ L′SO ⊑ Li for all i ≠ k. Finally, we can apply Lemma 2 to obtain L′SO ≥i LSO for all i ≠ k (v). We have shown that the lie is not malicious; we now show that it is benevolent. (iii) implies undec(Lk) ⊆ undec(LSO) (vi). (i) and (vi) imply that ∃A ∈ dec(Lk) : A ∈ undec(LSO) ∧ A ∈ dec(L′SO) (vii). From (vii), (ii) and (v) we obtain L′SO >i LSO for all i ≠ k.
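For experimentation, the lie classification of Definitions 20 and 23 can also be checked by brute force; the sketch below is our own illustration and assumes profiles are represented as lists so that agent k's entry can be replaced, with the preference test geq being one of the functions sketched earlier.

    def is_strategic_lie(k, lie, profile, operator, defeats, geq):
        """True iff reporting `lie` gives agent k a strictly better outcome (Def. 20)."""
        honest = operator(profile, defeats)
        manipulated = operator(profile[:k] + [lie] + profile[k+1:], defeats)
        return (geq(manipulated, honest, profile[k]) and
                not geq(honest, manipulated, profile[k]))

    def is_benevolent(k, lie, profile, operator, defeats, geq):
        """True iff the lie weakly improves the outcome for every agent and strictly
        improves it for some agent other than k (Definition 23)."""
        honest = operator(profile, defeats)
        manipulated = operator(profile[:k] + [lie] + profile[k+1:], defeats)
        others = [L_i for j, L_i in enumerate(profile) if j != k]
        weakly = all(geq(manipulated, honest, L_i) for L_i in profile)
        strictly = any(geq(manipulated, honest, L_i) and
                       not geq(honest, manipulated, L_i) for L_i in others)
        return weakly and strictly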
Theorem 6. Consider the sceptical aggregation operator and Hamming distance based preferences. For any agent, his strategic lies are benevolent.
Proof. Let P be a profile, and L′k a strategic lie of agent k whose most preferred labelling is Lk. Denote LSO = soAF(P) and L′SO = soAF(PL′k/Lk). We will show that, if L′SO is strictly preferred to LSO by agent k, then it is also strictly preferred by any other agent. Without loss of generality we can take an agent j, j ≠ k, whose most preferred labelling is Lj. Let us partition the arguments into the following disjoint groups: A = dec(LSO) \ dec(L′SO) (decided arguments that became undecided), B = dec(L′SO) \ dec(LSO) (undecided arguments that became decided), C = dec(LSO) ∩ dec(L′SO) (arguments decided in both labellings), D = undec(LSO) ∩ undec(L′SO) (arguments undecided in both labellings). Labellings LSO and L′SO agree on the arguments in D, which are labelled undec, and on the arguments in C, which are labelled in or out: on the arguments in C there are no in-out conflicts between LSO and L′SO, as the sceptical aggregation operator guarantees social outcomes less or equally committed than Lj. Therefore, only arguments from A and B have an impact on the Hamming distance. Both labellings Lk and Lj agree with LSO on the arguments in A, because LSO decides on those arguments and is less or equally committed than both labellings. L′SO remains undecided on the arguments in A, so both labellings Lk and Lj disagree with L′SO on A. L′SO is less or equally committed than Lj, so, as above, we obtain that on the arguments in B, Lj agrees with L′SO and disagrees with LSO. On the contrary, L′SO does not have to be less or equally committed than Lk, and so, for agent k, some of the arguments from B may increase the distance and some may decrease it. If agent k prefers L′SO to LSO, then the number of arguments decreasing the distance must exceed the number of those increasing it by more than |A|. But for agent j all the arguments from B decrease the distance, as Lj agrees with L′SO on the whole of B. So, if agent k gains by switching to labelling L′SO, agent j gains at least as much.
We summarise our results in Table 2.

              | Sceptical                           | Credulous
Hamming set   | No (Obs. 3), but benevolent (Th. 5) | No, and not benevolent (Obs. 2)
Hamming dist. | No (Obs. 3), but benevolent (Th. 6) | No, and not benevolent (Obs. 2)

Table 2: Strategy-proofness of the operators depending on the type of preference.
6 Conclusion and related work
The study of aggregation problems in abstract argumentation is recent. [Coste-Marquis et al., 2007] present an approach to merge Dung's argumentation frameworks. The argumentation frameworks to be merged may be different, that is, agents may ignore arguments put forward by other agents.
Given an argumentation framework, [Rahwan and Tohmé, 2010] address the question of how to aggregate individual labellings into a collective position. By drawing on a general impossibility theorem from judgment aggregation, they prove an impossibility result and provide some escape solutions. Relevant for the present paper is another work by [Rahwan and Larson, 2008], which explores welfare properties of collective argument evaluation.
In this paper we have analysed the sceptical and credulous aggregation operators from a social welfare perspective. We have studied under which conditions these operators are Pareto optimal and whether they are manipulable. In future work, we plan to consider focal set oriented agents, that is, agents who care only about a subset of the argumentation framework. We also plan to investigate distances that assign higher values to in-out conflicts than to in-undec or out-undec conflicts.
References
[Brams et al., 2007] S. Brams, M. Kilgour, and R. Sanver. A minimax procedure for electing committees. Public Choice, 132(3-4):401–420, 2007.
[Caminada and Pigozzi, 2011] M. Caminada and G. Pigozzi. On judgment aggregation in abstract argumentation. Autonomous Agents and Multi-Agent Systems, 22(1):64–102, 2011.
[Caminada, 2006] M.W.A. Caminada. On the issue of reinstatement in argumentation. In JELIA 2006, pages 111–123. Springer, 2006. LNAI 4160.
[Coste-Marquis et al., 2007] S. Coste-Marquis, C. Devred, S. Konieczny, M.-C. Lagasquie-Schiex, and P. Marquis. On the merging of Dung's argumentation systems. Artificial Intelligence, 171(10-15):730–753, 2007.
[Dung, 1995] P. M. Dung. On the acceptability of arguments and its fundamental role in nonmonotonic reasoning, logic programming and n-person games. Artificial Intelligence, 77:321–357, 1995.
[Konieczny and Perez, 1998] S. Konieczny and R. Pino Perez. On the logic of merging. In Proceedings of KR'98, pages 488–498, 1998.
[Pigozzi, 2006] G. Pigozzi. Belief merging and the discursive dilemma: an argument-based account to paradoxes of judgment aggregation. Synthese, 152(2):285–298, 2006.
[Rahwan and Larson, 2008] I. Rahwan and K. Larson. Welfare properties of argumentation-based semantics. In Proceedings of the 2nd International Workshop on Computational Social Choice (COMSOC), 2008.
[Rahwan and Tohmé, 2010] I. Rahwan and F. Tohmé. Collective argument evaluation as judgment aggregation. In Proceedings of the 9th International Conference on Autonomous Agents and Multiagent Systems (AAMAS), 2010.