A Computational Logic Approach to
The Belief-Bias Effect in Human Reasoning
Emmanuelle Dietz
International Center for Computational Logic
TU Dresden, Germany
Luı́s Moniz Pereira
Centro de Inteligência Artificial (CENTRIA)
Universidade Nova de Lisboa, Portugal
The Belief-Bias Effect
The two minds hypothesis distinguishes between:
- the reflective mind, and
- the intuitive mind.
This hypothesis is supported by the belief-bias effect (Evans [2012]):
- The belief-bias effect is the conflict between the reflective and intuitive minds
  when reasoning about problems involving real-world beliefs. It is the tendency to
  accept or reject arguments based on one's own beliefs or prior knowledge rather than
  on the reasoning process.
How to identify the belief-bias effect?
- Carry out psychological studies on deductive reasoning which demonstrate
  possibly conflicting processes at the logical and the psychological level.
The Syllogisms Task
Evans et al. [1983] carried out an experiment where participants were presented with
different syllogisms and asked to decide whether they were logically valid.
Type    Status                     Example                                                     Accepted
Sdogs   valid and believable       No police dogs are vicious.                                 89%
                                   Some highly trained dogs are vicious.
                                   Therefore, some highly trained dogs are not police dogs.
Svit    valid and unbelievable     No nutritional things are inexpensive.                      56%
                                   Some vitamin tablets are inexpensive.
                                   Therefore, some vitamin tablets are not nutritional.
Sadd    invalid and believable     No addictive things are inexpensive.                        71%
                                   Some cigarettes are inexpensive.
                                   Therefore, some addictive things are not cigarettes.
Srich   invalid and unbelievable   No millionaires are hard workers.                           10%
                                   Some rich people are hard workers.
                                   Therefore, some millionaires are not rich people.

(The last column gives the percentage of participants in Evans et al. [1983] who
accepted the conclusion.)
Using their reflective minds, people read the instructions and understand that they are
required to reason logically from the premises to the conclusion. However, when they
look at the conclusion, their intuitive minds deliver a strong tendency to say yes or no
depending on whether it is believable.
Adequate Framework for Human Reasoning
How to adequately formalize human reasoning in computational logic?
Stenning and van Lambalgen (2008) propose a two-step process:
human reasoning should be modeled by
1. reasoning towards an appropriate representation,
   → conceptual adequacy
2. reasoning with respect to this representation.
   → inferential adequacy
The adequacy of a computational logic approach that aims at representing human
reasoning should be evaluated based on how humans actually reason.
State of the Art
1. Stenning and van Lambalgen (2008) formalize Byrne's (1989) Suppression Task,
   where people suppress valid inferences when additional arguments become available.
2. Kowalski (2011) models Wason's (1968) Selection Task, showing that people have
   a matching bias, the tendency to select explicitly named values in conditionals.
3. Hölldobler and Kencana Ramli (2009a) found some technical mistakes made by
   Stenning and van Lambalgen and propose to model human reasoning by
   - logic programs
   - under weak completion semantics
   - based on the three-valued Lukasiewicz (1920) logic.
This approach seems to adequately model the Suppression and the Selection Task.
Can we adequately model the syllogisms task including the
belief-bias effect under weak completion semantics?
Weak Completion Semantics
First-Order Language
A (first-order) logic program P is a finite set of clauses of the form
  p(X ) ← a1 (X ) ∧ · · · ∧ an (X ) ∧ ¬b1 (X ) ∧ · · · ∧ ¬bm (X )
- X is a variable and p, a1 , . . . , an and b1 , . . . , bm are predicate symbols.
- p(X ) is an atom and the head of the clause.
- a1 (X ) ∧ · · · ∧ an (X ) ∧ ¬b1 (X ) ∧ · · · ∧ ¬bm (X ) is a formula and the body of the clause.
- p(X ) ← > and p(X ) ← ⊥ denote positive and negative facts, respectively.
- Variables are written with upper-case letters and constants with lower-case letters.
- A ground formula is a formula that does not contain variables.
- A ground instance results from substituting all occurrences of
  each variable by some constant of P.
- The ground program g P comprises all ground instances of the clauses of P.
- The set of all atoms occurring in g P is denoted by atoms(P).
- An atom is undefined in g P if it is not the head of any clause in g P.
  The corresponding set of these atoms is denoted by undef(P).
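As a purely illustrative reading of these definitions, the following Python sketch fixes the representation used by the remaining sketches in this document: a clause is a pair (head, body), a body is a list of literal strings, and grounding substitutes the single variable X by constants. All names are assumptions of the sketch, not part of the formal development.

```python
# Illustrative representation (an assumption of these sketches, not the paper's):
# a clause is a pair (head, body); a body is a list of literal strings such as
# "vicious(X)" or "not ab1a(X)"; the facts <- top / <- bottom are written as the
# one-element bodies ["true"] / ["false"].

def ground_program(clauses, constants):
    """All ground instances: substitute the variable X by each constant of P.

    This naive string substitution assumes X is the only variable and that X
    does not occur inside predicate or constant names, which holds for the
    clause schemata used on these slides.
    """
    grounded = []
    for head, body in clauses:
        for c in constants:
            grounded.append((head.replace("X", c),
                             [lit.replace("X", c) for lit in body]))
    return grounded
```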
The (Weak) Completion of a Logic Program
Consider the following transformation for a given P:
1. Replace all clauses in g P with the same head (ground atom),
   A ← body1 , A ← body2 , . . . , by the single expression A ← body1 ∨ body2 ∨ . . . .
2. If A ∈ undef(g P), then add A ← ⊥.
3. Replace all occurrences of ← by ↔.
The resulting set of equivalences is called the completion of P (Clark [1978]).
If Step 2 is omitted, then the resulting set is called
the weak completion of P (wc P) (Hölldobler and Kencana Ramli [2009a,b]).
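As an illustration of the transformation (a minimal sketch under the list-of-clauses representation introduced above, not the authors' implementation), the weak completion of a ground program can be computed by grouping clauses by head; each resulting entry is then read as an equivalence:

```python
def weak_completion(ground_clauses):
    """Weak completion of a ground program, as a dict: head -> list of bodies.

    Step 1: all clauses with the same head are collected into one disjunction.
    Step 2 of Clark's completion (adding A <-> false for undefined atoms) is
    deliberately omitted; this is what makes the completion "weak".
    Step 3: each entry is read as the equivalence  head <-> body1 v body2 v ...
    """
    completed = {}
    for head, body in ground_clauses:
        completed.setdefault(head, []).append(body)
    return completed
```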
Three-Valued Lukasiewicz Logic
¬ :   ¬> = ⊥,   ¬U = U,   ¬⊥ = >

∧  | >  U  ⊥
>  | >  U  ⊥
U  | U  U  ⊥
⊥  | ⊥  ⊥  ⊥

∨  | >  U  ⊥
>  | >  >  >
U  | >  U  U
⊥  | >  U  ⊥

←L | >  U  ⊥
>  | >  >  >
U  | U  >  >
⊥  | ⊥  U  >

↔L | >  U  ⊥
>  | >  U  ⊥
U  | U  >  U
⊥  | ⊥  U  >

Table: >, ⊥, and U denote true, false, and unknown, respectively. In the binary
tables the first argument indexes the row and the second the column; in particular,
U ←L U = > and U ↔L U = >.
An interpretation I of P is a mapping of the Herbrand base BP to {>, ⊥, U} and is
represented by a unique pair hI > , I ⊥ i, where
  I > = {x ∈ BP | x is mapped to >} and I ⊥ = {x ∈ BP | x is mapped to ⊥}.
- For every I it holds that I > ∩ I ⊥ = ∅.
- A model of a formula F is an interpretation I such that F is true under I .
- A model of P is an interpretation that is a model of each clause in P.
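The truth tables above can be expressed compactly by the usual numerical encoding of Lukasiewicz logic. The following Python sketch is only an illustration; the names TRUE, UNKNOWN, and FALSE are assumptions of the sketch.

```python
# Numerical encoding of the three truth values: false = 0.0, unknown = 0.5, true = 1.0.
TRUE, UNKNOWN, FALSE = 1.0, 0.5, 0.0

def neg(a):      return 1.0 - a                  # negation
def conj(a, b):  return min(a, b)                # conjunction
def disj(a, b):  return max(a, b)                # disjunction
def impl(a, b):  return min(1.0, 1.0 - b + a)    # a <-L b, i.e. Lukasiewicz implication b ->L a
def equiv(a, b): return 1.0 - abs(a - b)         # a <->L b

# The characteristic Lukasiewicz cases: U <-L U and U <->L U are both true.
assert impl(UNKNOWN, UNKNOWN) == TRUE
assert equiv(UNKNOWN, UNKNOWN) == TRUE
assert conj(TRUE, UNKNOWN) == UNKNOWN and disj(FALSE, UNKNOWN) == UNKNOWN
```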
Reasoning in an Appropriate Logical Form
Positive Encoding for Negative Conclusions
Premise 1 of our first syllogism, No police dogs are vicious, is equivalent to
There does not exist an X such that police dogs(X) and vicious(X).
We can write it as
For all X, if police dogs(X) and not abnormal(X), then not vicious(X).
By default, nothing is abnormal.
We use abnormality predicates to implement conditionals by licenses for implications
(Stenning and van Lambalgen [2008]).
Problem: we only consider logic programs whose clauses have positive heads.
For every negative conclusion ¬p(X ) we introduce a new atom p 0 (X ) together with
the clause p(X ) ← ¬p 0 (X ).¹
Premise 1 is then encoded as:
  police dogs 0 (X ) ← vicious(X ) ∧ ¬ab1a (X ),
  police dogs(X ) ← ¬police dogs 0 (X ),
  ab1a (X ) ← ⊥.
¹ More generally, we need an appropriate dual-program-like transformation when there are several positive rules.
Commonsense Implications within the four Syllogisms
Sdogs : No police dogs are vicious. Some highly trained dogs are vicious.
        Therefore, some highly trained dogs are not police dogs.
        Commonsense implication: we generally assume that police dogs are highly trained.

Svit :  No nutritional things are inexpensive. Some vitamin tablets are inexpensive.
        Therefore, some vitamin tablets are not nutritional.
        Commonsense implication: the purpose of vitamin tablets is to aid nutrition.

Sadd :  No addictive things are inexpensive. Some cigarettes are inexpensive.
        Therefore, some addictive things are not cigarettes.
        Commonsense implication: we know that cigarettes are addictive.

Srich : No millionaires are hard workers. Some rich people are hard workers.
        Therefore, some millionaires are not rich people.
        Commonsense implication: by definition, millionaires are rich.

The second premise in each case contains some background knowledge
which might provide the motivation for whether to validate the syllogism.
Modeling Syllogism Sdogs : valid argument, believable conclusion
Premise 1:  No police dogs are vicious.
Premise 2:  Some highly trained dogs are vicious.
Conclusion: Therefore, some highly trained dogs are not police dogs.
Premise 2 states facts about, let us say, some a. The program for Sdogs is:
Pdogs = {police dogs 0 (X ) ← vicious(X ) ∧ ¬ab1a (X ),
police dogs(X ) ← ¬police dogs 0 (X ), ab1a (X ) ← ⊥}
∪ {highly trained(a) ← >, vicious(a) ← >}
∪ {highly trained(X ) ← police dogs(X ) ∧ ¬ab1b (X ), ab1b (X ) ← ⊥}
The corresponding weak completion is:
wc g Pdogs = {police dogs 0 (a) ↔ vicious(a) ∧ ¬ab1a (a),
police dogs(a) ↔ ¬police dogs 0 (a), ab1a (a) ↔ ⊥,
highly trained(a) ↔ > ∨ (police dogs(a) ∧ ¬ab1b (a)),
vicious(a) ↔ >, ab1b (a) ↔ ⊥}
How do we compute the intended model?
Reasoning with Respect to this Representation
Reasoning with Respect to Least Models
Hölldobler and Kencana Ramli propose to compute the least model of the weak
completion of P (lmL wc P) which is identical to the least fixed point of ΦP , an
operator defined by Stenning and van Lambalgen [2008].
Let I be an interpretation and define ΦP (I ) = hJ > , J ⊥ i, where
  J > = {A | there exists A ← body ∈ P with I (body ) = >},
  J ⊥ = {A | there exists A ← body ∈ P and
             for all A ← body ∈ P we find I (body ) = ⊥}.
Hölldobler and Kencana Ramli showed that the model intersection property holds for
weakly completed programs. This guarantees the existence of least models for every P.
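As an illustration, the following Python sketch (under the representation of the earlier sketches; not the authors' implementation) computes ΦP and iterates it from the empty interpretation until the least fixed point is reached:

```python
# An interpretation is a pair (true_atoms, false_atoms) of sets of ground atoms;
# every atom in neither set is unknown.

def eval_literal(lit, true_atoms, false_atoms):
    """Three-valued value of one literal: 'true', 'false' or 'unknown'."""
    if lit in ("true", "false"):
        return lit
    negated = lit.startswith("not ")
    atom = lit[4:] if negated else lit
    value = "true" if atom in true_atoms else "false" if atom in false_atoms else "unknown"
    if negated:
        value = {"true": "false", "false": "true", "unknown": "unknown"}[value]
    return value

def eval_body(body, true_atoms, false_atoms):
    """Conjunction of the body literals: false dominates, then unknown."""
    values = [eval_literal(lit, true_atoms, false_atoms) for lit in body]
    return "false" if "false" in values else "unknown" if "unknown" in values else "true"

def phi(program, interpretation):
    """One application of the Stenning/van Lambalgen operator Phi_P."""
    true_atoms, false_atoms = interpretation
    heads = {head for head, _ in program}
    j_true = {h for h, body in program
              if eval_body(body, true_atoms, false_atoms) == "true"}
    j_false = {h for h in heads
               if all(eval_body(b, true_atoms, false_atoms) == "false"
                      for hh, b in program if hh == h)}
    return j_true, j_false

def least_model(program):
    """Iterate Phi_P from the empty interpretation; for the programs on these
    slides the iteration reaches a fixed point, which is lmL wc P."""
    interpretation = (set(), set())
    while True:
        nxt = phi(program, interpretation)
        if nxt == interpretation:
            return interpretation
        interpretation = nxt
```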
Computing the Least Model for Pdogs
wc g Pdogs
=
{police dogs 0 (a) ↔ vicious(a) ∧ ¬ab1a (a),
police dogs(a) ↔ ¬police dogs 0 (a), ab1a (a) ↔ ⊥,
highly trained(a) ↔ > ∨ (police dogs(a) ∧ ¬ab1b (a)),
vicious(a) ↔ >, ab1b (a) ↔ ⊥}
Let’s start with interpretation I0 = h∅, ∅i:
I1 = Φg Pdogs (I0 )=h{vicious(a), highly trained(a)}, {ab1a (a), ab1b (a)}i
I2 = Φg Pdogs (I1 )=h{vicious(a), highly trained(a), police dogs 0 (a)}, {ab1a (a), ab1b (a)}i
I3 = Φg Pdogs (I2 )=h{vicious(a), highly trained(a), police dogs 0 (a)},
                    {ab1a (a), ab1b (a), police dogs(a)}i
I4 = Φg Pdogs (I3 )= I3 , so the least fixed point is reached and lmL wc g Pdogs = I3 .
This model confirms Sdogs , because a is a highly trained dog and not a police dog.
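For illustration, running the least_model sketch from above on the ground instance of Pdogs (in the assumed list-of-clauses encoding) reproduces exactly this fixed point:

```python
g_p_dogs = [
    ("police_dogs_prime(a)", ["vicious(a)", "not ab1a(a)"]),
    ("police_dogs(a)", ["not police_dogs_prime(a)"]),
    ("ab1a(a)", ["false"]),
    ("highly_trained(a)", ["true"]),
    ("highly_trained(a)", ["police_dogs(a)", "not ab1b(a)"]),
    ("vicious(a)", ["true"]),
    ("ab1b(a)", ["false"]),
]

true_atoms, false_atoms = least_model(g_p_dogs)
# true atoms:  vicious(a), highly_trained(a), police_dogs_prime(a)
# false atoms: ab1a(a), ab1b(a), police_dogs(a)
print(sorted(true_atoms), sorted(false_atoms))
```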
Modeling Syllogism Svit : valid argument, unbelievable conclusion
Premise 1:  No nutritional things are inexpensive.
Premise 2:  Some vitamin tablets are inexpensive.
Conclusion: Therefore, some vitamin tablets are not nutritional.
Premise 2 states facts about, let us say, some a. The program for Svit is:
Pvit = {nutritional 0 (X ) ← inexp(X ) ∧ ¬ab2a (X ),
nutritional(X ) ← ¬nutritional 0 (X ), ab2a (X ) ← ⊥}
∪ {vitamin(a) ← >, inexp(a) ← >} ∪ {ab2a (X ) ← vitamin(X ),
inexp(X ) ← vitamin(X ) ∧ ¬ab2b (X ), ab2b (X ) ← ⊥}
The least model of the weak completion of g Pvit is
lmL wc Pvit = h{vitamin(a), inexp(a), nutritional(a), ab2a (a)}, {nutritional 0 (a), ab2b (a)}i
This model does not validate Svit and this confirms how the participants responded.
Modeling Syllogism Sadd : invalid argument, believable conclusion (1)
Premise 1:  No addictive things are inexpensive.
Premise 2:  Some cigarettes are inexpensive.
Conclusion: Therefore, some addictive things are not cigarettes.
Premise 2 states facts about, let us say, some a. The program for Sadd is:
Padd = {addictive 0 (X ) ← inexp(X ) ∧ ¬ab3a (X ),
addictive(X ) ← ¬addictive 0 (X ), ab3a (X ) ← ⊥}
∪ {cigarettes(a) ← >, inexp(a) ← >} ∪ {ab3a (X ) ← cigarettes(X ),
inexp(X ) ← cigarettes(X ) ∧ ¬ab3b (X ), ab3b (X ) ← ⊥}
The least model of the weak completion of g Padd is:
lmL wc Padd = h{cigarettes(a), inexp(a), addictive(a), ab3a (a)}, {addictive 0 (a), ab3b (a)}i
The conclusion of Sadd states something that obviously cannot be about a.
Abductive Reasoning
Definition (1)
Let hP, AP , |=^wc_lmL i be an abductive framework, where:
- P is a knowledge base,
- AP is a set of abducibles consisting of the
  (positive and negative) facts for each undefined atom in P,
- |=^wc_lmL is a logical consequence relation, where
  P |=^wc_lmL F if and only if lmL wc P(F ) = > for a formula F .
Let O be an observation and E be an explanation, where O and E are consistent:
- O is explained by E given AP and P iff P ∪ E |=^wc_lmL O,
  where P ∪ E is consistent, P ⊭^wc_lmL O, and E ⊆ AP .
We distinguish between two forms:
- F follows skeptically from P and O iff O can be explained, and for all
  minimal explanations E it holds that P ∪ E |=^wc_lmL F ,
- F follows credulously from P and O iff there exists a minimal explanation E
  such that P ∪ E |=^wc_lmL F .
In the following we apply abduction with respect to credulous reasoning: to validate
Sadd we need one model, which contains something addictive that is not a cigarette.
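A minimal computational reading of this abductive machinery, as a Python sketch that builds on the least_model helper defined earlier (the representation, the restriction to a single observed atom, and all function names are assumptions of the sketch, not the paper's definitions):

```python
from itertools import combinations

def abducible_facts(program, atoms):
    """The abducibles: a positive and a negative fact for each undefined atom."""
    heads = {head for head, _ in program}
    undefined = sorted(a for a in atoms if a not in heads)
    return [(a, ["true"]) for a in undefined] + [(a, ["false"]) for a in undefined]

def explains(program, explanation, observed_atom):
    """P union E explains the observation iff the observed atom is true in lmL wc."""
    true_atoms, _ = least_model(list(program) + list(explanation))
    return observed_atom in true_atoms

def minimal_explanations(program, atoms, observed_atom):
    """All subset-minimal consistent explanations of the observed atom."""
    candidates = abducible_facts(program, atoms)
    found = []
    for size in range(1, len(candidates) + 1):
        for subset in combinations(candidates, size):
            if len(dict(subset)) < size:                        # A <- true and A <- false
                continue
            if any(all(fact in subset for fact in e) for e in found):
                continue                                        # superset of a smaller one
            if explains(program, subset, observed_atom):
                found.append(subset)
    return found

def follows_credulously(program, atoms, observed_atom, formula_atom):
    """Credulous reasoning: some minimal explanation makes the formula atom true."""
    return any(formula_atom in least_model(list(program) + list(e))[0]
               for e in minimal_explanations(program, atoms, observed_atom))
```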
Modeling Syllogism Sadd : invalid argument, believable conclusion (2)
We observe for something else, e.g. for b, that it is addictive:
Oadd = {addictive(b) ← >}.
The set of abducibles for g (Padd ∪ Oadd ) is:
Ag (Padd ∪Oadd ) = {cigarettes(b) ← >, cigarettes(b) ← ⊥}
We have the following two minimal explanations for Oadd :
Ecig = {cigarettes(b) ← >} and E¬cig = {cigarettes(b) ← ⊥}
The least models of the weak completion of Padd together with each explanation are:
lmL wc g (Padd ∪ E¬cig ) = h{addictive(b)}, {addictive 0 (b), cigarettes(b),
ab3a (b), inexpensive(b), ab3b (b)}i
lmL wc g (Padd ∪ Ecig ) = h{addictive(b), ab3a (b), cigarettes(b),
inexpensive(b)}, {addictive 0 (b), ab3b (b)}i
Credulously, we conclude there exists some b that is addictive but not a cigarette.
Modeling Syllogism Srich : invalid argument, unbelievable conclusion (1)
Premise 1:  No millionaires are hard workers.
Premise 2:  Some rich people are hard workers.
Conclusion: Therefore, some millionaires are not rich people.
The program for Srich is:
Prich = {millionaire 0 (X ) ← hard worker (X ) ∧ ¬ab4a (X ),
millionaire(X ) ← ¬millionaire 0 (X ), ab4a (X ) ← ⊥}
∪ {rich(a) ← >, hard worker (a) ← >}
∪ {rich(X ) ← millionaire(X ) ∧ ¬ab4b (X ), ab4b (X ) ← ⊥}
Again, what is talked about in Premise 2 cannot be the same as in the Conclusion.
Modeling Syllogism Srich : invalid argument, unbelievable conclusion (2)
We observe for something else, e.g. for b, that it is a millionaire:
Omil = {millionaire(b) ← >}.
For g (Prich ∪ Omil ) we have the following set of abducibles:
Ag (Prich ∪Omil ) = {hard worker (b) ← >, hard worker (b) ← ⊥}
The least model of the weak completion of Prich together with
E¬hard worker = {hard worker (b) ← ⊥} is:
lmL wc g (Prich ∪ E¬hard worker ) = h{millionaire(b), rich(b)},
                                    {ab4a (b), ab4b (b), millionaire 0 (b), hard worker (b)}i
This model does not validate Srich and this confirms how the participants responded.
Different kinds of Abnormalities: the Suppression and the Cigarette Task
If she has an essay to finish, then she goes to the library.
If the library is open, then she goes to the library.
She has an essay to finish.
Pe+Add = {l ← e ∧ ¬ab1 , ab1 ← ¬o, l ← o ∧ ¬ab2 , ab2 ← ¬e, e ← >}
lmL wc Pe+Add = h{e}, {ab2 }i
We cannot conclude whether she goes to the library.
No addictive things are inexpensive.
Some cigarettes are inexpensive.
There are some addictive things. (Oadd = {addictive(b) ← >})
We have two explanations for Oadd which have equal priority:
Ecig = {cigarettes(b) ← >}
E¬cig = {cigarettes(b) ← ⊥}
However, instead of assuming that b is a cigarette, we would first prefer to conclude
that b is not inexpensive, unless we have more information about b.
Abductive Reasoning with Contextual Side-Effects
Contextual Side Effects
We define two forms of contextual side effects:
Definition (2)
Given a background knowledge P, two observations O1 and O2 , and two explanations
E1 and E2 :
O2 is a necessary contextual side-effect of explained O1 iff for every explanation E1
such that O1 is explained by E1 given P, there exists E2 such that
O2 is explained by E2 given P, and E2 contains E1 .
O2 is a strict necessary contextual side-effect of explained O1 iff E2 = E1 .
Definition (3)
Given a background knowledge P, two observations O1 and O2 , and two explanations
E1 and E2 :
O2 is a possible contextual side-effect of explained O1 iff there exist explanations E1
and E2 such that O1 is explained by E1 given P, and O2 is explained
by E2 given P, and E2 contains E1 .
O2 is a strict possible contextual side-effect of explained O1 iff E2 = E1 .
Definitions (2) and (3) correspond to skeptical and credulous reasoning, respectively.
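Read computationally, Definitions (2) and (3) quantify over explanations; the following Python sketch (reusing the helpers from the abduction sketch, taking the minimal explanations of O1 as a simplification, and with all names assumed for illustration) checks the two notions:

```python
from itertools import combinations

def extends_to_explanation(program, atoms, base, observed_atom):
    """Is there an explanation of the observation that contains 'base'?"""
    others = [f for f in abducible_facts(program, atoms) if f not in base]
    for size in range(0, len(others) + 1):
        for extra in combinations(others, size):
            e2 = list(base) + list(extra)
            if len(dict(e2)) < len(e2):          # contains both A <- true and A <- false
                continue
            if explains(program, e2, observed_atom):
                return True
    return False

def necessary_side_effect(program, atoms, o1, o2):
    """Definition (2): every explanation of o1 extends to some explanation of o2."""
    e1s = minimal_explanations(program, atoms, o1)
    return bool(e1s) and all(extends_to_explanation(program, atoms, e1, o2) for e1 in e1s)

def possible_side_effect(program, atoms, o1, o2):
    """Definition (3): some explanation of o1 extends to some explanation of o2."""
    return any(extends_to_explanation(program, atoms, e1, o2)
               for e1 in minimal_explanations(program, atoms, o1))
```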
Contextual Side-Effects: Sadd
No addictive things are inexpensive.
Some cigarettes are inexpensive.
b is addictive. (Oadd = {addictive(b) ← >})
Assume we additionally observe Oinexp = {inexp(b) ← >}. We can only explain it by:
Ecig = {cigarettes(b) ← >}
Oadd is a strict necessary contextual side-effect of explained Oinexp , given Padd . But
Oinexp is a strict possible contextual side-effect of explained Oadd given Padd as well.
We didn’t specify which is the side-effect resulting from the primary explanation!
Inspection Points
Pereira and Pinto [2011] distinguish between occurrences of abducibles which allow
them to be produced and those which may only be consumed, if produced elsewhere.²
First, we introduce a reserved (meta-)predicate for every abducible ground atom:
UP = {ud(A) ← > | A ∈ undef(P)} ∪ {ud(A) ← ⊥ | A ∉ undef(P)}
For every (abducible) atom A we introduce the two following clauses:
inspect(A) ← A ∧ ¬ud(A)
inspect¬ (A) ← ¬A ∧ ¬ud(A)
The program containing all inspect and inspect¬ clauses for every such atom in P is:
IP = {inspect(A) ← A ∧ ¬ud(A), inspect¬ (A) ← ¬A ∧ ¬ud(A) | A ∈ atoms(P)}
² More generally, certain occurrences may only be used in the abductive context of some other ones.
Modeling Syllogism Sadd with Inspect
Let us consider Sadd again: we prefer to conclude that cigarettes(X ) is false for any
instance of X if not stated otherwise. We modify our original program accordingly:
Padd^insp = UPadd ∪ IPadd ∪ (Padd \ {ab3a (X ) ← cigarettes(X )})
            ∪ {ab3a (X ) ← inspect(cigarettes(X ))}
If we only observe Oadd , we cannot abduce Ecig but only E¬cig , and b is not inexpensive!
However, if we additionally observe Oinexp = {inexpensive(b) ← >}, Ecig can be
abduced via the following clause in Padd^insp :
{. . . , inexpensive(b) ← cigarettes(b) ∧ ¬ab3b (b), . . . }
Now, b is addictive, inexpensive and a cigarette!
Oadd is a strict necessary contextual side-effect of explained Oinexp given Padd^insp .
Conclusion and Future Work
Weak completion semantics
We have examined how syllogisms and the belief-bias effect
can be modeled by using abnormality predicates and abduction.
Inspection Points
We propose to introduce inspect predicates to deal with
various kinds of abnormalities in human reasoning which
allow different treatments of abducibles within conditionals.
Future Work
We intend to investigate other applications within human
reasoning studies that might confirm or refine our definitions.
We are currently working on extensions for:
1. Contextual Relevant Consequences
2. Contestable Contextual Side-effects
3. Contextual Counterfactual Side-effects
Thank you very much for your attention!
References
R. M. J. Byrne. Suppressing valid inferences with conditionals. Cognition, 31:61–83, 1989.
Keith L. Clark. Negation as failure. In H. Gallaire and J. Minker, editors, Logic and Data Bases, volume 1, pages 293–322. Plenum Press,
New York, NY, 1978.
J.St.B.T. Evans. Biases in deductive reasoning. In R.F. Pohl, editor, Cognitive Illusions: A Handbook on Fallacies and Biases in Thinking,
Judgement and Memory. Psychology Press, 2012.
J.St.B.T. Evans, Julie L. Barston, and Paul Pollard. On the conflict between logic and belief in syllogistic reasoning. Memory &
Cognition, 11(3):295–306, 1983. ISSN 0090-502X.
Steffen Hölldobler and Carroline Dewi Kencana Ramli. Logic programs under three-valued Lukasiewicz semantics. In Patricia M. Hill and
David Scott Warren, editors, Logic Programming, 25th International Conference, ICLP 2009, volume 5649 of Lecture Notes in
Computer Science, pages 464–478, Heidelberg, 2009a. Springer.
Steffen Hölldobler and Carroline Dewi Kencana Ramli. Logics and networks for human reasoning. In Cesare Alippi, Marios M. Polycarpou,
Christos G. Panayiotou, and Georgios Ellinas, editors, International Conference on Artificial Neural Networks, ICANN 2009, Part II,
volume 5769 of Lecture Notes in Computer Science, pages 85–94, Heidelberg, 2009b. Springer.
Robert Kowalski. Computational Logic and Human Thinking: How to be Artificially Intelligent. Cambridge University Press, Cambridge,
2011.
Jan Lukasiewicz. O logice trójwartościowej. Ruch Filozoficzny, 5:169–171, 1920. English translation: On three-valued logic. In:
Lukasiewicz J. and Borkowski L. (ed.). (1990). Selected Works, Amsterdam: North Holland, pp. 87–88.
Luı́s Moniz Pereira and Alexandre Miguel Pinto. Inspecting side-effects of abduction in logic programs. In M. Balduccini and Tran Cao
Son, editors, Logic Programming, Knowledge Representation, and Nonmonotonic Reasoning: Essays in honour of Michael Gelfond,
volume 6565 of LNAI, pages 148–163. Springer, 2011.
Keith Stenning and Michiel van Lambalgen. Human Reasoning and Cognitive Science. A Bradford Book. MIT Press, Cambridge, MA,
2008. ISBN 9780262195836.
P. Wason. Reasoning about a rule. Quarterly Journal of Experimental Psychology, 20(3):273–281, 1968.
Contextual Side-effects: Extended Syllogism Sill
Let us extend Sadd by the following conditional:
If something is sold in the streets and is not a cigarette, then it is illegal.
Accordingly, the new program is:
Pill = Padd ∪ {illegal(X ) ← streets(X ) ∧ ¬cig (X )}. The minimal explanation for
Oill = {illegal(b) ← >} is Estreets,¬cig = {streets(b) ← >, cig (b) ← ⊥}.
Oill is a possible contextual side-effect of explained Oadd given Pill .
Let us simplify Pill as follows:
Pill^sim = Padd ∪ {illegal(X ) ← ¬cig (X )}. The explanation for Oill is the same as for Oadd !
illegal(b) is a strict possible contextual side-effect of explained Oadd , and vice versa!
We need to specify which observation can be abduced and which is
a contextual side effect!
Modeling Extended Syllogism Sill with Inspect
We modify and extend our program accordingly:
Pill^insp = UPill ∪ IPill ∪ (Pill \ {illegal(X ) ← streets(X ) ∧ ¬cig (X )})
            ∪ {illegal(X ) ← streets(X ) ∧ inspect¬ (cig (X ))}
where, as previously defined for IPill :
inspect¬ (cig (X )) ← ¬cig (X ) ∧ ¬ud(cig (X ))
We can only conclude that something is illegal when we already have abduced
somewhere else that it is not a cigarette. Assume Oadd explained by E¬cig :
lmL wc g (Pill^insp ∪ E¬cig ∪ Estreets ) = h{add(b), streets(b), illegal(b), inspect¬ (cig (b))},
{add 0 (b), cig (b), ab1 (b), inex(b), ab2 (b)}i
Note that without E¬cig for Oadd we could not have concluded illegal(b).
Contextual Relevant Consequences
Definition (4)
Given a background knowledge P, two observations O1 and O2 , and two explanations
E1 and E2 :
O2 is a necessary contextual relevant consequence of explained O1 iff for every E2
such that O2 is explained by E2 given P, there exists an E1 consistent
with E2 such that O1 is explained by E1 given P, and the
intersection of E1 and E2 is nonempty.
Let us ensure that something is dangerous before we conclude that it is addictive:
Pdan = Pill \ ({add(X ) ← ¬add 0 (X )}) ∪ {add(X ) ← ¬add 0 (X ) ∧ dangerous(X )}
Given Oadd , we have two minimal explanations:
Ecig,dan  = {cig (b) ← >, dangerous(b) ← >},
E¬cig,dan = {cig (b) ← ⊥, dangerous(b) ← >}
Oill = {illegal(b) ← >} is a contextual relevant consequence of explained Oadd .
Note again that the original observation and the side-effect are interchangeable.
Modeling Extended Syllogism Pdan with Inspect
Again, we intend that Oill is a contextual relevant consequence of explained Oadd :
Pdan^insp = UPdan ∪ IPdan ∪ (Pdan \ {illegal(X ) ← streets(X ) ∧ ¬cig (X )})
            ∪ {illegal(X ) ← streets(X ) ∧ inspect¬ (cig (X ))}
The rule
inspect¬ (cig (X )) ← ¬cig (X ) ∧ ¬ud(cig (X ))
stays the same, and it is easy to see that the outcome is similar to what was just demonstrated:
illegal(b) can only be concluded when E¬cig is abduced elsewhere, e.g. for Oadd .
Contestable Contextual Side-effects - Abductive Explanation Contesting
Definition (5)
Given a background knowledge P, two observations O1 and O2 , and two explanations
E1 and E2 :
O2 is a necessarily contested contextual side-effect of explained O1 iff for every E1
such that O1 is explained by E1 given P, there exists E2 such that
¬O2 is explained by E2 given P and E1 contains E2 .
O2 is a possibly contested contextual side-effect of explained O1 iff there exists an E1
such that O1 is explained by E1 given P, there exists E2 such that
¬O2 is explained by E2 given P and E1 contains E2 .
Let us define the program Pleg , dual to Pill^sim :
Pleg = Padd ∪ {leg (X ) ← cig (X )}
We originally observe Oadd possibly explained by E¬cig :
lmL wc g (Pleg ∪ E¬cig ) = h{add(b)}, {add 0 (b), cig (b), inex(b), leg (b), . . . }i
¬leg (b) is a possibly contested contextual side-effect of explained Oadd , and vice versa.
Extended Syllogism Pleg with Inspect
Pleg^insp = UPleg ∪ IPleg ∪ (Pleg \ {leg (X ) ← cig (X )}) ∪ {leg (X ) ← inspect(cig (X ))}
With the modified clause {leg (X ) ← inspect(cig (X ))}, we only conclude that
something is legal if cig (X ) is used to explain something else.
Given E¬cig , ¬leg (b) is a possibly contested contextual side-effect of explained Oadd .
Contestable Contextual Side-effects - Abductive Rebuttal (1)
Another variation of contestable side-effects is abductive rebuttal:
the side-effect directly contradicts an observation, that is O2 ≡ ¬O1 in Definition 5.
Let us extend Padd by a conditional obviously contradicting the one in Padd :
if something is a cigarette, then it is not inexpensive.
Pinex = Padd ∪ {inex 0 (X ) ← cig (X ), inex(X ) ← ¬inex 0 (X )}
We observe Oinex and can explain it by either E¬cig or by Ecig .
Ecig explains observation Oinex 0 as well and obviously Oinex and Oinex 0 are dualistic.
Contestable Contextual Side-effects - Abductive Rebuttal (2)
The weak completion of g (Pinex ∪ Ecig ) is:
wc g (Pinex ∪ Ecig ) = {add 0 (b) ↔ inex(b) ∧ ¬ab1 (b), add(b) ↔ ¬add 0 (b),
inex(b) ↔ (cig (b) ∧ ¬ab2 (b)) ∨ ¬inex 0 (b), ab2 (b) ↔ ⊥, ab1 (b) ↔ ⊥ ∨ cig (b),
inex 0 (b) ↔ cig (b)} ∪ {cig (b) ↔ >}
and the corresponding least model is:
lmL wc g (Pinex ∪ Ecig ) =h{add(b), ab1 (b), inex(b),
inex 0 (b), cig (b)}, {add 0 (b), ab2 (b)}i
inex 0 (b) is a possibly contested contextual side-effect of explained inex(b).
Extended Syllogism Pinex with Inspect
Accordingly, we can modify Pinex as follows:
Pinex^insp = UPinex ∪ IPinex ∪ (Pinex \ {inex 0 (X ) ← cig (X )}) ∪ {inex 0 (X ) ← inspect(cig (X ))}
With the modified clause we make sure that something can only be
not inexpensive if it has been explained beforehand that it is a cigarette.