Dynamic Proof Theories For Input/Output Logic

Christian Straßer, Mathieu Beirlaen, and Frederik Van De Putte
Centre for Logic and Philosophy of Science
Ghent University
August 28, 2013
Abstract
We translate the unconstrained and constrained input/output-logics
from [17, 18] to reflexive modal logics, using adaptive logics for the constrained case. The resulting reformulation has various advantages. First,
we obtain a proof-theoretic (dynamic) characterization of input/output
logics. Second, we demonstrate that our modal framework naturally gives rise to useful variants. Finally, the modal logics display a gain in expressive power over their original counterparts in the input/output framework.
1 Introduction
Input/output-logic Input/output logic (henceforth I/O-logic) was introduced
by Makinson and van der Torre [17, 18] as a formal tool for modeling reasoning
with conditionals. As argued in [20], its main motivation concerns problems of
deontic logic.1 I/O-logics have been used e.g. to model various types of permissions [19, 28], to capture the dynamics of normative systems and regulations [7],
and to formalize reasoning with contrary-to-duty obligations [20].2
Technically speaking, I/O-logics (without constraints, cf. infra) are operations that map every couple ⟨G, A⟩ to an "output", where (i) G is a set of "input/output pairs" (A, B) where A and B are propositional formulas; (ii) A stands for "input", i.e. a set of propositional formulas; and (iii) the output is also a set of propositional formulas. Exact definitions of the I/O-logics from [17] will be given in Section 2.
In a deontic setting, A usually represents factual information, G is a set of
conditional obligations, and the output consists of what is obligatory, given the
facts at hand. However, as stressed in [17, 18], one may also apply I/O-logic
in other contexts. For instance, in a default logic setting the output can be
1 See also [11] for a survey of ten such problems, which are made formally precise by means
of the I/O-terminology.
2 We speak of I/O-logic (singular) to denote the overall framework that is common to a
number of systems, which we call I/O-logics. Alternatively, we will sometimes call the latter
I/O-operations or I/O-functions.
interpreted as a set of hypotheses, derived from the data A and a set G of
(normal) defaults.
In either of these interpretations, it is often useful to consider a set C of
constraints on the output – this notion was introduced in [18]. C restricts the
output in one of two senses, each corresponding to a different style of reasoning.
We can require the output as a whole to be consistent with C, or we can impose
the weaker requirement that, for each A in the output, {A} ∪ C has to be
consistent. Depending on the application context, C may represent physical
constraints, human rights, etc.
In view of these two styles of reasoning, Makinson and van der Torre define
variants of their I/O-logics. Given a set of I/O-pairs G, the basic idea is to
reason on the basis of maximal subsets of G that produce an output that is
consistent with the constraints in C. See Section 4 for the exact definitions and
more explanation. We will henceforth speak of constrained I/O-logics to denote
these operations.
This paper Our aim is to represent I/O-logics as modal systems. This has
several advantages, which are discussed below. In order to explain these, we
first give a rough sketch of what the representation looks like.
For the unconstrained case, the main idea is that we push meta-level expressions such as A ∈ A or (A, B) ∈ G into the object-level, resulting in expressions
like inA and A → B respectively, where in is a unary modal operator and → a
binary connective. The exact definitions are spelled out in Section 3.
For the constrained case we use a richer, more dynamic formalism. First, in
Section 6, we further extend the modal language – e.g. adding a modal operator con to express constraints. Second, the resulting monotonic modal logic is
strengthened by means of a defeasible mechanism, which mimics the selection
of maximal sets of I/O-pairs at the object-level – see Section 7 for the details.
As regards the second point, we use adaptive logics (henceforth ALs).3 These
are (usually) non-monotonic logics which have a characteristic proof theory that
is sound and complete with respect to a selection semantics in the sense of
[14]. In this paper, we focus on the proof theory of the adaptive systems; their
semantics is given in Appendix E.
Motivation There are various independent motivations for the formal work
presented in the current paper. First and foremost, characterizing I/O-logics by
means of ALs enables us to use dynamic proof theories for modeling reasoning
with conditionals (see Section 7). An important advantage of adaptive proofs
is that they allow for the representation of various forms of the defeasibility of
human reasoning; we illustrate this point in Section 5.
Second, due to its modularity, the modal framework allows for variation in
a controlled and systematic way (see Section 9). For instance, different types
3 ALs have been developed for various types of defeasible reasoning in the past. More
recently, deontic ALs have been used to model reasoning with conflicting obligations and
conditional detachment [4, 5, 10, 21, 29, 31, 33].
of conflict-handling – so-called “strategies” in the AL terminology – result in
alternative, yet equally well-behaved systems, some of which have not yet been
defined in the context of I/O-logic. Also, one can easily alter the set of rules
that govern the conditionals, adapting it to a given application.
Third, our formalism has a greater expressive power than the original framework of I/O-logics (see Section 10). For instance, one can now also express
disjunctive inputs, relations between input and output, conflicts between conditionals, etc. in the object language. As we shall argue in Section 9, this allows
us to model cases of imperfect information in a natural way.
2 Recapitulation: unconstrained I/O logic
Preliminaries Let W be the set of all propositional formulas of classical logic
(CL) with the usual connectives ∧, ∨, ⊃, ¬ and ≡ based on the propositional letters a, b, . . .. W also contains the verum constant ⊤ and the falsum constant ⊥.
We will use capital letters A, B, . . . as meta-variables for propositional formulas.
Where L is a logic, Γ is a set of L-wffs (well-formed formulas) and A is a
wff of L, we write Γ ⊢L A (A ∈ CnL (Γ)) to denote that A is L-derivable from
Γ, and ⊢L A to denote that A is L-derivable from the empty premise set.
Characterizing the I/O-logics In this section, we define the unconstrained
I/O-logics from [17]. These are defined as functions that map each pair ⟨A, G⟩ with a set of facts A ⊆ W and a set of I/O-pairs G ⊆ W × W to an output set outR (G, A) ⊆ W. We sometimes refer to G as the set of generators. As shown
in [17], each of these functions can be characterized in two equivalent ways –
one is called “semantic”, the other “syntactic”. In the remainder of this section
we define the I/O-functions along the lines of the syntactic characterization.
Figure 1: Syntactic characterizations of I/O-logics: general form. (Phase 1, Closure: G is closed under R, yielding GR; A is closed under CL, yielding CnCL (A). Phase 2, Detachment: whenever A ∈ CnCL (A) and (A, B) ∈ GR, B is in the output.)
In the following we will denote sets of rules for I/O-pairs by R. Typical
candidates are:
If A ⊢CL B and (B, C), then (A, C).      (SI)
If A ⊢CL B and (C, A), then (C, B).      (WO)
If (A, B) and (A, C), then (A, B ∧ C).   (AND)
If (A, C) and (B, C), then (A ∨ B, C).   (OR)
If (A, B) and (A ∧ B, C), then (A, C).   (CT)
(A, A).                                  (ID)
(⊤, ⊤).                                  (Z)
By GR we denote the closure of G under R.4 The general form of the syntactic
construction is given by Definition 1.5
Definition 1. outR (G, A) =df {B | for some A ∈ CnCL (A), (A, B) ∈ GR }
Figure 1 gives a schematic representation of this construction.
Table 1 shows how the standard I/O-operations are obtained by combinations of these rules. Note that all of them make use of (Z), (SI), (WO) and (AND), but differ with respect to the other three rules. To cover the border case where G = ∅, one needs to add the (zero premise) rule (⊤, ⊤) in order to ensure that all tautologies are in the output (see [18, p. 157]). In the remainder of this paper, we use R as a metavariable for the resulting eight sets of rules.6
        (Z)  (SI)  (WO)  (AND)  (OR)  (CT)  (ID)
out1     X    X     X     X
out2     X    X     X     X      X
out3     X    X     X     X            X
out4     X    X     X     X      X     X
out+1    X    X     X     X                  X
out+2    X    X     X     X      X           X
out+3    X    X     X     X            X     X
out+4    X    X     X     X      X     X     X

Table 1: The I/O-functions in terms of the rules they use.
When giving examples in this paper, we will only use the operation of simple-minded output out1. Although this operation is arguably too weak for many
4 As usual, the closure of a set X under a set of rules R is the smallest superset of X that
is closed under applications of rules in R.
5 Our definition differs slightly from the original definition in [18], according to which B ∈ outR (G, A) iff (⋆) there are A1, . . . , An ∈ A such that (A1 ∧ . . . ∧ An, B) ∈ GR. For all standard I/O-logics (see below) the definitions are equivalent; more specifically, they are equivalent whenever (SI) is in R. This follows immediately in view of the following equivalence: (⋆) iff there is a C ∈ CnCL (A) such that (C, B) ∈ GR. "(⇒)" is trivial. "(⇐)": By CL-properties there are A1, . . . , An ∈ A such that A1 ∧ . . . ∧ An ⊢ C. By (SI), (A1 ∧ . . . ∧ An, B) ∈ GR.
6 out+2 and out+4 coincide. This was first noted in [18].
concrete applications, this will allow us to focus on the formal novelties that
are introduced in the current paper. For an elaborate discussion of the other
I/O-functions, the reader is referred to [17, 18, 20].
Let us note some general properties of the I/O-functions. The first two facts
show that Definition 1 warrants syntax-independency:7
Fact 1. Where CnCL (A) = CnCL (A′ ), outR (G, A) = outR (G, A′ ).
Fact 2. Where GR = GR′ , outR (G, A) = outR (G ′ , A).
Finally, the output is closed under CL:
Theorem 1. outR (G, A) = CnCL (outR (G, A)).
Proof. (⇒) is trivial. (⇐): Suppose A ∈ CnCL (outR (G, A)). Let B1, . . . , Bn ∈ outR (G, A) be such that (†) B1 ∧ . . . ∧ Bn ⊢ A. By Definition 1, there are Ci ∈ CnCL (A) (i ≤ n) such that (Ci, Bi) ∈ GR. By (SI), (C1 ∧ . . . ∧ Cn, Bi) ∈ GR for each i ≤ n. By (AND), (C1 ∧ . . . ∧ Cn, B1 ∧ . . . ∧ Bn) ∈ GR. By (WO) and (†), (C1 ∧ . . . ∧ Cn, A) ∈ GR. Since C1 ∧ . . . ∧ Cn ∈ CnCL (A), also A ∈ outR (G, A).
Example 1 illustrates the interplay between input, generators and output for
the specific case of out1 .
Example 1. Let A = {a, b}, G = {(a, c), (b, d), (a ∧ b, e)}. By (SI) we can derive
(a ∧ b, c) and (a ∧ b, d) from (a, c) and (b, d) respectively. By (AND), we obtain
(a ∧ b, c ∧ d). Since a ∧ b ∈ CnCL (A), (c ∧ d) ∈ out1 (G, A).
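Example 1 can be checked mechanically. The following sketch (Python, with hypothetical names; a naive truth-table entailment check over a fixed set of atoms) decides membership in out1 by collecting the heads of all pairs whose bodies are CL-entailed by the input, and then closing them under CL:

```python
from itertools import product

# A formula is a function from a valuation (dict atom -> bool) to bool.
ATOMS = ["a", "b", "c", "d", "e"]

def atom(x): return lambda v: v[x]
def conj(f, g): return lambda v: f(v) and g(v)

def entails(premises, conclusion):
    """Classical entailment by truth tables: premises |-CL conclusion."""
    for vals in product([True, False], repeat=len(ATOMS)):
        v = dict(zip(ATOMS, vals))
        if all(p(v) for p in premises) and not conclusion(v):
            return False
    return True

def in_out1(G, A, B):
    """B in out1(G, A): a pair (body, head) is triggered when A |- body;
    by (SI), (AND) and (WO), out1(G, A) is the CL-closure of the
    triggered heads."""
    heads = [head for body, head in G if entails(A, body)]
    return entails(heads, B)

a, b, c, d, e = map(atom, ATOMS)
G = [(a, c), (b, d), (conj(a, b), e)]
A = [a, b]
print(in_out1(G, A, conj(c, d)))  # True: (c ∧ d) ∈ out1(G, A)
print(in_out1(G, A, e))           # True: e ∈ out1(G, A)
print(in_out1(G, A, a))           # False: (ID) is not in the rule set
```

The brute-force check is exponential in the number of atoms, which is harmless for toy examples like this one.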
3 A modal translation
In this section, we provide a modal logic characterization of the I/O-operations
from the preceding section.8 As we will argue in Section 10, the resulting systems
allow us to model reasoning about inputs, conditionals and output in a natural
and very expressive language.
Where, as before, W is the set of well-formed formulas of CL, the set W′ of well-formed formulas of the logic MIOR is given by the following grammar:

W′ := ⟨W⟩ | in⟨W⟩ | out⟨W⟩ | ⟨W⟩ → ⟨W⟩ | ¬⟨W′⟩ | ⟨W′⟩ ∨ ⟨W′⟩ | ⟨W′⟩ ∧ ⟨W′⟩ | ⟨W′⟩ ⊃ ⟨W′⟩ | ⟨W′⟩ ≡ ⟨W′⟩
The modal operators in and out are used to model input, resp. output. Note
that no nested occurrences of in or out are allowed. Both are characterized as
KD-modalities. Where π ∈ {in, out}:
⊢ π(A ⊃ B) ⊃ (πA ⊃ πB)   (Kπ)
⊢ πA ⊃ ¬π¬A              (Dπ)
If ⊢CL A then ⊢ πA       (NECπ)

7 Fact 1 does not hold in general for the original definition (see Footnote 5) in case SI is not derivable from R. Examples can easily be constructed.
8 In [17] Makinson and van der Torre also present modal characterizations of some (unconstrained) I/O-functions. However, their translation does not cover the four cases where (OR) is invalid. Moreover, it is hard to see how this translation can be adjusted to the context of constrained I/O-logics. For this reason, we present a different – more modular – modal characterization. We return to this point in Section 9.
I/O-pairs (A, B) are written as conditionals of the form A → B. We also do
not allow for nestings of →. Outputs are generated from inputs and I/O-pairs,
using the following ‘detachment axiom’:
⊢ (inA ∧ (A → B)) ⊃ outB
(DET)
Finally, we translate the set of rules, by replacing each pair (A, B) with
a conditional (A → B). For instance, the rules (SI), (WO) and (AND) are
translated as follows:
If A ⊢CL B and B → C, then A → C          (SI)
If A ⊢CL B and C → A, then C → B          (WO)
If A → B and A → C, then A → (B ∧ C)      (AND)
Where R is given, let R→ denote the associated set of rules for conditionals.
Definition 2 (Modal I/O-logics). The logic MIOR is defined by adding all
of (Kin), (Din), (Kout), (Dout), (DET) to the axioms of CL, and closing the
resulting set under (NECin), (NECout), R→ and modus ponens (MP).
Note that, in view of (SI) and (WO) respectively, the conditional → satisfies
replacement of equivalents both for its antecedent and its consequent:
If ⊢CL A ≡ A′ , then ⊢MIOR (A → B) ≡ (A′ → B)
(REA)
If ⊢CL B ≡ B ′ , then ⊢MIOR (A → B) ≡ (A → B ′ )
(REC)
‘Premises’ in I/O-logic are given by the set A of factual input and the set G
of I/O-pairs. In our modal framework, we translate them to a set ΓG,A :
Definition 3. ΓG,A = {inA | A ∈ A} ∪ {A → B | (A, B) ∈ G}
This completes our modal characterization of the I/O-functions, resulting in
the following representation theorem:9
Theorem 2. Where A is CL-consistent: A ∈ outR (G, A) iff ΓG,A ⊢MIOR outA.
Note that our representation theorem does not cover the border case where
A is inconsistent. In this case, it follows from the KD-properties of in that
ΓG,A ⊢MIOR ⊥ and hence also ΓG,A ⊢MIOR outA for all A ∈ W. On the other
hand, where A is inconsistent, outR (G, A) need not be trivial (in case (ID) is
not in R). Take the simple example G = ∅ in which case outR (G, A) = CnCL (∅).
In the remainder, let MIO1 denote the modal logic that corresponds to
the operation of simple-minded output out1 defined in the previous section.
We briefly illustrate MIO1 by our previous example. Recall, A = {a, b} and
9 See Section A in the appendix for the proof.
G = {(a, c), (b, d), (a ∧ b, e)}. Applying Definition 3, we obtain the premise set
Γ1 = {ina, inb, a → c, b → d, (a ∧ b) → e}. By the KD-properties of in, we can
derive in(a ∧ b) from the first two premises. By the rule (SI), we can derive
(a ∧ b) → c and (a ∧ b) → d from the third and fourth premise respectively.
Applying (AND), we obtain (a ∧ b) → (c ∧ d). Finally, by (DET) and (MP), we
can derive out(c ∧ d) from in(a ∧ b) and (a ∧ b) → (c ∧ d). Also, from in(a ∧ b)
and the premise (a ∧ b) → e, we can derive oute by means of (DET) and (MP).
4 Recapitulation: constrained I/O logic
In [18], Makinson and van der Torre extend their I/O-framework in order to deal
with excess output. Such an excess can arise in various ways. The output may be
inconsistent per se, or the output may be inconsistent with the input. Suppose,
for instance, that G = {(⊤, ¬a), (a, b)} and A = {a}. Then out1 (G, A) =
Cn({¬a, b}) is consistent, but inconsistent with the input a. For the operations
that use the rule (ID), both types of excess coincide.
More generally, one may also think of excessive output as output which
conflicts with certain physical, practical or normative principles. To cover all
such cases, a constraint set C ⊆ W is introduced. The cases C = ∅ and C = A
allow us to express consistency of output, and its consistency with input A
respectively.
The strategy used by Makinson and van der Torre for eliminating excess
output is “to cut back the set of generators to just below the threshold of yielding
excess” [18, p. 160]. Using well-known techniques from nonmonotonic logic, they
prune the set of generators to obtain its maximal non-excessive subsets.
Definition 4. maxfamilyR (G, A, C) is the family of all maximal H ⊆ G such that
outR (H, A) is consistent with C.
Analogous to the formalisms familiar from logics of belief change and nonmonotonic logic, Definition 4 gives rise to operations of full meet and full join
constrained output.
Definition 5 (Full meet constrained output). out∩R (G, A, C) =df ⋂{outR (H, A) | H ∈ maxfamilyR (G, A, C)}

Definition 6 (Full join constrained output). out∪R (G, A, C) =df ⋃{outR (H, A) | H ∈ maxfamilyR (G, A, C)}

Conventionally we set out∩R (G, A, C) = W and out∪R (G, A, C) = W for the border case that maxfamilyR (G, A, C) = ∅. Note that this is exactly the case in which outR (∅, A) ∪ C is not consistent.
As mentioned in the introduction, there are two ways in which one may pose
constraints on the output. According to the first, the output set as a whole is
required to be consistent with C; according to the second, each formula A in the
output is required to be consistent with C. Definition 5 gives us an operation
that respects the first requirement, whereas Definition 6 results in an operation
that respects the second.
Example 2. Let again G = {(a, c), (b, d), (a ∧ b, e)} and A = {a, b}. We add the constraint set C = {¬(c ∧ d)}. We have: maxfamily1 (G, A, C) = {H, H′}, where H = {(a, c), (a ∧ b, e)} and H′ = {(b, d), (a ∧ b, e)}. By Definition 5, c ∉ out∩1 (G, A, C) and d ∉ out∩1 (G, A, C). However, c ∨ d ∈ out∩1 (G, A, C). To see why, note that by (WO), (a, c ∨ d) ∈ HR and (b, c ∨ d) ∈ H′R. Hence c ∨ d ∈ out1 (H, A) ∩ out1 (H′, A).
By Definition 6, c ∈ out∪1 (G, A, C) and d ∈ out∪1 (G, A, C). However, c ∧ d ∉ out∪1 (G, A, C).
This also illustrates that Theorem 1 does not hold for out∪R (although it holds for out∩R, as can easily be shown). Facts 1 and 2 hold for out∩R and out∪R.
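Definitions 4-6 are directly computable for finite examples. The sketch below (Python, hypothetical names; brute-force truth tables, out1 only) computes maxfamily1 by checking all subsets of G for consistency of their output with the constraints, and then tests membership in the full meet and full join output of Example 2:

```python
from itertools import product, combinations

ATOMS = ["a", "b", "c", "d", "e"]

def entails(premises, conclusion):
    """Classical entailment by truth tables."""
    for vals in product([True, False], repeat=len(ATOMS)):
        v = dict(zip(ATOMS, vals))
        if all(p(v) for p in premises) and not conclusion(v):
            return False
    return True

def consistent(formulas):
    return not entails(formulas, lambda v: False)

def out1_heads(G, A):
    """Heads of the pairs in G triggered by input A."""
    return [head for body, head in G if entails(A, body)]

def maxfamily(G, A, C):
    """All maximal H ⊆ G with out1(H, A) consistent with C (Definition 4)."""
    fam = []
    for n in range(len(G), -1, -1):       # largest subsets first
        for H in combinations(G, n):
            if consistent(out1_heads(list(H), A) + C) and \
               not any(set(H) < set(H2) for H2 in fam):
                fam.append(H)
    return fam

def in_meet(G, A, C, B):
    """B ∈ out1∩(G, A, C): B follows from every selected output.
    (For empty maxfamily, all() is vacuously True, matching the
    convention out1∩ = W.)"""
    return all(entails(out1_heads(list(H), A), B)
               for H in maxfamily(G, A, C))

def in_join(G, A, C, B):
    """B ∈ out1∪(G, A, C): B follows from at least one selected output."""
    fam = maxfamily(G, A, C)
    if not fam:                            # border case: out1∪ is set to W
        return True
    return any(entails(out1_heads(list(H), A), B) for H in fam)

def atom(x): return lambda v: v[x]
a, b, c, d, e = map(atom, ATOMS)
conj = lambda f, g: (lambda v: f(v) and g(v))
disj = lambda f, g: (lambda v: f(v) or g(v))
neg = lambda f: (lambda v: not f(v))

G = [(a, c), (b, d), (conj(a, b), e)]
A = [a, b]
C = [neg(conj(c, d))]
print(len(maxfamily(G, A, C)))        # 2: the sets H and H' of Example 2
print(in_meet(G, A, C, disj(c, d)))   # True: c ∨ d is in the full meet output
print(in_meet(G, A, C, c))            # False
print(in_join(G, A, C, c))            # True
print(in_join(G, A, C, conj(c, d)))   # False
```

The subset enumeration is exponential in |G|, which mirrors the combinatorial character of maxfamily itself; the sketch is meant for small illustrations only.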
In [18, Section 6] Makinson and van der Torre investigate constrained I/O-logics in terms of so-called constrained derivations. This way, they obtain a syntactic characterization of the full join constrained I/O-operations for the specific case where C = A = {A} for an A ∈ W (see [18, Observation 9]). In Sections 7 and 8, we provide a clear-cut proof theory for all constrained I/O-operations, for the general case of arbitrary A ⊆ W and C ⊆ W.
5 Modeling defeasibility

The constrained I/O-logics out∩R and out∪R are non-monotonic (unlike the unconstrained I/O-logics): adding input may result in the loss of certain output.10 For instance, where A = {a}, G = {(a, b), (c, ¬b)}, we have b ∈ out∩1 (G, A), whereas b ∉ out∩1 (G, A ∪ {c}).
In Pollock’s terminology [24], this feature of I/O-logics sheds light on the
synchronic defeasibility of reasoning with conditionals: new information may
defeat some of our previous conclusions. However, what is still lacking, is a
generic framework that has the expressive power of I/O-logics, but allows us
to model what Pollock dubs diachronic defeasibility [24].11 That is, we often
retract conclusions simply because we gained more insight into the facts at hand,
without having added any input.
Suppose, for instance, that a lawyer has read the entire file – about 8,000
pages – of a murder case, in preparation of her plea at the trial. It is sensible
to assume that she takes into account the most obvious consequences of the
statements made in this file. However, we cannot expect her to notice every
possible anomaly or argument, given that she has a limited amount of time
before the trial takes place. So after it turns out that the case is suspended for
10 The same applies mutatis mutandis to the addition of I/O-pairs for out∩R. See [18, Appendix A.1] for an overview of monotony and antitony properties of constrained I/O-logics.
11 In [2], the terms external and internal dynamics are used to refer to what Pollock calls diachronic, resp. synchronic defeasibility.
another couple of weeks, she may well study the file further, and upon doing so
realize that some of her conclusions require revisions.
A second, related point concerns what we call case-specific defeasibility. That
is, when reasoning defeasibly, we usually retract inferences locally, not globally.12
We do not start reasoning all over from scratch whenever new input is added, or
whenever we have gained the insight that something is wrong with our previous
conclusions – we rather tend to just withdraw some of our previous inferences,
but stick to all others that remain unproblematic. However, to do so, one has to
consider the output not in terms of an abstract operation, but as the result of
concrete and distinct inferences from certain premises. Only then will it become
clear which of those inferences should be withdrawn in view of the new evidence.
This is where ALs are convenient. More specifically, in an adaptive proof,
one can “jump” to a conclusion by relying on defeasible assumptions, and there
is a simple mechanism that models the retraction of certain conclusions. This
way, ALs allow us to explicate defeasibility in a formally precise way. Moreover,
the fact that one has to keep track of the assumptions needed to obtain a given
conclusion, enables us to model the case-specific retraction of conclusions in the
light of new input. Both aspects of the adaptive proof theory will be further
illustrated and made formally precise in Section 7.
6 An extended modal representation
Before we can provide proof theories for the constrained I/O-operations, we
need to extend our modal characterization from Section 3. We define a new
language Wᶜ which extends W′ in two ways. First we add a modal operator con for modeling constraints. Second, we add a unary operator • which we use for prefixing I/O-pairs in the set G (the role of the •-operator is addressed below).

Wᶜ := ⟨W⟩ | in⟨W⟩ | out⟨W⟩ | con⟨W⟩ | ⟨W⟩ → ⟨W⟩ | •(⟨W⟩ → ⟨W⟩) | ¬⟨Wᶜ⟩ | ⟨Wᶜ⟩ ∨ ⟨Wᶜ⟩ | ⟨Wᶜ⟩ ∧ ⟨Wᶜ⟩ | ⟨Wᶜ⟩ ⊃ ⟨Wᶜ⟩ | ⟨Wᶜ⟩ ≡ ⟨Wᶜ⟩
The modal operator con is characterized as a KD-modality:
⊢ con(A ⊃ B) ⊃ (conA ⊃ conB)   (Kcon)
⊢ conA ⊃ ¬con¬A                (Dcon)
If ⊢CL A then ⊢ conA           (NECcon)
Moreover, we add an axiom schema expressing that outputs should respect
constraints:
⊢ conA ⊃ ¬out¬A
(ROC)
12 See also [34] where a similar case is made for belief revision.
Definition 7. CMIOR is obtained by adding (Kcon), (Dcon), and (ROC) to
the axioms of MIOR and by closing the resulting set under (NECin), (NECout),
(NECcon), R→ and modus ponens.13
As mentioned above, Makinson and van der Torre deal with constraints
by pruning the generating set G. We introduce the bullet operator • for the
object-level representation of this selection of I/O-pairs or, in our framework,
of conditionals. Whereas in the constrained I/O-systems I/O-pairs in G are
selected, in our system conditionals are activated by means of removing the
bullet. First, I/O-pairs in G are prefixed with the • operator, which functions
as a dummy operator. Next, a non-activated conditional •(A → B) is activated
by inferring from it the conditional A → B. Once activated, we can detach the
output of triggered conditionals by means of the rule (DET) from Section 3.
Definition 3 is adjusted accordingly:
Definition 8. ΓG,A,C =df {inA | A ∈ A} ∪ {•(A → B) | (A, B) ∈ G} ∪ {conA | A ∈ C}
For instance, where as before, G = {(a, c), (b, d), (a ∧ b, e)}, A = {a, b} and
C = {¬(c ∧ d)}, this is translated into the premise set Γ1 = {•(a → c), •(b →
d), •((a ∧ b) → e), ina, inb, con¬(c ∧ d)}.
The systems characterized in Definition 7 are not yet equipped with a rule
for activating conditionals. We cannot simply add the rule “If •(A → B), then
A → B”. Instead, we need a logic that can distinguish between cases in which
activating a conditional is sensible and cases in which it is not. For instance,
given the set Γ = {ina, inb, •(a → c), •(b → d), con¬d}, we want to be able to derive a → c from •(a → c) so that outc is derivable by means of (DET). However, given the constraint ¬d we also want to block the derivation of b → d from •(b → d), since otherwise we could again apply (DET) in order to detach outd.
What we are looking for, then, is a defeasible mechanism for strengthening
the logics from Definition 7. That is, we are looking for a mechanism that
enables us to infer activated conditionals A → B from non-activated conditionals
•(A → B), in such a way that B is in the output of G given A whenever A ∈ A
and (A, B) ∈ G unless e.g. there is a constraint preventing the derivation of
B. In what follows, we characterize such a mechanism which is rich enough to
characterize, for instance, all the I/O-functions defined in Section 4.
7 Dynamic proofs for I/O-logic

Adaptive logics. The proof theory to be provided for the constrained I/O-operations from Section 4 is that of the adaptive logics framework (see e.g. [2, 30]
for a general introduction). An AL is usually characterized as a triple:
13 In view of the (NECout) and the K-properties of con, (Dcon) can be derived using (ROC):
By the contraposition of (ROC) and out¬⊥ we get ¬con⊥. This is K-equivalent to ¬(conA ∧
con¬A) and hence to (Dcon).
(a) ALs are built ‘on top’ of a lower limit logic (LLL). The AL allows for
the application of all inferences valid in the LLL. The LLL has to be
monotonic, transitive, reflexive and compact.14
(b) ALs strengthen their LLL by considering a set of formulas as false ‘as
much as possible’. This set of formulas is called the set of abnormalities
and is denoted by Ω.
(c) The phrase ‘as much as possible’ in (b) is disambiguated by an adaptive
strategy. The strategy further specifies how to proceed in cases where e.g.
we know that at least one of a number of abnormalities is true, but we do
not know which.
We will present and illustrate the dynamic proof theory of ALs by means of the ALs MIO∩1 and MIO∪1, the adaptive counterparts of out∩1 and out∪1 respectively. In the next section, we move to a more general level and define the adaptive counterparts of all constrained I/O-operations.
Let us start with MIO∩1. The LLL of this system is the logic CMIO1 as defined in Section 6. Its set Ω of abnormalities is defined as follows:
Ω =df {•(A → B) ∧ ¬(A → B) | A, B ∈ W}
In what follows, we use ‡(A → B) as an abbreviation for •(A → B) ∧ ¬(A → B). Abnormalities are generated, for instance, when a conditional is 'triggered' by the input while its consequent violates a constraint. Let, for example, G = {(a, ¬b)}, A = {a}, and C = {b}, such that Γ1 = ΓG,A,C = {•(a → ¬b), ina, conb}. We show that the abnormality ‡(a → ¬b) is a CMIO1-consequence of Γ1.
By (ROC), we know that ¬out¬b is derivable from conb. By (DET) and CL, it follows that ¬(ina ∧ (a → ¬b)). Since ina ∈ Γ1, it follows by CL that Γ1 ⊢CMIO1 •(a → ¬b) ∧ ¬(a → ¬b), i.e. Γ1 ⊢CMIO1 ‡(a → ¬b).
The motor that drives the activation of conditionals is the (defeasible) assumption that abnormalities are false 'as much as possible'. Assume for instance that •(a → b). Then, since ⊢CMIO1 (a → b) ∨ ¬(a → b), it follows that •(a → b) ⊢CMIO1 (a → b) ∨ ‡(a → b). If we can safely assume the second disjunct to be false, then the first must be true. This is, on an intuitive level, how the adaptive proof theory will allow us to activate conditionals. Let us now make this idea formally precise.
Adaptive proofs. A line in an annotated adaptive proof consists of four
elements: a line number l, a formula A, a justification (consisting of a list of
line numbers and a derivation rule), and a condition ∆. The condition of a line
is a (possibly empty) finite set of abnormalities. Intuitively, we interpret a line
14 A logic L is reflexive iff, for all sets Γ of wffs of L, and for all L-wffs A, Γ ⊆ Cn (Γ); it
L
is transitive iff, for all sets of wffs Γ and Γ′ , if Γ′ ⊆ CnL (Γ) then CnL (Γ ∪ Γ′ ) ⊆ CnL (Γ); it
is monotonic iff, for all sets of wffs Γ and Γ′ , CnL (Γ) ⊆ CnL (Γ ∪ Γ′ ); and it is compact iff (if
Γ ⊢L A, then Γ′ ⊢L A for some finite Γ′ ⊆ Γ).
11
l at which a formula A is derived on the condition ∆ as “At line l of the proof,
A is derived on the assumption that all members of ∆ are false”.
ALs have three generic rules of inference. The first is a premise introduction
rule PREM, which allows premises to be introduced on the empty condition at
any stage in the proof.
PREM   If A ∈ Γ:   ...   ...
                   A     ∅
The second rule is the unconditional rule RU, which allows for the use of all
CMIO1 -inferences:
RU   If A1 , . . . , An ⊢CMIO1 B:   A1   ∆1
                                    ...  ...
                                    An   ∆n
                                    B    ∆1 ∪ . . . ∪ ∆n
The third generic rule of inference is the conditional rule RC. Where Θ is
a finite set of abnormalities, let Dab(Θ) denote the classical disjunction of the
members of Θ.15 The rule RC is defined as follows:
RC   If A1 , . . . , An ⊢CMIO1 B ∨ Dab(Θ):   A1   ∆1
                                             ...  ...
                                             An   ∆n
                                             B    ∆1 ∪ . . . ∪ ∆n ∪ Θ
RC is the rule that allows for the activation (i.e., the removal of the '•'-prefix) and the subsequent detachment of conditionals. For instance, let Γ2 = {•(a → b), ina}. We can start a MIO∩1-proof by introducing the members of Γ2 by means of the rule PREM:

1   •(a → b)   PREM   ∅
2   ina        PREM   ∅
By CL we know that •(a → b) ⊢CMIO1 (a → b) ∨ ‡(a → b). Hence we can derive (a → b) ∨ ‡(a → b) by an application of RU to line 1:

3   (a → b) ∨ ‡(a → b)   1; RU   ∅
Note that the second disjunct of the formula derived at line 3 is a member of Ω. By RC, we can move this abnormality to the condition column:

4   a → b   3; RC   {‡(a → b)}
At stage 4 of the proof, we have derived the formula a → b on the assumption that the abnormality ‡(a → b) is false. The conditional at line 1 has been activated at line 4. By (DET), we know that ina ∧ (a → b) ⊢CMIO1 outb. Hence we can detach the output outb by means of RU:

5   outb   2,4; RU   {‡(a → b)}

15 "Dab" abbreviates "Disjunction of Abnormalities". If Θ is a singleton {A}, then Dab(Θ) = A.
Note that, as required by the definition of RU, the condition of line 4 is
carried over to line 5. The example shows how activated conditionals, when
triggered by a matching input, can be used to detach the output.
The inference of outb from •(a → b) and ina is conditional. It depends on our assumption that ‡(a → b) is false. As we will now illustrate, assumptions made in adaptive proofs are sometimes treated as inadmissible. In such cases, all inferences that depend on the assumptions in question are retracted from the proof.
Retracting inferences. Consider the following adaptive proof from Γ3 = {•(a → b), ina, con¬b}:

1   •(a → b)   PREM      ∅
2   ina        PREM      ∅
3   con¬b      PREM      ∅
4   outb       1,2; RC   {‡(a → b)}
As there is a constraint prohibiting that b is in the output, it seems that we
have jumped to an incorrect conclusion. In order to deal with such cases, ALs
are equipped with a mechanism that determines the retraction or ‘marking’ of
lines of which the condition can no longer be upheld.
Note that, by (ROC), con¬b ⊢CMIO1 ¬outb. By (DET) and CL, con¬b ⊢CMIO1 ¬(ina ∧ (a → b)). As ina, con¬b ∈ Γ3, it follows that Γ3 ⊢CMIO1 ¬(a → b). But then Γ3 ⊢CMIO1 ‡(a → b), which falsifies our assumption made at line 4:

4   outb        1,2; RC   {‡(a → b)}   X5
5   ‡(a → b)    1-3; RU   ∅

At line 4 we mistakenly assumed that ‡(a → b) is false. Once our assumption is shown to be inadmissible (at line 5), line 4 is marked (using the symbol X5), which means that this inference is now withdrawn from the proof.
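The marking mechanism just illustrated can be mimicked in a few lines. The sketch below (Python, all names hypothetical) writes "abn(a -> b)" for the abnormality •(a → b) ∧ ¬(a → b); it covers only the simple case in which the relevant abnormalities are themselves derived on the empty condition, so that a line is marked as soon as some member of its condition has been so derived:

```python
# Toy representation of annotated proof lines:
# (number, formula, justification, condition).
lines = [
    (1, "•(a -> b)",   "PREM",    set()),
    (2, "in a",        "PREM",    set()),
    (3, "con ~b",      "PREM",    set()),
    (4, "out b",       "1,2; RC", {"abn(a -> b)"}),
    (5, "abn(a -> b)", "1-3; RU", set()),
]

# Abnormalities derived on the empty condition falsify assumptions outright.
derived = {f for _, f, _, cond in lines if not cond and f.startswith("abn(")}

def marked(line):
    """A line is marked (withdrawn) iff its assumption set is falsified."""
    _, _, _, condition = line
    return bool(condition & derived)

for line in lines:
    tag = "  X" if marked(line) else ""
    print(line[0], line[1], tag)   # line 4 is printed with the mark X
```

The general marking definitions below refine this idea: they must also handle disjunctions of abnormalities, none of whose disjuncts is derivable on its own.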
The retraction mechanism is governed by a marking definition, which depends on the adaptive strategy. In the remainder of this section, we present two such strategies and their respective marking definitions. They correspond to different 'styles' of reasoning (skeptical versus credulous), as we will now illustrate.
Consider a proof from Γ4 = {•(a → c), •(b → d), ina, inb, con¬(c ∧ d)}:

1   •(a → c)       PREM      ∅
2   •(b → d)       PREM      ∅
3   ina            PREM      ∅
4   inb            PREM      ∅
5   con¬(c ∧ d)    PREM      ∅
6   outc           1,3; RC   {‡(a → c)}
7   outd           2,4; RC   {‡(b → d)}
As out is a normal modal operator, we can aggregate the formulas derived at lines 6 and 7:

8   out(c ∧ d)   6,7; RU   {‡(a → c), ‡(b → d)}
Clearly, something went wrong here. As there is a constraint prohibiting that the conjunction of c and d is in the output, we were too hasty in deriving out(c ∧ d). And indeed, although no abnormality is by itself derivable from the premises, the disjunction of abnormalities ‡(a → c) ∨ ‡(b → d) is a CMIO1-consequence of Γ4:16

9   ‡(a → c) ∨ ‡(b → d)   1-5; RU   ∅
At stage 9 of the proof, we know that one of the abnormalities •(a → c) ∧
¬(a → c) or •(b → d) ∧ ¬(b → d) holds, but we lack the information to determine
which one. Clearly, line 8 should be retracted, as at that line we assumed both
abnormalities to be false. But what about lines 6 and 7? If only the abnormality
•(a → c) ∧ ¬(a → c) turns out to be derivable, then our assumption at line 7
that •(b → d) ∧ ¬(b → d) is false can safely be upheld. If only •(b → d) ∧
¬(b → d) turns out to be derivable, then the same holds for our assumption at
line 6 that •(a → c) ∧ ¬(a → c) is false.
In view of this information, a skeptical reasoner may consider both assumptions
too strong. As a result, she would retract lines 6 and 7 from the proof.
A more credulous reasoner, on the other hand, may continue to reason on one
assumption or the other, as neither of them has been falsified beyond doubt. The
credulous reasoner, then, would leave lines 6 and 7 unmarked.
The minimal abnormality strategy explicates the more skeptical style of
reasoning, while the normal selections strategy explicates the more credulous
one. MIO∩1 uses the minimal abnormality strategy, and hence models the more
skeptical type of reasoning.
The minimal abnormality strategy. The marking definition for the minimal
abnormality strategy requires some further terminology. A Dab-formula Dab(∆)
derived at stage s is minimal at that stage iff Dab(∆) is derived on the empty
condition, and no Dab(∆′) with ∆′ ⊂ ∆ is derived on the empty condition at
stage s. Note that in the above proof, the only (minimal) Dab-formula at stage
9 is the disjunction derived at line 9.
Let a choice set of Σ = {∆1, ∆2, . . .} be a set that contains one element
out of each member of Σ. A minimal choice set of Σ is a choice set of Σ of
which no proper subset is a choice set of Σ. Where Dab(∆1), Dab(∆2), . . . are
the minimal Dab-formulas at stage s of a proof, Φs(Γ) is the set of minimal
choice sets of {∆1, ∆2, . . .}. So in the above proof, we have Φ9(Γ4) =
{{•(a → c) ∧ ¬(a → c)}, {•(b → d) ∧ ¬(b → d)}}.
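The computation of minimal choice sets is purely combinatorial, so it can be sketched in a few lines of code. The sketch below is only an illustration, not part of the formal apparatus: members of Ω are represented by Python strings (the "!" labels are hypothetical shorthand for the abnormalities), and Σ is a list of sets.

```python
from itertools import product

def minimal_choice_sets(sigma):
    """Minimal choice sets of Sigma = {Delta_1, Delta_2, ...}: pick one
    element out of each member, then keep the subset-minimal picks."""
    sigma = [set(delta) for delta in sigma]
    if not sigma:
        return [set()]
    # every minimal choice set arises as the image of such a selection
    candidates = {frozenset(pick) for pick in product(*sigma)}
    return [set(c) for c in candidates
            if not any(other < c for other in candidates)]

# stage 9 of the proof above: the single minimal Dab-formula has the
# condition-set {!(a -> c), !(b -> d)} (strings are mere labels here)
phi = minimal_choice_sets([{"!(a->c)", "!(b->d)"}])
```

Since the only member of Σ has two elements, the two singletons are the minimal choice sets, matching Φ9(Γ4) above.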
Marking for the minimal abnormality strategy is defined as follows:
Definition 9. Where A is derived at line l of a proof from Γ on a condition ∆,
line l is marked at stage s iff
16 By (ROC), (†) con¬(c ∧ d) ⊢CMIO1 ¬out(c ∧ d). By (DET) and CL, ina ⊢CMIO1
¬(a → c) ∨ outc and inb ⊢CMIO1 ¬(b → d) ∨ outd. Altogether, ina, inb ⊢CMIO1
¬(a → c) ∨ ¬(b → d) ∨ (outc ∧ outd). By KD, ina, inb ⊢CMIO1 ¬(a → c) ∨
¬(b → d) ∨ out(c ∧ d). But then, by (†) and CL, con¬(c ∧ d), ina, inb ⊢CMIO1
¬(a → c) ∨ ¬(b → d). By CL again, Γ4 ⊢CMIO1 (•(a → c) ∧ ¬(a → c)) ∨
(•(b → d) ∧ ¬(b → d)).
(i) there is no Θ ∈ Φs (Γ) such that Θ ∩ ∆ = ∅, or
(ii) for some Θ ∈ Φs (Γ), there is no line at which A is derived on a condition
∆′ for which Θ ∩ ∆′ = ∅.
In view of this definition it follows that all of lines 6–8 are marked, if we
use the minimal abnormality strategy. For line 8, this is obvious: clause (i) of
Definition 9 clearly obtains. For lines 6 and 7, clause (i) fails, but clause (ii)
holds. For instance, there is no line in the proof at which outc is derived on a
condition that has an empty intersection with {•(a → c) ∧ ¬(a → c)}.
We now illustrate why Definition 9 refers to lines other than l in its clause
(ii). Suppose we continue our proof, weakening the output obtained at lines 6
and 7:

10  out(c ∨ d)  6; RU  {•(a → c) ∧ ¬(a → c)}
11  out(c ∨ d)  7; RU  {•(b → d) ∧ ¬(b → d)}

Note that, since we have not derived any new Dab-formulas, Φ11(Γ4) =
Φ9(Γ4) = {{•(a → c) ∧ ¬(a → c)}, {•(b → d) ∧ ¬(b → d)}}. Note also that for
each Θ ∈ Φ11(Γ4), out(c ∨ d) is derived on a condition that does not intersect
with it. Hence, by Definition 9, lines 10 and 11 are unmarked at stage 11 of
the proof.
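The two clauses of Definition 9 can be sketched directly in code. This is only an illustrative sketch under simplifying assumptions: a proof line is a pair (formula, condition), conditions and choice sets are Python sets of abnormality labels, and Φs(Γ) is passed in as a list of such sets.

```python
def marked_min_abnormality(line, lines, phi):
    """Definition 9 (minimal abnormality): a line is marked iff
    (i) no minimal choice set avoids its condition, or
    (ii) some minimal choice set is avoided by no line on which
         the same formula is derived."""
    formula, cond = line
    # clause (i)
    if not any(not (theta & cond) for theta in phi):
        return True
    # clause (ii)
    for theta in phi:
        if not any(f == formula and not (theta & c) for f, c in lines):
            return True
    return False

# lines 6-11 of the proof from Gamma_4 (conditions as label sets)
AC, BD = "!(a->c)", "!(b->d)"
lines = [("outc", {AC}), ("outd", {BD}), ("out(c&d)", {AC, BD}),
         ("out(c|d)", {AC}), ("out(c|d)", {BD})]
phi = [{AC}, {BD}]
marks = [marked_min_abnormality(l, lines, phi) for l in lines]
```

On this input the sketch marks lines 6, 7 and 8 and leaves lines 10 and 11 unmarked: out(c ∨ d) is derived on two conditions, and each choice set is avoided by one of them.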
Two notions of derivability. In adaptive proofs, markings may come and
go. Certain lines may be marked at some stage s, while they may not be marked
at a different stage s′ . This can be so for several reasons:
(i) New minimal Dab-formulas may be derived at stage s′ — hence certain
assumptions which were tenable at s are no longer tenable at s′ .
(ii) Some Dab(∆) may be a minimal Dab-formula at stage s, but not at a later
stage s′ — hence certain assumptions which were not tenable at stage s
may again become tenable at stage s′ .
(iii) Where A was already derived on the conditions ∆1 , . . . , ∆n at stage s, it
may be derived on additional conditions Θ1 , . . . , Θm at stage s′ — hence at
s′ we have shown that A can be derived by relying on different assumptions.
For an example of (iii), consider again the last proof from this section. At
stage 10, out(c ∨ d) is only derived on the condition {•(a → c) ∧ ¬(a → c)},
which intersects with the minimal choice set {•(a → c) ∧ ¬(a → c)}. Hence, at
this stage, line 10 is marked. However, at the next stage, we have also derived
out(c ∨ d) on the condition {•(b → d) ∧ ¬(b → d)}. As a result, line 10 is
unmarked at stage 11.
A dynamic, stage-dependent notion of derivability is easy to define:
Definition 10. A formula A has been derived at stage s of an adaptive proof
iff, at that stage, A is the second element of some unmarked line l.
This notion of derivability provides a formal explication of Pollock’s synchronic type of defeasibility (see Section 5). Throughout an adaptive proof,
we not only derive more formulas, but also block the derivation of formulas we
previously considered derivable.
Apart from the stage-dependent derivability relation, we also need a
stage-independent, 'final' notion of derivability in order to define a syntactic
consequence relation for our logics.
Definition 11. A is finally derived from Γ at line l of a proof at a finite stage
s iff (i) A is the second element of line l, (ii) line l is not marked at stage s,
and (iii) every extension of the proof in which line l is marked can be further
extended in such a way that line l is unmarked.17
Definition 12. Γ ⊢MIO∩1 A (A is finally MIO∩1-derivable from Γ) iff A is
finally derived at a line of an MIO∩1-proof from Γ.
The disjunction (•(a → c) ∧ ¬(a → c)) ∨ (•(b → d) ∧ ¬(b → d)) is the only
minimal Dab-consequence of Γ4, i.e. the only minimal Dab-formula that is
CMIO1-derivable from Γ4. Consequently, the set Φ11(Γ4) = {{•(a → c) ∧
¬(a → c)}, {•(b → d) ∧ ¬(b → d)}} will remain stable in any extension of the
proof, so that lines 10 and 11 remain unmarked whatever happens next. By
Definition 11, out(c ∨ d) is finally derived from Γ4. By Definition 12,
Γ4 ⊢MIO∩1 out(c ∨ d). It is safely left to the reader to check that
Γ4 ⊬MIO∩1 outc, Γ4 ⊬MIO∩1 outd, and Γ4 ⊬MIO∩1 out(c ∧ d).
The normal selections strategy. We saw how the minimal abnormality
strategy is rather skeptical in its treatment of constrained outputs. In our
example we were able to derive the minimal Dab-consequence (•(a → c) ∧
¬(a → c)) ∨ (•(b → d) ∧ ¬(b → d)). This means that the two conditionals taken
together are not CMIO1-consistent with our given inputs and constraints. Since
the disjunction is minimal, this shows that each of a → c and b → d taken
individually is consistent with the inputs and constraints. Nevertheless, a line
whose formula is derived only on the condition {•(a → c) ∧ ¬(a → c)} or only
on the condition {•(b → d) ∧ ¬(b → d)} is marked according to minimal
abnormality (e.g., lines 6 and 7). This motivates another approach, according
to which an assumption is considered admissible in case its associated set of
conditionals is consistent with the given inputs and constraints. That is to say:
where ∆ = {•(A1 → B1) ∧ ¬(A1 → B1), . . . , •(An → Bn) ∧ ¬(An → Bn)} is the
condition of line l, the line is marked only if Dab(∆) is derived on the empty
condition, since this expresses that the set of conditionals {(A1 → B1), . . . ,
(An → Bn)} is not consistent with the given input and constraints.
Let us spell out this more credulous alternative by means of the lower limit
logic CMIO1. We now use the so-called normal selections strategy. The AL
that has CMIO1 as its LLL, Ω (as defined above) as its set of abnormalities,
and normal selections as its strategy is called MIO∪1. Thus, MIO∪1 differs
from MIO∩1 only in its use of a different strategy and, hence, a different
marking definition.
The marking definition for the normal selections strategy is straightforward:
Definition 13. Line l is marked at stage s iff, where ∆ is the condition of line
l, Dab(∆) has been derived on the empty condition at stage s.
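As a sketch, Definition 13 amounts to a simple membership test, assuming (as in the earlier sketches) that conditions and derived Dab-formulas are represented as sets of abnormality labels:

```python
def marked_normal_selections(cond, empty_condition_dabs):
    """Definition 13: a line with condition Delta is marked iff Dab(Delta)
    itself has been derived on the empty condition."""
    return any(set(dab) == set(cond) for dab in empty_condition_dabs)

# the proof from Gamma_4: line 9 derives Dab({!(a->c), !(b->d)}) on
# the empty condition; lines 6, 7 and 8 have the conditions below
AC, BD = "!(a->c)", "!(b->d)"
dabs = [{AC, BD}]
marks = [marked_normal_selections(c, dabs)
         for c in ({AC}, {BD}, {AC, BD})]
```

Only the last condition coincides with a derived Dab-set, so only line 8 is marked, in contrast with the minimal abnormality strategy.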
17 Definition 11 has a game-theoretic flavor to it. In [3], this definition is interpreted as a
two-player game in which the proponent has a winning strategy in case she has a reply to
every counterargument by her opponent.
We consider again the proof from Γ4, but this time mark lines according to
the normal selections strategy.

1   •(a → c)     PREM     ∅
2   •(b → d)     PREM     ∅
3   ina          PREM     ∅
4   inb          PREM     ∅
5   con¬(c ∧ d)  PREM     ∅
6   outc         1,3; RC  {•(a → c) ∧ ¬(a → c)}
7   outd         2,4; RC  {•(b → d) ∧ ¬(b → d)}
8   out(c ∧ d)   6,7; RU  {•(a → c) ∧ ¬(a → c), •(b → d) ∧ ¬(b → d)}  X9
9   (•(a → c) ∧ ¬(a → c)) ∨ (•(b → d) ∧ ¬(b → d))  1-5; RC  ∅
10  out(c ∨ d)   6; RU    {•(a → c) ∧ ¬(a → c)}
11  out(c ∨ d)   7; RU    {•(b → d) ∧ ¬(b → d)}

Lines 6, 7, 10 and 11 remain unmarked in view of Definition 13, as the
Dab-formulas corresponding to their conditions have not been derived on the
empty condition. Line 8 is marked since Dab({•(a → c) ∧ ¬(a → c), •(b → d) ∧
¬(b → d)}) is derived at line 9.
The stage-dependent and stage-independent criteria for derivability (Definitions
10 and 11) are equally applicable to MIO∪1.18 A syntactic consequence relation
for MIO∪1 is readily defined by adjusting Definition 12 as follows:
Definition 14. Γ ⊢MIO∪1 A (A is finally MIO∪1-derivable from Γ) iff A is
finally derived at a line of an MIO∪1-proof from Γ.
Since Γ4 ⊬CMIO1 •(a → c) ∧ ¬(a → c) and Γ4 ⊬CMIO1 •(b → d) ∧ ¬(b → d),
there is no possible extension of our proof in which any of lines 6, 7, 10 and
11 is marked. By Definition 14, Γ4 ⊢MIO∪1 outc, Γ4 ⊢MIO∪1 outd, Γ4 ⊢MIO∪1
out(c ∨ d), and Γ4 ⊬MIO∪1 out(c ∧ d).
8 Adaptive characterizations of constrained I/O-logics

Generic definition. We now move to a more abstract level, defining the
adaptive logics that characterize the constrained I/O-functions from Section 4.
Definition 15. Where † ∈ {∩, ∪}, MIO†R is the AL defined from
1. the lower limit logic CMIOR (see Definition 7);
2. the set of abnormalities Ω = {•(A → B) ∧ ¬(A → B) | A, B ∈ W};
3. the strategy:
   ◦ minimal abnormality if † = ∩
   ◦ normal selections if † = ∪
18 In fact, the definition of final derivability can be simplified, since a line
that is once marked will not be unmarked at any later stage of the proof
according to the normal selections strategy. Hence, we can simply define: A is
finally derived on an unmarked line l at stage s iff the proof cannot be
extended in such a way that line l is marked.
By adjusting the subscripts in the definitions of the generic inference rules
from the previous section, we readily obtain a proof theory for MIO†R. By
using the appropriate marking definition (see Definitions 9 and 13) and applying
Definitions 11 and 12, we obtain a consequence relation.
The following is proven in Appendix C:
Theorem 3. Where † ∈ {∩, ∪}, and A is CL-consistent:
ΓG,A,C ⊢MIO†R outA iff A ∈ out†R(G, A, C)
Similarly to what was remarked in the context of Theorem 2, our representation
Theorem 3 does not cover the border case where A is inconsistent. As explained
before, such cases are trivialized by our logics, whereas (in case (ID) is not
in R) neither out∩R(G, A, C) nor out∪R(G, A, C) need be trivial.19
Proof outline In the remainder of this section, we broadly explain how we
prove the equivalence set out in Theorem 3 between the adaptive consequence
relations on the one hand, and the constrained I/O-operations on the other.
The reader is kindly referred to Appendix C.3 for more details.
Let us first deal with the special case in which there are no maxfamilies.
Lemma 1. Where outR(∅, A) ∪ C is CL-inconsistent, A is CL-consistent, and
G is a set of I/O-pairs: ΓG,A,C ⊢CMIOR ⊥ and hence ΓG,A,C ⊢MIO∩R ⊥ and
ΓG,A,C ⊢MIO∪R ⊥.
Proof. Suppose outR(∅, A) ∪ C ⊢CL ⊥ and A ⊬CL ⊥. Hence there is a finite
C′ ⊆ C such that outR(∅, A) ⊢CL ¬⋀C′. By Theorem 2, Γ∅,A ⊢MIOR out¬⋀C′.
Hence, also Γ∅,A ⊢CMIOR out¬⋀C′ and, by (the contraposition of) (ROC),
Γ∅,A ⊢CMIOR ¬con⋀C′. By the monotonicity of CMIOR, (1) ΓG,A,C ⊢CMIOR
¬con⋀C′. Since C′ ⊆ C and by simple KD-properties, also (2) ΓG,A,C ⊢CMIOR
con⋀C′. By (1) and (2), ΓG,A,C ⊢CMIOR ⊥.
As a consequence, we know now that Theorem 3 holds whenever outR (∅, A)∪
C is CL-inconsistent. For the other case we need a more involved argument for
which it is useful to introduce some additional notation.
Notation. Let Θ ⊆f ∆ denote that Θ is a finite subset of ∆. Where ∆ ⊆f Ω,
we say that Dab(∆) is a Dab-consequence of Γ iff Γ ⊢CMIOR Dab(∆). It is a
minimal Dab-consequence of Γ iff there is no ∆′ ⊂ ∆ such that Γ ⊢CMIOR
Dab(∆′). Let Σ(Γ) = {∆ | Dab(∆) is a minimal Dab-consequence of Γ}. Let
Φ(Γ) be the set of all minimal choice sets of Σ(Γ). Finally, where ∆ is a set of
formulas of the form A → B, let Ω(∆) = {•(A → B) ∧ ¬(A → B) | A → B ∈ ∆}.
Where G is a set of I/O-pairs, we will simply write Ω(G) instead of
Ω({A → B | (A, B) ∈ G}).
Let us start with the operations out∩R and the logics MIO∩R. The first
theorem we need has been established for all ALs with the minimal abnormality
strategy in [2]:
19 Take the simple example where G = C = ∅, in which case out∩R(G, A, C) =
out∪R(G, A, C) = CnCL(∅).
Theorem 4 ([2], Theorem 8). Γ ⊢MIO∩R A iff for all Θ ∈ Φ(Γ), there is a
∆ ⊆f Ω − Θ such that Γ ⊢CMIOR A ∨ Dab(∆).
For the specific case where Γ = ΓG,A,C for some G, A and C, we can further
strengthen Theorem 4 to Corollary 1 in view of the following lemma:
Lemma 2. If ΓG,A,C ⊢CMIOR A ∨ Dab(∆), then ΓG,A,C ⊢CMIOR A ∨ Dab(∆ ∩
Ω(G)).
Corollary 1. ΓG,A,C ⊢MIO∩R A iff for all Θ ∈ Φ(ΓG,A,C), there is a ∆ ⊆f
Ω(G) − Θ such that ΓG,A,C ⊢CMIOR A ∨ Dab(∆).
The second property we need is the following:
Theorem 5. Where outR(∅, A) ∪ C and A are CL-consistent sets:
Φ(ΓG,A,C) = {Ω(G − H) | H ∈ maxfamily(G, A, C)}
The proof of this theorem is intricate; see Appendix C.2. For an example,
consider the premise set Γ5 = {•(a → c), •(b → d), •((a ∧ b) → e), ina, inb,
con¬(c ∧ d)}, and the corresponding sets A = {a, b}, G = {(a, c), (b, d),
(a ∧ b, e)}, C = {¬(c ∧ d)}. As we saw before, maxfamily(G, A, C) = {{(a, c),
(a ∧ b, e)}, {(b, d), (a ∧ b, e)}}. On the other hand, Φ(Γ5) = {{•(b → d) ∧
¬(b → d)}, {•(a → c) ∧ ¬(a → c)}}. We leave it to the reader to check that
this conforms to Theorem 5.
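As a sanity check on this example, the maxfamilies can be computed by brute force over valuations. The sketch below makes simplifying assumptions: formulas are Python predicates over valuations, and the consistency of outR(H, A) ∪ C is approximated by checking the detached heads of H against C, which is enough for this small example (it is not a general implementation of the output operations).

```python
from itertools import combinations, product

ATOMS = ["a", "b", "c", "d", "e"]

def models(formulas):
    """Yield every valuation (dict atom -> bool) satisfying all formulas."""
    for bits in product([False, True], repeat=len(ATOMS)):
        v = dict(zip(ATOMS, bits))
        if all(f(v) for f in formulas):
            yield v

def consistent(formulas):
    return next(models(formulas), None) is not None

def entails(premises, conclusion):
    return all(conclusion(v) for v in models(premises))

# the example: G = {(a, c), (b, d), (a & b, e)}, A = {a, b}, C = {~(c & d)}
G = [(lambda v: v["a"], lambda v: v["c"]),
     (lambda v: v["b"], lambda v: v["d"]),
     (lambda v: v["a"] and v["b"], lambda v: v["e"])]
A = [lambda v: v["a"], lambda v: v["b"]]
C = [lambda v: not (v["c"] and v["d"])]

def ok(idxs):
    # crude stand-in for "out_R(H, A) together with C is consistent":
    # only the detached heads of H are tested against C
    heads = [G[i][1] for i in idxs if entails(A, G[i][0])]
    return consistent(heads + C)

subsets = [frozenset(s) for r in range(len(G) + 1)
           for s in combinations(range(len(G)), r)]
good = [s for s in subsets if ok(s)]
# maxfamily: the subset-maximal "good" sets of (indices of) I/O-pairs
maxfamily = sorted(sorted(s) for s in good if not any(s < t for t in good))
```

With pairs indexed 0, 1, 2 for (a, c), (b, d), (a ∧ b, e), the computation yields the two maxfamilies {0, 2} and {1, 2}, in line with the example above.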
The third ingredient for the meta-proof is the following theorem, which links
outR to CMIOR :
Theorem 6. Where H ⊆ G, and outR (H, A) ∪ C and A are CL-consistent
sets: A ∈ outR (H, A) iff there is a ∆ ⊆f Ω(H) such that ΓG,A,C ⊢CMIOR
outA ∨ Dab(∆).
With these properties at our disposal, the proof of Theorem 3 for the case
† = ∩ becomes relatively short. Suppose that A and outR(∅, A) ∪ C are
CL-consistent. The following five properties are equivalent by Corollary 1
(items 1 and 2), Theorem 5 (items 2 and 3), Theorem 6 (items 3 and 4) and
Definition 5 (items 4 and 5):
1. ΓG,A,C ⊢MIO∩R outA
2. for all Θ ∈ Φ(ΓG,A,C), there is a ∆ ⊆f Ω(G) − Θ such that ΓG,A,C ⊢CMIOR
outA ∨ Dab(∆)
3. for all H ∈ maxfamily(G, A, C), there is a ∆ ⊆f Ω(H) such that ΓG,A,C ⊢CMIOR
outA ∨ Dab(∆)
4. for all H ∈ maxfamily(G, A, C), A ∈ outR(H, A)
5. A ∈ out∩R(G, A, C)
For the full join constrained output functions out∪R, we can apply basically
the same reasoning, relying on the following variant of Theorem 4:
Theorem 7 ([30], Theorem 2.8.3). Γ ⊢MIO∪R A iff for some Θ ∈ Φ(Γ), there is
a ∆ ⊆f Ω − Θ such that Γ ⊢CMIOR A ∨ Dab(∆).
In view of this fact, it suffices to replace "for all" by "for some" and the
reference to Definition 5 with one to Definition 6 in the above proof, in order
to obtain a proof for the case where † = ∪.
9 Modularity and variants
Recall that all I/O-operations are obtained by varying two parameters: (i) the
way the maxfamily is used when generating output (join vs. meet of the
outfamily), and (ii) the rules under which the set of generators G is closed.
Both are mirrored by a specific feature of our systems. The adaptive strategy
provides the counterpart of (i), whereas the rules mentioned in (ii) are
translated literally into inference rules for conditionals of the lower limit
logic. Hence, structural properties of the original system are mirrored by
structural properties of the system into which it is translated. This makes our
translation procedure natural and unifying. As a result, one may alter these
frameworks in various straightforward ways, while preserving the representation
theorems.
Below, we briefly discuss additional variations with respect to (i) and (ii),
showing how these are translated into our adaptive framework. We will indicate
why these variations seem interesting, but leave their full exploration for a later
occasion. Additional options for variation will be discussed in Section 11.
Free output. Consider the following example: A = {a, b}, G = {(a, c), (b, d),
(a ∧ b, e)} and C = {¬(c ∧ d)}. As explained before, both e and c ∨ d are in
out∩1(A, G, C). However, there is a difference between the two outputs in view
of the way they are generated from the two maxfamilies H = {(a, c), (a ∧ b, e)}
and H′ = {(b, d), (a ∧ b, e)}.
Using terminology from default logic, conclusions such as c ∨ d may be called
floating conclusions [15]. In current terms, they can be defined as those
formulas in out∩R(A, G, C) which cannot be generated from the I/O-pairs that
are shared by every member of maxfamilyR(A, G, C). Note that c ∨ d can only be
obtained by applying the pairs (a, c), resp. (b, d), neither of which is in
H ∩ H′. In contrast, e is obtained by the application of a conditional
(a ∧ b, e) which is contained in both H and H′.
The status of floating conclusions has been the subject of vigorous debate.
Following [15], some have argued that we should accept them just as any other
non-monotonic consequence (see also [25]), whereas others have come up with
various examples in order to show they are problematic [12]. Consequently,
several non-monotonic logics have been modified in order to allow or disallow
the derivation of floating conclusions (e.g., [12, 24]).
It is not our aim to take a stance in this long-lasting debate. For present
purposes, note that there is a straightforward alternative to Definition 5 in
view of which floating conclusions are avoided:20
Definition 16 (Free output).
out⋓R(A, G, C) =df outR(⋂ maxfamilyR(A, G, C), A)

Our nomenclature echoes that of [26], where a similar trick is applied in a
more narrow setting.21 In the above example, we have e ∈ out⋓1(A, G, C) and
c ∨ d ∉ out⋓1(A, G, C).
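The intersection step in Definition 16 can be sketched as follows. This is only an illustration of the selection of generators, not of the full output computation; I/O-pairs are represented as plain Python tuples.

```python
def shared_pairs(maxfamilies):
    """Free output draws only on the I/O-pairs common to every maxfamily
    (the intersection in Definition 16)."""
    common = set(maxfamilies[0])
    for h in maxfamilies[1:]:
        common &= set(h)
    return common

# the example: H = {(a, c), (a & b, e)} and H' = {(b, d), (a & b, e)}
H1 = {("a", "c"), ("a&b", "e")}
H2 = {("b", "d"), ("a&b", "e")}
common = shared_pairs([H1, H2])
```

Only (a ∧ b, e) survives the intersection, which is why e is in the free output while the floating conclusion c ∨ d is not.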
The operation of free output can be characterized in our adaptive framework
by using the so-called reliability strategy. Let us illustrate the basic idea
behind this strategy in terms of the above example. We extend the last proof
from the previous section, deriving oute from Γ5 = Γ4 ∪ {•((a ∧ b) → e)}:

1   •(a → c)        PREM       ∅
2   •(b → d)        PREM       ∅
3   ina             PREM       ∅
4   inb             PREM       ∅
5   con¬(c ∧ d)     PREM       ∅
6   outc            1,3; RC    {•(a → c) ∧ ¬(a → c)}  X14
7   outd            2,4; RC    {•(b → d) ∧ ¬(b → d)}  X14
8   out(c ∧ d)      6,7; RU    {•(a → c) ∧ ¬(a → c), •(b → d) ∧ ¬(b → d)}  X14
9   (•(a → c) ∧ ¬(a → c)) ∨ (•(b → d) ∧ ¬(b → d))  1-5; RC  ∅
10  out(c ∨ d)      6; RU      {•(a → c) ∧ ¬(a → c)}  X14
11  out(c ∨ d)      7; RU      {•(b → d) ∧ ¬(b → d)}  X14
12  •((a ∧ b) → e)  PREM       ∅
13  in(a ∧ b)       3,4; RU    ∅
14  oute            12,13; RC  {•((a ∧ b) → e) ∧ ¬((a ∧ b) → e)}
The idea behind marking for the reliability strategy is straightforward: a
line is marked at stage s iff some member of its condition occurs in a minimal
Dab-formula at stage s. Formally:
Definition 17. Where Dab(∆1 ), Dab(∆2 ), . . . are the minimal Dab-formulas
derived at stage s, Us (Γ) = ∆1 ∪∆2 ∪. . . is the set of formulas that are unreliable
at stage s.
Definition 18. Where ∆ is the condition of line l, line l is marked at stage s
iff ∆ ∩ Us (Γ) 6= ∅.
Using this strategy, not only are lines 6-8 marked at stage 14 (as was the case
with the minimal abnormality strategy), but so are lines 10 and 11. However,
line 14 is not marked.
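Marking for reliability reduces to an intersection test against the set of unreliable formulas. The sketch below again represents conditions and minimal Dab-sets as Python sets of abnormality labels (the "!" strings are hypothetical shorthand):

```python
def marked_reliability(cond, minimal_dabs):
    """Definitions 17-18: a line is marked iff its condition intersects
    U_s, the union of all minimal Dab-sets derived at the stage."""
    unreliable = set().union(*minimal_dabs) if minimal_dabs else set()
    return bool(set(cond) & unreliable)

# stage 14 of the proof above; line 9 supplies the only minimal Dab-set
AC, BD, ABE = "!(a->c)", "!(b->d)", "!((a&b)->e)"
dabs = [{AC, BD}]
marks = {"line6": marked_reliability({AC}, dabs),
         "line10": marked_reliability({AC}, dabs),
         "line14": marked_reliability({ABE}, dabs)}
```

Both abnormalities in the derived Dab-formula are unreliable, so every line whose condition mentions one of them is marked, while line 14 survives: exactly the behaviour of free output.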
20 This proposal is completely analogous to the one from [12, Chapter 7], where a default
logic is proposed that invalidates floating conclusions.
21 More precisely, the Free Rescher-Manor consequence relation from [26] reduces to the
specific case where A = C = ∅ and all members of G are of the form (⊤, A). We leave the
verification of this claim to the interested reader.
We now move to the representation theorem for this variant. Let MIO⋓R be
the AL defined from (i) the lower limit logic CMIOR, (ii) the set of
abnormalities Ω = {•(A → B) ∧ ¬(A → B) | A, B ∈ W}, and (iii) the reliability
strategy. As before, the adaptive proofs are dynamic, so we need to apply
Definitions 11 and 12 to obtain a stable consequence relation for the logics
MIO⋓R. The following holds:22
Theorem 8. Where A is CL-consistent: A ∈ out⋓R(A, G, C) iff ΓA,G,C ⊢MIO⋓R
outA.
Let us make one last remark before leaving the matter. Recall that the
strategy only affects the marking definition, whereas the generic inference
rules are independent of it. Hence, simply by considering different marking
definitions for an adaptive proof, one can switch between the various
strategies to check whether (according to our insights at the present stage of
the proof) some conclusion is, e.g., a member of the free constrained output,
or whether it only follows by the full meet constrained output.
Other rules for conditionals. Given the existing variation in terms of rules
for I/O-pairs, one may also consider additional rule systems for conditionals.
Such systems can be obtained by skipping some of the rules or by adding
others. Makinson and van der Torre make a similar remark in Section 9 of their
[17], where they discuss several other rules, e.g. contraposition (CONT), (full)
transitivity (T), and conditionalisation (COND):23

If (A, B), then (¬B, ¬A).            (CONT)
If (A, B) and (B, C), then (A, C).   (T)
If (A, B), then (⊤, A ⊃ B).          (COND)
Both in the unconstrained and in the constrained case, our modal framework
can easily deal with the addition of such extra rules. That is, the
representation theorems mentioned in the preceding sections can be shown to
hold for arbitrary R as long as R contains at least the rules (SI), (WO) and
(AND), and only rules of the form: "If A1 ⊢CL B1, . . . , An ⊢CL Bn and
(C1, D1), . . . , (Cm, Dm), then (E, F)". The rules in R are translated into
rules for conditionals in the associated lower limit logic CMIOR. Thus, we can
characterize the operations out∩R, out∪R and out⋓R in terms of adaptive proof
theories.
10 Expressive power
Inputs, conditionals, outputs and constraints are expressed at the object level
in our modal systems. As a consequence, we can allow for truth-functional
operations on these types of information. This in turn allows us to express
several crucial aspects of reasoning with conditionals at the object level,
which in the original I/O-framework can only be spelled out at the meta-level.
Let us briefly mention some of these.

22 See Appendix D for the proof.
23 Note that (T) follows from (CT) and (SI).
First, as shown in the preceding section, the enriched language allows us
to explicate the dynamic aspects of reasoning with conditionals in a natural
way. Note that, using boolean operators and the connective •, we can easily
express at the object-level that a given conditional is or is not activated, resp.
selected. Likewise, we can express that there is a conflict between some of the
triggered conditionals, in view of the constraints at hand – in our terminology,
this corresponds to the derivability of a Dab-formula from the premise set (see
Section 7).
Second, one can express certain bridging principles in the modal framework.
For instance, from the viewpoint of default logic, it is natural to assume that
any input A also generates a constraint A. Accordingly, one may extend our
logics with an axiom inA ⊃ conA. For similar reasons, one may include axioms
such as conA ⊃ inA or conA ⊃ outA. In each of these cases, the advantage is
clear: instead of having to “prepare” one’s premises (e.g. such that A ⊆ C), the
logic itself implements this presupposition.
Third, by allowing for boolean combinations of conditionals, we can model
cases of 'imperfect information'. For instance, suppose a student does not know
whether, if he subscribes to a certain course (a), he will have to take an oral
exam (b) or rather a written exam (c). In that case, modeling the situation
by (a, b ∨ c) would be inappropriate: the fact that the student does not know
exactly what the exam will be like does not imply that he will have the choice
between either type of exam. In our framework, we can formalize such a case
by (a → b) ∨ (a → c).
Now suppose our student moreover knows that, when taking one of the two
types of exams, he is obliged to bring his student ID (d). However, he does not
remember exactly for which of the two types of exams this is obligatory. His
information can hence be modeled by the following premise set:

Γ = {(a → b) ∨ (a → c), (b → d) ∨ (c → d)}

Clearly, in such a situation one is not entitled to infer that the student
is obliged to bring his student ID. However, if the information known by the
student is modeled by the I/O-pairs (a, b ∨ c), (b ∨ c, d), then (a, d) follows
as soon as the I/O-logic of our choice validates the rules (SI) and (CT). On
the other hand, for none of the logics MIOR presented in Section 3 do we have
Γ ⊢MIOR a → d, which is as expected.
11 Outlook
Just like I/O-logics, our adaptive characterizations can easily be varied, enriched, and adjusted for different application contexts. This has already been
explicated for some straightforward variants in Section 9. Let us in this outlook
give some examples of how the presented framework can be enhanced.
Priorities. In some applications it may be useful to introduce priorities among
the I/O-pairs. This idea is not new in the context of logics based on maximal
consistent subsets. For instance, in Brewka's 'preferred subtheories' approach
[8], priorities among formulas are taken into account when selecting the
maximal consistent subsets that form the basis for his consequence relations.
A similar procedure can be realized by a slight generalization of the framework
presented in this paper. Instead of modeling all given I/O-conditionals on a
par by means of •, we encode priorities among them by preceding them with •i
(i ∈ N), where i indicates the priority we attach to the conditional. The
abnormalities are then represented by the set Ω = {•i(A → B) ∧ ¬(A → B) |
i ≥ 1, A, B ∈ W}. Moreover, we can make use of so-called lexicographic ALs
[33, 32] that take care of the priorities in a natural way.
Quantitative considerations. In other applications quantitative considerations
may also play a role: if we have two maxfamilies to choose from, but in one
(significantly) more I/O-pairs are 'violated', we may have good reason to
choose the option that violates fewer norms. A format for ALs that is
expressive enough to also embed quantitative variants has recently been
investigated in [30, Chapter 5]. In view of these results it is easy to devise
variants of the logics presented in this paper that implement quantitative
considerations.
Inconsistent facts. I/O-logics only deal with conflicts among the I/O-pairs,
but they are obviously not designed for applications where the factual input
may also be conflicting. This can easily be fixed in a way that is coherent
with one of the main mechanisms behind I/O-logics, namely working with maximal
consistent families. The idea is simply to first form maximal consistent
subsets of the factual input and subsequently to apply the respective
I/O-function to each of these sets. Both steps can straightforwardly be
integrated in an AL framework that generalizes the framework presented in this
paper by combining it with the techniques presented in [22].
Permissions. We did not discuss permissions in this paper. Let us sketch
two straightforward ways in which they can be modeled within our framework,
without engaging in conceptual issues. More involved accounts and a more
detailed analysis are postponed for future work. One simple way to embed them
in our approach is to model a given permission PAB ("if A, you are allowed to
bring about B") by •¬(A → ¬B). This mirrors the usual 'interdefinability
principle' according to which PAB is encoded by ¬OA¬B. In order to take
permissions into account when we defeasibly activate norms, we need to
generalize the set of abnormalities to Ω = {•(A → B) ∧ ¬(A → B) | A, B ∈ W} ∪
{•¬(A → B) ∧ (A → B) | A, B ∈ W}. Suppose we have Γ = {•(⊤ → ¬p),
•¬(q → ¬p), •(q → r), •¬(q → ¬s), inq}. This generates two minimal choice
sets: {•(⊤ → ¬p) ∧ ¬(⊤ → ¬p)} and {•¬(q → ¬p) ∧ (q → ¬p)}. Hence, by minimal
abnormality we will get outr but not out¬p.
This approach is still limited, since we do not yet derive 'output-permissions'.
E.g., in our example above we would expect to derive the 'output-permission' s.
By slightly changing the way we express conditionals, we can overcome this
obstacle: instead of encoding a conditional obligation by A → B, we may use
A → outB. Similarly, for a conditional permission we may use A → ¬out¬B (where
we make use of the interdefinability principle according to which an
'output-permission' PA is represented by ¬out¬A). In this approach the
detachment principle has to be adjusted: ⊢ ((A → B) ∧ inA) ⊃ B. Our example
above is now encoded by {•(⊤ → out¬p), •(q → ¬out¬p), •(q → outr),
•(q → ¬out¬s), inq}. Now we get, for instance, outr and ¬out¬s via both
minimal abnormality and reliability.
Yet another more involved way to implement permissions is in terms of
‘derogations’ that regulate the applicability and range of I/O-pairs. This has
been spelled out by Stolpe in [28]. It will be part of future work to see whether
and how Stolpe’s approach and other proposals [16, 6] can be modeled within
our framework.24
Going predicative. Using our framework opens the prospect of applying the
constrained I/O-mechanism to predicate logic: i.e., to allow for instance for
input of the form in(∀xP(x)) or conditionals of the form (∀xP(x)) → (∃xQ(x))
or ∀x(P(x) → Q(x)). The reason why the original (constrained) I/O-functions
are suboptimal for the explication of defeasible reasoning in this context is
the undecidability of predicate logic: there is no effective way to perform the
consistency check that is necessary to calculate the maxfamilies.25 The dynamic
proof theory of ALs comes in handy, since it does not force us to make
consistency checks 'on the spot'. Instead, we can defeasibly assume that a set
of conditionals is consistent. E.g., in an MIO∪R-proof from ΓG,A,C we may
derive outA on the condition ∆ = {•(B → C) ∧ ¬(B → C) | (B, C) ∈ H}, where
H ⊆ G. This means that we derive outA on the assumption that H is consistent
with the given input and the given constraints (i.e., that it is a subset of a
maxfamily of ⟨G, A, C⟩). In case this does not hold, we know that we will
eventually be able to derive Dab(∆) on the empty condition and hence we will
be forced to mark the line. We will explore this possibility further in future
research.26
Complexity. Let us close with another remark on future research, one that
concerns cross-fertilization. The complexity of I/O-logics has, to the best of
our knowledge, not yet been investigated. The situation is different for ALs
(e.g., [35, 23]). In future research we will investigate whether the complexity
results for ALs can be transferred to I/O-logics.
12 Conclusion

We reconstructed I/O-logics as modal adaptive logics, and proved this
reconstruction to be equivalent to the original definition. Apart from the
purely technical interest of translating systems from one formal framework to
another, we argued that our characterization has some additional advantages.
24 One of the reasons why it is not straightforward to implement Stolpe’s approach within
our framework is that it makes use of a contraction operation.
25 A similar observation has been made by Horty [13, p. 50] in the context of his dyadic
enrichment of an older system by Van Fraassen [9] that is based on consistency considerations.
26 The merits of the dynamic proofs of ALs when a positive test for consistency is not
available has been pointed out before, e.g., by Batens in the context of ALs for inductive
generalizations [1].
First, our logics come with a proof theory that mirrors various types of
defeasible reasoning. Both of Pollock’s notions of synchronic and diachronic
defeasibility are modeled in adaptive proofs. Moreover, the proof theory can
deal with the case-specific retraction of conclusions in the light of new input
(Section 5).
Second, making use of techniques from the adaptive logics framework, we obtain new and interesting variations of the original I/O-functions. This can be achieved either by varying the adaptive strategy or by varying the rules of the lower limit logic (Section 9).
Third, our reconstruction allows for truth-functional operations on inputs,
conditionals, outputs and constraints. As a consequence, the resulting systems
are more expressive (Section 10).
References
[1] Diderik Batens. The need for adaptive logics in epistemology. In Logic, epistemology, and the unity of science, pages 459–485. Springer, 2004.
[2] Diderik Batens. A universal logic approach to adaptive logics. Logica Universalis,
1:221–242, 2007.
[3] Diderik Batens. Towards a dialogic interpretation of dynamic proofs. In Cédric
Dégremont, Laurent Keiff, and Helge Rückert, editors, Dialogues, Logics and
Other Strange Things. Essays in Honour of Shahid Rahman, pages 27–51. College
Publications, London, 2009.
[4] Mathieu Beirlaen and Christian Straßer. Nonmonotonic reasoning with normative
conflicts in multi-agent deontic logic. Journal of Logic and Computation, 2012.
doi:10.1093/logcom/exs059.
[5] Mathieu Beirlaen, Christian Straßer, and Joke Meheus. An inconsistency-adaptive
deontic logic for normative conflicts. Journal of Philosophical Logic, 42(2):285–
315, 2013.
[6] Guido Boella and Leendert van der Torre. Permissions and obligations in hierarchical normative systems. In Proceedings of the 9th international conference on
Artificial intelligence and law, pages 109–118. ACM, 2003.
[7] Guido Boella and Leendert van der Torre. Institutions with a hierarchy of authorities in distributed dynamic environments. Artificial Intelligence and Law,
16:53–71, 2008.
[8] Gerhard Brewka. Preferred subtheories: an extended logical framework for default
reasoning. In IJCAI 1989, pages 1043–1048, 1989.
[9] Bas C. Van Fraassen. Values and the heart’s command. Journal of Philosophy,
70:5–19, 1973.
[10] Lou Goble. Deontic logic (adapted) for normative conflicts. Logic Journal of
IGPL, 2013. doi:10.1093/jigpal/jzt022.
[11] Jörg Hansen, Gabriella Pigozzi, and Leon van der Torre. Ten philosophical problems in deontic logic. In Guido Boella, Leon van der Torre, and Harko Verhagen, editors, Normative Multi-agent Systems, number 07122 in Dagstuhl Seminar Proceedings, Dagstuhl, Germany, 2007. Internationales Begegnungs- und
Forschungszentrum für Informatik (IBFI), Schloss Dagstuhl, Germany.
[12] John Horty. Reasons as Defaults. Oxford University Press, 2012.
[13] John F. Horty. Moral dilemmas and nonmonotonic logic. Journal of Philosophical Logic, 23(1):35–65, 1994.
[14] Sten Lindström. A semantic approach to nonmonotonic reasoning: inference
operations and choice. Uppsala Prints and Reprints in Philosophy, 10, 1994.
[15] David Makinson and Karl Schlechta. Floating conclusions and zombie paths: two
deep difficulties in the “directly skeptical” approach to defeasible inheritance nets.
Artificial Intelligence, 48:199–209, 1991.
[16] David Makinson and Leendert van der Torre. Permission from an input/output
perspective. Journal of Philosophical Logic, 32(4):391–416, 2003.
[17] David Makinson and Leon van der Torre. Input/output logics. Journal of Philosophical Logic, 29:383–408, 2000.
[18] David Makinson and Leon van der Torre. Constraints for input/output logics.
Journal of Philosophical Logic, 30:155–185, 2001.
[19] David Makinson and Leon van der Torre. Permission from an input/output perspective. Journal of Philosophical Logic, 32(4):391–416, 2003.
[20] David Makinson and Leon van der Torre. What is input/output logic? input/output logic, constraints, permissions. In Guido Boella, Leon van der Torre,
and Harko Verhagen, editors, Normative Multi-agent Systems, number 07122
in Dagstuhl Seminar Proceedings, Dagstuhl, Germany, 2007. Internationales
Begegnungs- und Forschungszentrum für Informatik (IBFI), Schloss Dagstuhl,
Germany.
[21] Joke Meheus, Mathieu Beirlaen, and Frederik Van De Putte. Avoiding deontic
explosion by contextually restricting aggregation. In Guido Governatori and Giovanni Sartor, editors, DEON (10th International Conference on Deontic Logic in
Computer Science), volume 6181 of Lecture Notes in Artificial Intelligence, pages
148–165. Springer, 2010.
[22] Joke Meheus, Christian Straßer, and Peter Verdée. Which style of reasoning to
choose in the face of conflicting information? Forthcoming in the Journal of Logic
and Computation.
[23] Sergei P. Odintsov and Stanislav O. Speranski. Computability issues for adaptive
logics in expanded standard format. Forthcoming.
[24] John Pollock. Defeasible reasoning. In Jonathan E. Adler and Lance J. Rips,
editors, Reasoning. Studies of Human Inference and Its Foundations, pages 451–
470. Cambridge University Press, 2008.
[25] Henry Prakken. Intuitions and the modelling of defeasible reasoning: some case
studies. In Salem Benferhat and Enrico Giunchiglia, editors, NMR, pages 91–102,
2002.
[26] Nicholas Rescher and Ruth Manor. On inference from inconsistent premises.
Theory and Decision, 1:179–217, 1970.
[27] Lutz Schröder and Dirk Pattinson. Rank-1 modal logics are coalgebraic. Journal
of Logic and Computation, 5(20):1113–1147, December 2010.
[28] Audun Stolpe. A theory of permission based on the notion of derogation. Journal
of Applied Logic, 8(1):97–113, 2010.
[29] Christian Straßer. A deontic logic framework allowing for factual detachment.
Journal of Applied Logic, 9(1):61–80, 2011.
[30] Christian Straßer. Adaptive Logic and Defeasible Reasoning. Applications in Argumentation, Normative Reasoning and Default Reasoning. Springer, 201x.
[31] Christian Straßer, Mathieu Beirlaen, and Joke Meheus. Tolerating deontic conflicts by adaptively restricting inheritance. Logique & Analyse, (219):477–506,
2012.
[32] Frederik Van De Putte and Christian Straßer. Extending the standard format of adaptive logics to the prioritized case. Logique et Analyse, 55(220):601–641, 2012.
[33] Frederik Van De Putte and Christian Straßer. A logic for prioritized normative reasoning. Journal of Logic and Computation, 23(3):563–583, 2013. DOI: 10.1093/logcom/exs008.
[34] Frederik Van De Putte and Peter Verdée. The dynamics of relevance: Adaptive
belief revision. Synthese, 187(1):1–42, 2012.
[35] Peter Verdée. Adaptive logics using the minimal abnormality strategy are Π¹₁-complex. Synthese, 167:93–104, 2009.
APPENDIX
General Remark: In this appendix we assume a fixed (arbitrary) set of
rules R for I/O-pairs, which includes (Z), (SI), (WO) and (OR). Hence we do not
restrict ourselves to the rules that are considered in Section 2. The considered
rules have the form: If A1 ⊢CL B1 , . . . , An ⊢CL Bn and (C1 , D1 ), . . . , (Cm , Dm ),
then (E, F ).
We use the following notations. Where Γ ⊆ W and π ∈ {in, out, con},
Γπ = {πA | A ∈ Γ}. Where G is a set of I/O-pairs, G → = {A → B | (A, B) ∈ G}.
A Proof of Theorem 2
The proofs of the following two facts are easy and left to the reader.
Fact 3. (A, B) ∈ GR iff G → ⊢MIOR A → B.
Fact 4. A ∈ CnCL (A) iff Ain ⊢MIOR inA.
Theorem 9. If A ∈ outR (G, A), then ΓG,A ⊢MIOR outA.
Proof. Suppose A ∈ outR (G, A). So there is a B ∈ CnCL (A) such that (B, A) ∈
GR . By Facts 3 and 4 respectively, G → ⊢MIOR B → A and Ain ⊢MIOR inB. By
the monotonicity of MIOR and (DET), ΓG,A ⊢MIOR outA.
Fact 5. If (ID) is in R and outR (G, A) is CL-consistent, then A is CL-consistent.
Theorem 10. Where A is CL-consistent or (ID) is in R: If ΓG,A ⊢MIOR outA, then A ∈ outR (G, A).
Proof. The case in which outR (G, A) is CL-inconsistent is straightforward. Suppose thus that outR (G, A) is CL-consistent. Let A be CL-consistent or (ID) be in R. In either case, A is consistent (for the latter case, by Fact 5). Suppose A ∉ outR (G, A). Let Γ = Γ1 ∪ Γ2 ∪ Γ3 , where
Γ1 = {inB | B ∈ CnCL (A)} ∪ {¬inC | C ∈ W − CnCL (A)}
Γ2 = {B → C | (B, C) ∈ GR } ∪ {¬(D → E) | (D, E) ∈ W × W − GR }
Γ3 = {outB | B ∈ outR (G, A)} ∪ {¬outC | C ∈ W − outR (G, A)}
Note that Γ is CL-consistent.27 Let Γ′ be a maximal CL-consistent extension of Γ. Note that (†) B ⊃ C ∈ Γ′ iff B ∉ Γ′ or C ∈ Γ′, and (‡) ¬B ∈ Γ′ iff B ∉ Γ′. We now prove for all D ∈ W:
Γ′ ⊢MIOR D iff D ∈ Γ′ (1)
27 This is an immediate consequence of the construction and the fact that in, out, and →
have no meaning in CL. At this point in the proof, it may well be that Γ is inconsistent w.r.t.
MIOR . However, we will later show why this is not the case.
It suffices to show that (i) whenever D is a MIOR -axiom, then D ∈ Γ′ and (ii)
Γ′ is closed under all the MIOR -rules.
Ad (i). For the CL-axioms, this follows immediately in view of the construction. So we move on to the KD-axioms for in.
Ad (Kin). in(B ⊃ C) ⊃ (inB ⊃ inC) ∈ Γ′ iff [by (†)] in(B ⊃ C) ∉ Γ′ or inB ∉ Γ′ or inC ∈ Γ′ iff [by (‡)] ¬in(B ⊃ C) ∈ Γ′ or ¬inB ∈ Γ′ or inC ∈ Γ′ iff [by the construction] B ⊃ C ∉ CnCL (A) or B ∉ CnCL (A) or C ∈ CnCL (A). The latter holds by modus ponens.
Ad (Din). inB ⊃ ¬in¬B ∈ Γ′ iff [by (†)] inB ∉ Γ′ or ¬in¬B ∈ Γ′ iff [by (‡)] ¬inB ∈ Γ′ or ¬in¬B ∈ Γ′ iff [by the construction] B ∉ CnCL (A) or ¬B ∉ CnCL (A). The latter holds by the consistency of A.
The reasoning for out is similar. For (Kout), we rely on Theorem 1. For (Dout), we rely on the consistency of outR (G, A), which follows from Theorem 1 and the supposition that A ∉ outR (G, A).
For (DET), we have: inB ⊃ ((B → C) ⊃ outC) ∈ Γ′ iff [by (†)] inB ∉ Γ′ or B → C ∉ Γ′ or outC ∈ Γ′ iff [by (‡) and the construction] B ∉ CnCL (A) or (B, C) ∉ GR or C ∈ outR (G, A). The latter holds by Definition 1.
Ad (ii). Immediate in view of the construction.
By (1), Γ′ ⊬MIOR ⊥. Hence, since ¬outA ∈ Γ′, Γ′ ⊬MIOR outA. Note that ΓG,A ⊆ Γ′. By the monotonicity of MIOR, ΓG,A ⊬MIOR outA.
Theorem 2 is an immediate consequence of Theorems 9 and 10.
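To make the output operation that these theorems characterize concrete, here is a small brute-force sketch in Python of the simple-minded output operation out1(G, A) = CnCL(G(CnCL(A))) of [17], a special case of outR used here only for illustration. Formulas are encoded as Python predicates over valuations, CL-consequence is decided by truth tables over a fixed set of atoms, and all function names are our own.

```python
from itertools import product

def valuations(atoms):
    """All truth-value assignments over the given atoms."""
    return [dict(zip(atoms, vs))
            for vs in product([False, True], repeat=len(atoms))]

def entails(atoms, premises, conclusion):
    """Brute-force CL-consequence: every valuation satisfying all
    premises also satisfies the conclusion."""
    return all(conclusion(v) for v in valuations(atoms)
               if all(p(v) for p in premises))

def in_out1(atoms, G, A, B):
    """Test B ∈ out1(G, A) = Cn(G(Cn(A))): collect the heads of all
    pairs whose body is a CL-consequence of the input A, then check
    whether these heads jointly entail B."""
    heads = [head for (body, head) in G if entails(atoms, A, body)]
    return entails(atoms, heads, B)

# Toy example: one conditional (p, q) and input {p}.
atoms = ['p', 'q', 'r']
G = [(lambda v: v['p'], lambda v: v['q'])]
A = [lambda v: v['p']]
print(in_out1(atoms, G, A, lambda v: v['q']))  # True: q is output
print(in_out1(atoms, G, A, lambda v: v['r']))  # False: r is not
```

Richer operations outR would require closing G under the respective rules first, and the truth-table check only scales to a handful of atoms; the sketch is meant purely as an aid to intuition.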
B Some Properties of CMIOR
In this section, we establish some properties of the systems CMIOR which will
be helpful for establishing the representation theorems in the next two sections.
The properties and their proofs are strongly linked to those from the previous
section.
Notation: To avoid clutter, we shall use ⊢ to abbreviate ⊢CMIOR in the
remainder of this appendix.
By Theorem 9 and since CMIOR is a monotonic extension of MIOR , we
have:
Corollary 2. Where C ⊆ W: If A ∈ outR (G, A), then ΓG,A ∪ C con ⊢ outA.
Fact 6. C ∈ CnCL (C) iff C con ⊢ conC.
Fact 7. Where Γ, Γ′ ⊆ W: Γ ∪ Γ′ ⊢CL ⊥ iff there is an A ∈ CnCL (Γ) such that
¬A ∈ CnCL (Γ′ ).
Theorem 11. Where A is consistent or (ID) is in R: outR (G, A) ∪ C ⊢CL ⊥
iff ΓG,A ∪ C con ⊢ ⊥.
Proof. (⇒) Suppose outR (G, A) ∪ C ⊢CL ⊥. By Fact 7 there is an A ∈ W such that (i) A ∈ CnCL (outR (G, A)) and (ii) ¬A ∈ CnCL (C). By (i) and Theorem 1, A ∈ outR (G, A). By Corollary 2, ΓG,A ∪ C con ⊢ outA. By (ii), Fact 6 and the monotonicity of CMIOR, ΓG,A ∪ C con ⊢ con¬A. By (ROC) and double negation elimination, ΓG,A ∪ C con ⊢ ¬outA.
(⇐) Suppose the antecedent is false, hence outR (G, A) ∪ C ⊬CL ⊥. Let Γ = Γ1 ∪ Γ2 ∪ Γ3 ∪ Γ4, where Γ1, Γ2 and Γ3 are defined as in the proof of Theorem 10, and Γ4 = {conA | A ∈ CnCL (C)} ∪ {¬conB | B ∈ W − CnCL (C)}.
Note that Γ is CL-consistent. Let Γ′ be a maximal CL-consistent extension of Γ. We have:
Γ′ ⊢ A iff A ∈ Γ′ (2)
This can be shown by an analogous argument to the argument for (1) in Theorem
10. For (Kcon) and (Dcon) we proceed analogously as for (Kin) and (Din).
For the axiom (ROC), we have: conA ⊃ ¬out¬A ∈ Γ′ iff [by (†)] conA ∉ Γ′ or ¬out¬A ∈ Γ′ iff [by (‡) and the construction] A ∉ CnCL (C) or ¬A ∉ outR (G, A). The latter holds in view of the supposition and Fact 7.
By the construction, ΓG,A ∪ C con ⊆ Γ′. By the monotonicity of CMIOR, (2), and the CMIOR-consistency of Γ′, ΓG,A ∪ C con ⊬ ⊥.
Theorem 12. Where outR (G, A) ∪ C is CL-consistent and (A is CL-consistent
or (ID) is in R): A ∈ outR (G, A) iff ΓG,A ∪ C con ⊢ outA.
Proof. (⇒) Immediate in view of Corollary 2. (⇐) Suppose A ∉ outR (G, A). Let Γ′ be constructed as in the proof of Theorem 11. By the construction and the supposition, ¬outA ∈ Γ′. By the consistency of Γ′, outA ∉ Γ′. Since ΓG,A ∪ C con ⊆ Γ′, by the monotonicity of CMIOR and (2), ΓG,A ∪ C con ⊬ outA.
C Proof of Theorem 3
We refer to Section 8 for the outline of the proof. As shown there, it suffices to
prove Lemma 2, Theorem 5 and Theorem 6, which we shall do here.
Preliminaries Let W 6 • denote the set of all formulas in W c without occurrences of •, and W → = {A → B | A, B ∈ W}. Where ∆ ⊆ W →, let ∆• = {•A | A ∈ ∆}. Recall that Ω(∆) = {•(A → B) ∧ ¬(A → B) | A → B ∈ ∆} and, where G is a set of I/O-pairs, Ω(G) is an abbreviation for Ω(G →). In the remainder we will sometimes use the inverse function ℧, which is defined as follows: where Θ ⊆ Ω, ℧(Θ) = {A → B | •(A → B) ∧ ¬(A → B) ∈ Θ}.
C.1 Proof of Lemma 2
Since • is a dummy operator in CMIOR , we have:
Fact 8. Where Γ ⊆ W 6 • , ∆ ∪ {A} ⊆ W → and B ∈ W c : Γ ∪ ∆• ⊢ B ∨ •A iff
(Γ ∪ ∆• ⊢ B) or (A ∈ ∆).
Lemma 3. Where Γ ⊆ W 6 • , ∆ ⊆ W → , Θ ⊆ Ω and A ∈ W c : if Γ ∪ ∆• ⊢
A ∨ Dab(Θ), then Γ ∪ ∆• ⊢ A ∨ Dab(Θ ∩ Ω(∆)).
Proof. Suppose Γ ∪ ∆• ⊢ A ∨ Dab(Θ). Let Θ − Ω(∆) = {•A1 ∧ ¬A1, . . . , •An ∧ ¬An}. Hence, (⋆) Ai ∉ ∆ where 1 ≤ i ≤ n.
By the supposition and CL, Γ ∪ ∆• ⊢ A ∨ Dab(Θ − {•A1 ∧ ¬A1}) ∨ •A1. By (⋆) and Fact 8, Γ ∪ ∆• ⊢ A ∨ Dab(Θ − {•A1 ∧ ¬A1}). Repeating the same reasoning for A2, . . ., and An, we can derive that Γ ∪ ∆• ⊢ A ∨ Dab(Θ − (Θ − Ω(∆))), and hence Γ ∪ ∆• ⊢ A ∨ Dab(Θ ∩ Ω(∆)).
Lemma 2 is a special case of Lemma 3 where Γ = Ain ∪ C con and ∆ = G → .
C.2 Proof of Theorem 5
Since • is a dummy operator in CMIOR , we have:
Fact 9. Where Γ ∪ {A} ⊆ W 6 • and ∆ ⊆ W → : Γ ∪ ∆• ⊢ A iff Γ ⊢ A.
Lemma 4. Where Γ ⊆ W 6 • and ∆ ⊆ W →: (i) if Θ ∈ Σ(Γ ∪ ∆•) and Γ ⊬ ⊥, then ℧(Θ) ⊆ ∆; (ii) if Θ ∈ Φ(Γ ∪ ∆•) and Γ ⊬ ⊥, then ℧(Θ) ⊆ ∆.
Proof. Ad (i). Suppose the antecedent holds. Hence Γ ∪ ∆• ⊢ ⋁Θ. By addition, Γ ∪ ∆• ⊢ ⊥ ∨ ⋁Θ. By Lemma 3, Γ ∪ ∆• ⊢ ⊥ ∨ ⋁(Θ ∩ Ω(∆)). Assume first that Θ ∩ Ω(∆) = ∅. In that case Γ ∪ ∆• ⊢ ⊥ and hence by Fact 9 also Γ ⊢ ⊥, a contradiction. Thus, Θ ∩ Ω(∆) ≠ ∅ and Γ ∪ ∆• ⊢ ⋁(Θ ∩ Ω(∆)). Were Θ ∩ Ω(∆) ⊂ Θ, then ⋁Θ would not be a minimal Dab-consequence. Thus, Θ ∩ Ω(∆) = Θ and hence ℧(Θ) ⊆ ∆.
Ad (ii). This follows immediately by (i) since for each Θ ∈ Φ(Γ ∪ ∆•), Θ ⊆ ⋃Σ(Γ ∪ ∆•).
Theorem 13. Where Γ ⊆ W 6 •, ∆ ⊆ W →, Θ ⊆ Ω and Γ ⊬ ⊥: Θ is a choice set of Σ(Γ ∪ ∆•) iff Γ ∪ (∆ − ℧(Θ)) ⊬ ⊥.
Proof. (⇒) Suppose Γ ∪ (∆ − ℧(Θ)) ⊢ ⊥. By compactness and the deduction theorem, there is a finite Λ ⊆ ∆ − ℧(Θ) such that Γ ⊢ ¬⋀Λ. Hence also Γ ∪ ∆• ⊢ ¬⋀Λ. Note that, since Λ ⊆ ∆, also Γ ∪ ∆• ⊢ •A for all A ∈ Λ. Hence, Γ ∪ ∆• ⊢ ⋁Ω(Λ). It follows that there is a Λ′ ⊆ Λ with Ω(Λ′) ∈ Σ(Γ ∪ ∆•). Note that also Ω(Λ′) ∩ Θ = ∅. Hence Θ is not a choice set of Σ(Γ ∪ ∆•).
(⇐) Suppose Θ is not a choice set of Σ(Γ ∪ ∆•). Let Λ ∈ Σ(Γ ∪ ∆•) be such that Θ ∩ Λ = ∅. By Lemma 4, we have two cases: (1) Γ ⊢ ⊥, or (2) ℧(Λ) ⊆ ∆.
Ad (1): By the monotonicity of CMIOR, Γ ∪ (∆ − ℧(Θ)) ⊢ ⊥.
Ad (2): Since Λ ∈ Σ(Γ ∪ ∆•), Γ ∪ ∆• ⊢ ⋁Λ. Hence also Γ ∪ ∆• ⊢ ⋁{¬A | A ∈ ℧(Λ)}. By Fact 9, Γ ⊢ ⋁{¬A | A ∈ ℧(Λ)}, and hence (⋆) Γ ∪ ℧(Λ) ⊢ ⊥. Since Λ ∩ Θ = ∅, ℧(Λ) ⊆ ∆ − ℧(Θ). Hence by (⋆) and the monotonicity of CMIOR, Γ ∪ (∆ − ℧(Θ)) ⊢ ⊥.
Definition 19. Where Γ ⊆ W 6 • and ∆ ⊆ W →, M(Γ, ∆) is the set of all ⊂-maximal ∆′ ⊆ ∆ such that Γ ∪ ∆′ ⊬ ⊥.
Theorem 14. Where Γ ⊆ W 6 •, ∆ ⊆ W →, Θ ⊆ Ω and Γ ⊬ ⊥: Θ ∈ Φ(Γ ∪ ∆•) iff ∆ − ℧(Θ) ∈ M(Γ, ∆).
Proof. (⇒) Suppose Θ ∈ Φ(Γ ∪ ∆•). Assume ∆ − ℧(Θ) ∉ M(Γ, ∆). Hence, by Theorem 13 there is a Λ ⊆ ∆ such that Λ ⊃ ∆ − ℧(Θ) and Γ ∪ Λ ⊬ ⊥. Let Θ′ = Ω(∆ − Λ). It follows that Λ = ∆ − ℧(Θ′) and (1) Θ′ ⊂ Θ. By the right-left direction of Theorem 13, (2) Θ′ is a choice set of Σ(Γ ∪ ∆•). By (1) and (2), Θ ∉ Φ(Γ ∪ ∆•), a contradiction.
(⇐) Suppose ∆ − ℧(Θ) ∈ M(Γ, ∆). Hence Γ ∪ (∆ − ℧(Θ)) ⊬ ⊥. By the right-left direction of Theorem 13, Θ is a choice set of Σ(Γ ∪ ∆•). Assume that Θ is not a minimal choice set of Σ(Γ ∪ ∆•). Let Θ′ ⊂ Θ be a choice set of Σ(Γ ∪ ∆•). By the left-right direction of Theorem 13, Γ ∪ (∆ − ℧(Θ′)) ⊬ ⊥. Note that ∆ − ℧(Θ) ⊂ ∆ − ℧(Θ′). Hence ∆ − ℧(Θ) ∉ M(Γ, ∆), a contradiction.
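Theorems 13 and 14 together express a hitting-set duality: the minimal choice sets of Σ(Γ ∪ ∆•) correspond exactly to the complements of the maximal subsets in M(Γ, ∆). The brute-force Python sketch below checks this correspondence on a toy example, identifying abnormalities with the conditionals they mark; the encoding and all names are ours and serve only as an illustration.

```python
from itertools import chain, combinations

def powerset(xs):
    """All subsets of xs, as tuples, in increasing size."""
    xs = list(xs)
    return chain.from_iterable(combinations(xs, r) for r in range(len(xs) + 1))

def minimal_choice_sets(sigma, universe):
    """Subset-minimal sets meeting every member of sigma (cf. Φ)."""
    hits = [frozenset(s) for s in powerset(universe)
            if all(frozenset(s) & c for c in sigma)]
    return {h for h in hits if not any(g < h for g in hits)}

def maximal_consistent_subsets(sigma, universe):
    """Subset-maximal sets containing no member of sigma whole (cf. M(Γ, ∆))."""
    ok = [frozenset(s) for s in powerset(universe)
          if not any(c <= frozenset(s) for c in sigma)]
    return {s for s in ok if not any(s < t for t in ok)}

# Toy universe of three conditionals and two minimal conflict kernels.
universe = frozenset({'a', 'b', 'c'})
sigma = [frozenset({'a', 'b'}), frozenset({'b', 'c'})]

phi = minimal_choice_sets(sigma, universe)
m = maximal_consistent_subsets(sigma, universe)
# The correspondence of Theorem 14: Θ ∈ Φ iff ∆ − ℧(Θ) ∈ M(Γ, ∆).
print(m == {universe - theta for theta in phi})  # True
```

Here the two conflict kernels {a, b} and {b, c} yield the minimal choice sets {b} and {a, c}, whose complements {a, c} and {b} are precisely the maximal conflict-free subsets.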
We can now prove Theorem 5. Suppose outR (∅, A) ∪ C and A are CL-consistent sets. By Theorem 11 (where G = ∅), Ain ∪ C con ⊬ ⊥. Hence we can rely on Theorem 14, letting Γ = Ain ∪ C con and ∆ = G →, such that ΓG,A,C = Γ ∪ ∆•.
(⊆) Suppose Θ ∈ Φ(ΓG,A,C ). By the left-right direction of Theorem 14, G → − ℧(Θ) ∈ M(Ain ∪ C con , G → ). In other words, H =df {(A, B) | A → B ∈ G → − ℧(Θ)} is a ⊂-maximal subset of G such that H→ ∪ Ain ∪ C con ⊬ ⊥. By Theorem 11, H is a ⊂-maximal subset of G such that outR (H, A) ∪ C ⊬CL ⊥. By Definition 4, H ∈ maxfamily(G, A, C). By Lemma 4 (where Γ = Ain ∪ C con and ∆ = G → ) we immediately get Θ ⊆ Ω(G) and hence Θ = Ω(G − H).
(⊇) Let H ∈ maxfamily(G, A, C). By Definition 4, H is a ⊂-maximal subset of G such that outR (H, A) ∪ C ⊬CL ⊥. By Theorem 11, (⋆) H→ is a ⊂-maximal subset of G → such that H→ ∪ Ain ∪ C con ⊬ ⊥. Let Θ = Ω(G − H). Hence, H→ = G → − ℧(Θ). By (⋆) and the right-left direction of Theorem 14, Θ ∈ Φ(ΓG,A,C ).
C.3 Proof of Theorem 6
Lemma 5. Where Γ ⊆ W 6 •, ∆ ⊆ W →, and ∆′ ⊆ ∆: Γ ∪ ∆′ ⊢ A iff there is a Θ ⊆f Ω(∆′) such that Γ ∪ ∆• ⊢ A ∨ ⋁Θ.
Proof. (⇒) Suppose Γ ∪ ∆′ ⊢ A. By the compactness of CMIOR, there is a ∆′′ ⊆f ∆′ such that Γ ∪ ∆′′ ⊢ A. Let Θ = Ω(∆′′), whence Θ ⊆f Ω(∆′). By the deduction theorem, Γ ⊢ A ∨ ⋁{¬B | B ∈ ∆′′}. Note that for all B ∈ ∆′′, •B ∈ ∆• since ∆′′ ⊆ ∆. Hence Γ ∪ ∆• ⊢ A ∨ ⋁Θ.
(⇐) Suppose Γ ∪ ∆• ⊢ A ∨ ⋁Θ, where Θ ⊆f Ω(∆′). Hence, Γ ∪ ∆• ⊢ A ∨ ⋁{¬B | B ∈ ℧(Θ)}. By Fact 9, Γ ⊢ A ∨ ⋁{¬B | B ∈ ℧(Θ)}. Hence by the monotonicity of CMIOR and disjunctive syllogism, Γ ∪ ℧(Θ) ⊢ A. Since ℧(Θ) ⊆ ∆′ and by the monotonicity of CMIOR, Γ ∪ ∆′ ⊢ A.
Letting Γ = Ain ∪ C con , ∆ = G → , ∆′ = H→ , we get by Lemma 5:
Corollary 3. Where H ⊆ G, ΓH,A ∪ C con ⊢ outA iff there is a Θ ⊆f Ω(H) such
that ΓG,A,C ⊢ outA ∨ Dab(Θ).
Theorem 6 follows directly by Theorem 12 and Corollary 3. To see this,
suppose outR (H, A) and A are CL-consistent sets where H ⊆ G. We have:
A ∈ outR (H, A) iff [by Theorem 12] ΓH,A ∪ C con ⊢ outA iff [by Corollary 3] there
is a Θ ⊆f Ω(H) such that ΓG,A,C ⊢ outA ∨ Dab(Θ).
D Proof of Theorem 8
The following property is known to hold for adaptive logics with the reliability strategy.28
Theorem 15. Γ ⊢MIO⋓R A iff there is a ∆ ⊆f Ω − ⋃Φ(Γ) such that Γ ⊢CMIOR A ∨ Dab(∆).
By Lemma 2, we have:
Corollary 4. Γ ⊢MIO⋓R A iff there is a ∆ ⊆f Ω(G) − ⋃Φ(Γ) such that Γ ⊢CMIOR A ∨ Dab(∆).
From Theorem 5, we can derive:
Theorem 16. Where outR (∅, A) ∪ C and A are CL-consistent sets:
⋃Φ(ΓG,A,C ) = Ω(G − ⋂maxfamily(G, A, C))
We now prove Theorem 8.
For the special case in which there are no maxfamilies we can use Lemma 1. If outR (∅, A) ∪ C is CL-inconsistent, then ΓG,A,C ⊢ ⊥ and hence ΓG,A,C ⊢CMIO⋓R ⊥. Hence, Theorem 8 holds for the case in which outR (∅, A) ∪ C is CL-inconsistent.
Suppose now that outR (∅, A) ∪ C and A are CL-consistent sets. The following four properties are equivalent by Corollary 4 (items 1 and 2), Theorem 16 (items 2 and 3), and Theorem 6 (items 3 and 4):
1. ΓG,A,C ⊢MIO⋓R outA
2. there is a ∆ ⊆f Ω(G) − ⋃Φ(Γ) such that ΓG,A,C ⊢CMIOR outA ∨ Dab(∆).
3. there is a ∆ ⊆f Ω(⋂maxfamily(G, A, C)) such that ΓG,A,C ⊢CMIOR outA ∨ Dab(∆).
4. A ∈ outR (⋂maxfamily(G, A, C), A).
E Semantics
In our paper we focus on syntactic considerations since our main concern is with
the dynamic proof theory for I/O-logics. Nevertheless, there is also a semantic
side to the presented systems. In this section we shall give a brief overview.
We directly present the more involved semantics for CMIOR; the semantics for MIOR can easily be derived from it by the interested reader.
A CMIOR-model is a tuple hW, Rin , Rout , Rcon , N , @, v, v• i where @ ∈ W is the actual world; Rin , Rout and Rcon are accessibility relations over {@} × W ; and N is a neighborhood function associating @ with a set in ℘(℘(W ) × ℘(W )). We write R@ for {w ∈ W | R@w} where R ∈ {Rin , Rout , Rcon }. v is an assignment function that associates each atom with a set of worlds, while v• is an assignment that associates @ with a subset of W →.
28 It follows immediately from Theorems 6 and 11.5 in [2].
Truth at a world is defined as usual for the Boolean connectives: where w ∈ W,
M, w |= p iff w ∈ v(p), where p is an atom (MA)
M, w |= A ∨ B iff M, w |= A or M, w |= B (M∨)
M, w |= ¬A iff M, w ⊭ A (M¬)
etc.
and as follows for the modal connectives (where π ∈ {in, out, con}) and •:
M, @ |= πA iff for all w ∈ Rπ @: M, w |= A (MRπ)
M, @ |= A → B iff (|A|M , |B|M ) ∈ N (MN)
M, @ |= •(A → B) iff A → B ∈ v• (@) (M•)
The rules in R can be translated in a straightforward way to frame conditions. Some examples: for all X, Y, Z ⊆ W :
If Y ⊇ X and (Y, Z) ∈ N , then (X, Z) ∈ N (F-SI)
If (X, Y ) ∈ N and (X, Z) ∈ N , then (X, Y ∩ Z) ∈ N (F-AND)
If Y ⊇ X and (Z, X) ∈ N , then (Z, Y ) ∈ N (F-WO)
If (X, Z) ∈ N and (Y, Z) ∈ N , then (X ∪ Y, Z) ∈ N (F-OR)
If (X, Y ) ∈ N and (X ∩ Y, Z) ∈ N , then (X, Z) ∈ N (F-CT)
(X, X) ∈ N (F-ID)
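As an illustration of how such frame conditions can be checked mechanically, the following sketch represents N as a set of pairs of frozensets of worlds over a toy three-world frame, closes a seed neighbourhood under (F-SI) and (F-WO), and verifies both conditions on the result. The encoding and all names are our own.

```python
from itertools import chain, combinations

W = frozenset({0, 1, 2})

def subsets(ws):
    """All subsets of a finite set of worlds, as frozensets."""
    return [frozenset(c) for c in chain.from_iterable(
        combinations(sorted(ws), r) for r in range(len(ws) + 1))]

def close_SI_WO(N):
    """Close N under (F-SI) and (F-WO): from (Y, Z) ∈ N we get
    (X, Z') ∈ N for all X ⊆ Y (strengthened bodies) and Z' ⊇ Z
    (weakened heads)."""
    return {(X, Zp) for (Y, Z) in N
            for X in subsets(W) if X <= Y
            for Zp in subsets(W) if Z <= Zp}

def satisfies_SI(N):
    # (F-SI): if Y ⊇ X and (Y, Z) ∈ N, then (X, Z) ∈ N.
    return all((X, Z) in N for (Y, Z) in N for X in subsets(W) if X <= Y)

def satisfies_WO(N):
    # (F-WO): if Y ⊇ X and (Z, X) ∈ N, then (Z, Y) ∈ N.
    return all((Z, Y) in N for (Z, X) in N for Y in subsets(W) if X <= Y)

N = close_SI_WO({(frozenset({0, 1}), frozenset({1}))})
print(satisfies_SI(N), satisfies_WO(N))  # True True
```

Because (F-SI) only shrinks bodies and (F-WO) only grows heads, one closure pass already yields a fixed point here; conditions such as (F-AND) or (F-OR) would require iterating the closure until no new pairs appear.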
Where F (R) is the translation of the rules in R into frame conditions, an R-frame is such that the conditions in F (R) are met, Rπ is non-empty (where π ∈ {in, out, con}), and for all X, Y ⊆ W :
If Rin @ ⊆ X and (X, Y ) ∈ N , then Rout @ ⊆ Y (F-DET)
Rcon @ ∩ Rout @ ≠ ∅ (F-CON)
We define as usual: M |= A iff M, @ |= A. We write MR (Γ) for the class of all CMIOR-models of Γ. A semantic consequence relation is then defined by: Γ ⊨CMIOR A iff M |= A for all M ∈ MR (Γ).
Since it was shown in [27] that rank-1 modal logics are sound and complete with respect to their canonical neighborhood semantics, the following theorem is an immediate consequence:
Theorem 17. Where Γ ∪ {A} ⊆ W c : Γ ⊨CMIOR A iff Γ ⊢CMIOR A.
The semantics for our adaptive systems are selection semantics: each strategy is associated with a specific selection function that selects a set of models from the CMIOR-models of a given premise set. The idea is that the selection picks models that are ‘sufficiently normal’. This is determined in view of the abnormal part of a model, i.e., the set of abnormalities the model verifies: Ab(M ) = {A ∈ Ω | M |= A}. For a more detailed discussion of the intuitions behind these semantics the reader is referred to, e.g., [30, Chapter 2].
Definition 20. Where Γ ⊆ W c , let
◦ M∩R (Γ) =df {M ∈ MR (Γ) | for all M′ ∈ MR (Γ), Ab(M′) ⊄ Ab(M )}
◦ M⋓R (Γ) =df {M ∈ MR (Γ) | Ab(M ) ⊆ ⋃Φ(Γ)}
◦ M∪R (Γ) =df {{M′ ∈ MR (Γ) | Ab(M′) = Ab(M )} | M ∈ M∩R (Γ)}
Note that M∪R (Γ) is a set of sets of models, unlike M∩R (Γ) and M⋓R (Γ), which are sets of models.
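The three selections can be sketched in Python by representing each model only by a name and its abnormal part Ab(M); the parameter `unreliable` stands in for the union of Φ(Γ). The encoding is hypothetical and only illustrates Definition 20.

```python
# Each model is represented by a name and its abnormal part (toy encoding).
models = {'M1': frozenset({'a'}), 'M2': frozenset({'b'}),
          'M3': frozenset({'a', 'b'}), 'M4': frozenset({'a'})}

def minimal_abnormality(ms):
    """M∩: models whose abnormal part is ⊂-minimal among all models."""
    return {n for n, ab in ms.items()
            if not any(ab2 < ab for ab2 in ms.values())}

def reliability(ms, unreliable):
    """M⋓: models whose abnormal part lies within the unreliable
    abnormalities (standing in for the union of Φ(Γ))."""
    return {n for n, ab in ms.items() if ab <= unreliable}

def normal_selections(ms):
    """M∪: a set of sets of models — the minimally abnormal models,
    grouped by their (identical) abnormal part."""
    mins = minimal_abnormality(ms)
    parts = {ms[n] for n in mins}
    return [{n for n in mins if ms[n] == p} for p in parts]

print(sorted(minimal_abnormality(models)))                 # ['M1', 'M2', 'M4']
print(sorted(reliability(models, frozenset({'a', 'b'}))))  # ['M1', 'M2', 'M3', 'M4']
```

Note that M3 survives the reliability selection but not minimal abnormality, mirroring the fact that reliability is the more cautious of the two strategies on the premise side but the more liberal one on the model side.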
Semantic consequence relations for our adaptive systems are defined in view of the semantic selections:
Definition 21. Where Γ ∪ {A} ⊆ W c ,
◦ Γ ⊨CMIO∩R A iff M |= A for all M ∈ M∩R (Γ)
◦ Γ ⊨CMIO⋓R A iff M |= A for all M ∈ M⋓R (Γ)
◦ Γ ⊨CMIO∪R A iff there is an M ∈ M∪R (Γ) such that for all M′ ∈ M, M′ |= A
The soundness and completeness of each adaptive system is warranted by the soundness and completeness of CMIOR and general properties of adaptive logics (see [2] for minimal abnormality and reliability and [30] for normal selections). Hence,
Theorem 18. Where L ∈ {CMIO∩R , CMIO⋓R , CMIO∪R } and Γ ∪ {A} ⊆ W c , Γ ⊢L A iff Γ ⊨L A.