Lecture 3: Stability and Action
Hannes Leitgeb
LMU Munich
October 2014
My main question is:
What do a perfectly rational agent’s beliefs and degrees of belief have to be like
in order for them to cohere with each other?
And the answer suggested in Lectures 1 and 2 was:
The Humean thesis: Belief corresponds to stably high degree of belief.
(Where this meant ‘stably high under certain conditionalizations’.)
Plan: Explain what the Humean thesis tells us about practical rationality.
Recover the Humean thesis from practical starting points.
1. Bayesian and Categorical Decision-Making
2. Subjective Assertability
3. A Note on the Preface Paradox
4. Belief and Acceptance
Again I will focus on (inferentially) perfectly rational agents only.
Bayesian and Categorical Decision-Making
In Lecture 1 we derived a compatibility result for Bayesian decision-making and
categorical decision-making.
The very simple all-or-nothing framework was this:
O is a set of outcomes.
u : O → R is an “all-or-nothing” utility function that takes precisely two
values umax > umin .
Actions A are all the functions from W to O (very tolerant!).
Use(A) = {w ∈ W | u (A(w )) = umax } (= {w ∈ W | A serves u in w }).
An action A is (practically) permissible given Bel and u iff Bel (Use(A)).
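Before turning to the theorem, here is a minimal computational sketch of this framework (all names and numbers are toy assumptions of my own, not from the lecture):

```python
from itertools import product

# A minimal toy model (my own illustrative numbers).
W = ["w1", "w2", "w3"]
P = {"w1": 0.6, "w2": 0.3, "w3": 0.1}

# All-or-nothing utility: exactly two values u_max > u_min.
u_max, u_min = 1.0, 0.0
u = {"good": u_max, "bad": u_min}

# Belief given by a set of worlds BW: Bel(X) iff BW ⊆ X.
BW = {"w1", "w2"}
def Bel(X):
    return BW <= set(X)

def Use(A):
    """Use(A) = {w | u(A(w)) = u_max}: the worlds in which A serves u."""
    return {w for w in W if u[A[w]] == u_max}

def expected_utility(A):
    return sum(P[w] * u[A[w]] for w in W)

def permissible(A):
    """A is practically permissible given Bel and u iff Bel(Use(A))."""
    return Bel(Use(A))

# Actions are all functions from W to outcomes: 2^3 = 8 of them here.
for outcomes in product(["good", "bad"], repeat=3):
    A = dict(zip(W, outcomes))
    print(A, expected_utility(A), permissible(A))
```

With these toy numbers, the permissible actions are exactly those yielding u_max at both w1 and w2, so their expected utilities are at least 0.9, i.e. within P({w3}) · (u_max − u_min) = 0.1 of the maximum.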
One can in fact recover the Humean thesis from the logical closure of belief
plus decision-theoretic coherence:
Theorem
The following two claims are equivalent:
1. Bel is closed under logic, not Bel(∅), and Bel and P are such that for all
all-or-nothing utility measures u it holds that:
(i) for all actions A, B: if Bel(Use(A)) and not Bel(Use(B)), then
E_P(u(A)) > E_P(u(B));
(ii) for all actions A: if E_P(u(A)) is maximal, then Bel(Use(A)), and for
all actions B with Bel(Use(B)) it holds that
E_P(u(A)) − E_P(u(B)) < (1 − r)(u_max − u_min).
2. Bel and P satisfy the Humean thesis HT^r, and not Bel(∅).
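For instance (toy numbers of my own): with r = 0.9, u_max = 1, and u_min = 0, clause (ii) says that every action whose Use-proposition is believed has an expected utility within (1 − 0.9) · (1 − 0) = 0.1 of the maximal one; categorically permissible actions are guaranteed to be almost-maximizers in the Bayesian sense.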
That’s for Alexandru (thanks!):
There is also a Poss-variant of this all-or-nothing decision theory that is
compatible with Bayesian decision theory in the following sense:
Theorem
The following two claims are equivalent:
1. Bel is closed under logic, not Bel(∅), and Bel and P are such that for all
all-or-nothing utility measures u it holds that:
(i) for all actions A, B: if Poss(Use(A)) and not Poss(Use(B)), then
E_P(u(A)) > E_P(u(B));
(ii) for all actions A: if E_P(u(A)) is maximal, then Poss(Use(A)), and for
all actions B with Poss(Use(B)) it holds that
E_P(u(A)) − E_P(u(B)) < r(u_max − u_min).
2. Bel and P satisfy the Humean thesis HT^r, and not Bel(∅).
Moral: Given the logical closure of belief,
it is hard to avoid the Humean thesis even on practical grounds.
(If one does not aim to give a representation theorem, one can also, without
much loss, restrict decision contexts to a given set of action alternatives.)
Subjective Assertability
When is a proposition subjectively assertable?
When is an indicative conditional subjectively assertable?
Ramsey (1929):
If two people are arguing ‘If p will q?’ and are both in doubt as to
p, they are adding p hypothetically to their stock of knowledge and
arguing on that basis about q. . . We can say that they are fixing their
degrees of belief in q given p.
Adams (1965, 1966, 1975), Edgington (1995), Bennett (2003):
– The (rational) degree of acceptability of an indicative conditional X → Y
as being given by a person’s probability measure P (at a time t) is
P (Y |X ).
Indicative conditionals do not themselves express propositions.
Let us grant that this suppositionalist thesis is right.
But that still leaves open assertability simpliciter:
– When is an indicative conditional X → Y (rationally) assertable for a
person (at a time t)?
Under the condition that P (Y |X ) = 1? P (Y |X ) > r ? E (u (Acc (X → Y ))) > r ?
Or something completely different?
By the functional role of belief, the obvious answers to our questions are
X is subjectively assertable (given Bel) iff Bel (X ),
The (Categorical) Ramsey Test:
X → Y is subjectively assertable (given Bel) iff Bel (Y |X ),
where Bel satisfies (with P) the Humean thesis.
In the following, I will support this by assumptions on subjective assertability.
(I will abuse notation a bit: ⊤ = W, ⊥ = ∅, ∧ = ∩, ∨ = ∪, . . .)
Logical postulates on assertability for propositions:
Ass(⊤). (Taut)
¬Ass(⊥). (Cons)
From Ass(X) and X ⊆ Y, infer Ass(Y). (Weak 1)
From Ass(X) and Ass(Y), infer Ass(X ∧ Y). (And 1)
Logical postulates on assertability for indicative conditionals (viewed as pairs
of propositions):
Ass(X → X). (Ref)
From Ass(X → Y) and Y ⊆ Z, infer Ass(X → Z). (Weak 2)
From Ass(X → Y) and Ass(X → Z), infer Ass(X → Y ∧ Z). (And 2)
The following rule is useful for indicative conditionals the antecedents of which
are live possibilities ("and are . . . in doubt as to p")—and I will focus on them in
particular:
From Ass(Y) and ¬Ass(¬X), infer Ass(X → Y). (Pres)
This implies: from Ass(Y), infer Ass(⊤ → Y).
Pres guarantees that there is substantial logical interaction between factual
assertability and assertability of indicatives, which seems plausible.
(Some applications of Pres will have to be read in terms of ‘even if’ or ‘still’.)
Further closure conditions on assertability:
From Ass(⊤ → X), infer Ass(X).
From Ass(X → Y) and Ass(X → Z), infer Ass(X ∧ Y → Z). (CM)
From Ass(X → Y) and Ass(X ∧ Y → Z), infer Ass(X → Z). (CC)
From Ass(X → Z) and Ass(Y → Z), infer Ass(X ∨ Y → Z). (Or)
To these closure conditions on assertability we add one bridge principle:
From Ass(X → Y) (and ¬Ass(¬X), P(X) > 0), infer P(Y | X) > 1/2. (High Prob)
That is: If
X → Y is assertable (where X is a live possibility in terms of Ass and P),
then
the degree of acceptability of X → Y is greater than that of X → ¬Y .
In a nutshell: Assertability of indicative conditionals is probabilistically reliable.
Theorem
The following two statements are equivalent:
I. Ass and P satisfy:
Taut, Cons, Weak 1, And 1, Ref, Weak 2, And 2, Pres, CM, CC, Or
(all applied to antecedents X with ¬Ass(¬X )), and High Prob.
II. There is a (uniquely determined) proposition X , such that X is a
non-empty P-stable proposition, and:
For all propositions Z :
Ass(Z ) if and only if X ⊆ Z
(and hence, BW = X ).
For all propositions Y , such that Y ∩ X is non-empty, for all propositions Z :
Ass(Y → Z ) if and only if Y ∩ X ⊆ Z .
This coincides with our representation theorem for conditional Bel (Lecture 2!):
Bel (X ) ≈ Ass(X ), and Bel (Y |X ) ≈ Ass(X → Y ).
A simple example:
Lord Russell has been murdered. There are three suspects: the
butler, the cook and the gardener. The gardener does not seem a
likely candidate, since he was seen pruning the roses at the time of
the murder. The cook could easily have done it, but she had no
apparent motive. But the butler was known to have recently
discovered that his lordship had been taking liberties with the butler’s
wife. Moreover, he had had every opportunity to do the deed. So it
was probably the butler, but if it wasn’t the butler, then it was most
likely the cook. (Bradley 2006)
P (g ∧ ¬c ∧ ¬b) = 0.1, P (¬g ∧ c ∧ ¬b) = 0.3, P (¬g ∧ ¬c ∧ b) = 0.6.
The candidates for BW that satisfy all of our postulates (given P):
{¬g ∧ ¬c ∧ b, ¬g ∧ c ∧ ¬b, g ∧ ¬c ∧ ¬b}
{¬g ∧ ¬c ∧ b, ¬g ∧ c ∧ ¬b}
{¬g ∧ ¬c ∧ b}
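Where do these candidates come from? By the theorem, they are exactly the non-empty P-stable propositions (Lecture 2). A minimal brute-force sketch that recovers them, assuming the P-stability condition P(X | Y) > 1/2 for all Y compatible with X:

```python
from itertools import combinations

# Bradley's probabilities: worlds named after the single guilty suspect.
P = {"butler": 0.6, "cook": 0.3, "gardener": 0.1}
W = set(P)

def prob(X):
    return sum(P[w] for w in X)

def subsets(S):
    return (set(c) for r in range(len(S) + 1)
            for c in combinations(sorted(S), r))

def p_stable(X):
    """X is P-stable iff P(X | Y) > 1/2 for every Y with Y ∩ X ≠ ∅, P(Y) > 0."""
    if not X:
        return False
    return all(prob(X & Y) / prob(Y) > 0.5
               for Y in subsets(W) if X & Y and prob(Y) > 0)

for X in subsets(W):
    if p_stable(X):
        print(sorted(X))
# -> ['butler'], ['butler', 'cook'], ['butler', 'cook', 'gardener']
```

These are precisely the three candidates for BW listed above.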
E.g., if BW = {¬g ∧ ¬c ∧ b, ¬g ∧ c ∧ ¬b}, this means:
Assertable: ¬g, ⊤ → ¬g, ⊤ → ¬g ∨ ¬b, ¬b → c, ¬b → ¬g, ¬b → c ∧ ¬g.
And if BW = {¬g ∧ ¬c ∧ b}, this means:
Assertable: b, ⊤ → b (and everything listed as assertable above).
Here is an example in reverse: Given W = {w1 , w2 , w3 }, and
Ass(w1 ∨ w2 → ¬w2 ), Ass(w2 ∨ w3 → ¬w3 ).
According to our postulates, asserting the two conditionals w1 ∨ w2 → ¬w2 and
w2 ∨ w3 → ¬w3 will express/impose the following constraint on a probability
measure P:
[Figure: the constraint that asserting these two conditionals imposes on P over w1, w2, w3.]
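Presumably the lost figure depicted this constraint; it can be reconstructed from the representation theorem: Ass(w1 ∨ w2 → ¬w2) requires {w1, w2} ∩ X ⊆ ¬{w2}, so w2 ∉ X; likewise Ass(w2 ∨ w3 → ¬w3) gives w3 ∉ X; and since the antecedents must be compatible with X, w1 ∈ X. Hence X = {w1}, and a singleton is P-stable just in case P({w1}) > 1/2 (take Y = W for necessity; sufficiency holds because P({w1} | Y) ≥ P({w1}) whenever w1 ∈ Y).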
Moral: A conditional version of the Humean thesis turns out to be equivalent to
logical postulates on subjective assertability, a high probability bridge
principle, and Ass = Bel.
The resulting theory of belief and subjective assertability explains some of the
findings of Jackson (1979) on robustness (see also Lewis 1987):
High probability is an important ingredient in assertability. But so
is robustness. (Jackson 1979)
In using (P → Q ) you explicitly signal the robustness of (P ⊃ Q )
with respect to P . . . Of course it is not only the robustness of material
conditionals with respect to their antecedents that is important.
(Jackson 1979)
For Wolfgang: By the Humean thesis, the Ramsey test expressed in terms of
Bel already yields stability.
Remark for Richard (concerning one of his comments on Lecture 1):
Robustness is an important ingredient in assertability [. . .]
Consider “Either Oswald killed Kennedy or the Warren Commission
was incompetent.” This is highly assertable even for someone
convinced that the Warren Commission was not incompetent [. . .]
The disjunction is. . . highly assertable for them, because it would still
be probable were information to come to hand that refuted one or the
other disjunct. The disjunction is robust with respect to the negation
of either of its disjuncts taken separately–and just this may make it
pointful to assert it. (Jackson 1979)
I speak to you (or to my future self, via memory) in the
expectation that our belief systems will be much alike, but not exactly
alike [. . .] Maybe you (or I in future) know something that now seems
to me improbable. I would like to say something that will be useful
even so. [. . .] Let me say something. . . that will not need to be given
up. . . even if a certain hypothesis that I now take to be improbable
should turn out to be the case. (Lewis 1987)
Theorem
If P is a probability measure, if Bel satisfies the Humean thesis HT^r, and if not
Bel (∅) (all based on a set W of possible worlds relevant in a certain
conversational context) then:
If BW = {w1} ∪ {w2} ∪ . . . ∪ {wn}, then for each i: P(BW | ¬{wi}) > r; that is,
"the disjunction is robust with respect to the negation of either of its
disjuncts" (as intended by Jackson and Lewis).
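For illustration (my own instantiation, reusing Bradley's numbers from above): with BW = {¬g ∧ ¬c ∧ b, ¬g ∧ c ∧ ¬b}, P(BW | ¬{b-world}) = 0.3/0.4 = 0.75 and P(BW | ¬{c-world}) = 0.6/0.7 ≈ 0.86, both comfortably above r = 1/2, so 'butler or cook' remains believable whichever disjunct is refuted.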
A Note on the Preface Paradox
Makinson (1965):
Main part of a book: A1 , . . . , An .
Preface of the book: not (A1 ∧ . . . ∧ An ).
So is it sometimes rational to hold logically incompatible beliefs?
Compare:
A scientific lab publishing a database of a great many experimental results
A1 , . . . , An .
Do the members of the lab believe the conjunction of all of the published
data? Of course not (statistical outliers, etc.)!
Indeed, they hold: not (A1 ∧ . . . ∧ An ).
What, then, is asserted by the lab’s act of publishing the database?
The proposition that the vast majority of the data are correct.
Accordingly, in the case of the book: By publishing a great many declarative
sentences A1 , . . . , An jointly, one asserts the proposition X expressed by
X = ⋁_{I ⊆ {1,...,n}, |I| = m} ⋀_{i ∈ I} A_i
for a contextually determined “high enough” m ≤ n.
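For instance, with n = 3 and m = 2 this is (A1 ∧ A2) ∨ (A1 ∧ A3) ∨ (A2 ∧ A3): at least two of the three claims hold. A small sketch (assuming, purely for illustration, independent claims each true with probability p) shows how P(X) and P(not (A1 ∧ . . . ∧ An)) can be high at the same time:

```python
from math import comb

def p_at_least(n, m, p):
    """P(X), where X says 'at least m of n independent claims hold'
    and each claim is true with probability p (the m-of-n disjunction)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(m, n + 1))

n, p = 100, 0.95
print(p_at_least(n, 90, p))  # P(X): high
print(1 - p**n)              # P(not(A1 ∧ ... ∧ An)): also high
```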
This is perfectly compatible with:
Bel (X ), Bel (¬(A1 ∧ . . . ∧ An )),
P (X ), P (¬(A1 ∧ . . . ∧ An )) both being high (law of large numbers!),
Bel and P satisfying the Humean thesis,
Assertion of just a few propositions expressing belief in those few propositions.
Of course, what gets asserted needs to be context-sensitive then—but that is
just what the Humean thesis already taught us about Bel.
Belief and Acceptance
By the Humean thesis, belief is stable under certain conditionalizations.
In Lecture 2, we found this stability to be restricted to a context.
Typically, such a context will extend over
an episode of decision-making (from decision to completion of action)
an episode of suppositional reasoning (from supposition to conclusion)
an episode of communication (from first assertion to last in a dialogue).
But what if more long-term stability is required?
In planning my day. . . I simply take it for granted that it will not rain
even though I am not certain about this. . . in my present
circumstances taking this for granted simplifies my planning in a way
that is useful, given my limited resources for reasoning. (Bratman
1999, p. 22)
The three of us need jointly to decide whether to build a house
together. We agree to base our deliberations on the assumption that
the total cost of the project will include the top of the estimated range
offered by each of the subcontractors. We facilitate our group
deliberations and decisions by agreeing on a common framework of
assumptions. We each accept these assumptions in this context. . .
even though it may well be that none of us believes these
assumptions or accepts them in other, more individualistic contexts.
(Bratman 1999, pp. 24f)
Van Fraassen (1980): accepting a scientific theory comes with
a commitment to a research programme, and a wager that all
relevant phenomena can be accounted for without giving up that
theory (p. 88).
In each of these cases the corresponding mental state is
acceptance of proposition X (in a context):
taking X as a premise and acting upon it (in the context in question).
(cf. Levi, Frankish, Cohen,. . .)
In the present theory, acceptance of X can be captured both
in all-or-nothing terms: acting upon Bel (·|X ),
or in numerical terms: acting upon P (·|X ).
Either way this is not belief in X , since acceptance does not necessarily aim at
the truth!
What they do have in common though is their role for action and their stability:
‘Bel (X |X )’ and ‘P (X |X ) = 1’ yield the stability required for long-term projects!
Even closer to belief: Accepted belief (cf. Engel 1998), that is,
acting upon Bel (·|X ) and P (·|X ),
where Bel (X ).
This is still not belief, since otherwise a “believed” proposition such as X would
need to have a degree of belief of 1.
But it is a way of getting long-term stability while aiming at the truth,
and it gives us a second explication of Locke (cf. Lecture 2):
most of the Propositions we think, reason, discourse, nay act
upon, are such, as we cannot have undoubted Knowledge of their
Truth: yet some of them border so near upon Certainty, that we make
no doubt at all about them; but assent to them firmly, and act,
according to that Assent, as resolutely, as if they were infallibly
demonstrated. . . (Locke, Essay, Book IV)
One can show further niceties, such as:
if accepting belief and updating commute—as they should—then this
entails the preservation postulate for conditional belief (up to a zero set).
[Figure: accepting and updating commute: starting from P with belief set BW, conditionalizing first on BW and then on evidence X yields the same result as conditionalizing first on X and then on the new belief set BX, i.e., [P_BW]_X = [P_X]_BX.]
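A minimal numerical sketch of this commutativity (toy numbers of my own; 'accepting' is modeled as conditionalizing P on the current belief set, with BX = BW ∩ X by preservation):

```python
# Toy numbers of my own, for illustration.
P = {"w1": 0.5, "w2": 0.3, "w3": 0.15, "w4": 0.05}

def conditionalize(P, E):
    """Bayesian update of P on a proposition E (a set of worlds)."""
    z = sum(p for w, p in P.items() if w in E)
    return {w: (p / z if w in E else 0.0) for w, p in P.items()}

BW = {"w1", "w2"}   # current belief set
X = {"w2", "w3"}    # incoming evidence, compatible with BW
BX = BW & X         # belief set after learning X (preservation)

left = conditionalize(conditionalize(P, BW), X)   # accept first, then update
right = conditionalize(conditionalize(P, X), BX)  # update first, then accept
print(left == right)  # True: both routes put all probability on w2
```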
But most importantly: If the Humean thesis is right, then all-or-nothing belief
and acceptance have another feature in common—both are context-sensitive!
(This is against, e.g., Bratman!)
On the other hand, degrees of belief may well be context-insensitive.
And whereas categorical belief contexts are typically short-term,
acceptance contexts are typically long-term.
Conclusion:
The stability conception of belief can also be justified on various independent
grounds that are to do with practical rationality.
What you might take home from these lectures:
Hopefully, a sensible theory of belief that is based on one bridge postulate:
Belief corresponds to stably high degree of belief.
A bridge between logic and probability theory.
A case study of philosophical theory-building by mathematical means.
This was great fun (for me): thank you so much!
References:
“A Way Out of the Preface Paradox?”, Analysis 74/1 (2014), 11–15.
I used parts of the monograph on The Stability of Belief that I am writing.
Soon a draft will appear at
https://lmu-munich.academia.edu/HannesLeitgeb