Social Reliabilist Epistemology: Where Meliorative Externalism and Third Person Internalism Meet.
Gerhard Schurz (University of Duesseldorf)
Note: The appendix contains some material which is not presented in my talk but may be of independent interest.
1. The Starting Point: Meliorative Reliabilist Epistemology
I am interested in meliorative epistemology, which attempts to improve the epistemic
practices of human societies. Among other things, meliorative epistemology should
be able to fulfill the following two tasks:
(Task 1) To demonstrate the epistemic superiority of scientific rationality over religion or other forms of purely authority-based models of rationality, and
(Task 2) to increase the influence of this rationality in our society.
For example, meliorative epistemology should be able to give convincing reasons
against teaching creationism in school, side by side with evolutionary theory.
When considering the results of the majority of contemporary analytic epistemology in the light of these meliorative demands, I feel disappointed. In this respect I agree with the challenging criticism of Bishop and Trout (Epistemology and the Psychology of Human Judgment, 2005). They claim that SAE, short for 'Standard Analytic Epistemology', fails to serve meliorative functions for the epistemic practice of ordinary people. In contrast, cognitive psychology has achieved admirable successes in this respect. I do not infer from that fact, as Bishop and Trout do, that SAE should better be replaced by cognitive psychology. But I do infer that a drastic change in the orientation of SAE is necessary in order to achieve meliorative relevance. For example, Bishop and Trout (2005, p. 59) emphasize that meliorative psychology is mainly concerned with strategies of 2nd order reasoning, which demonstrate the reliability or superiority of certain types of prediction strategies. In contrast, the majority of contemporary epistemologists think that 2nd order justifications are either unnecessary or impossible or both. Instead they are concerned with so-called epistemic intuitions, among which they want to find those which make the best-calibrated system of intuitive epistemology. Many epistemologists (Reid, Moore, Pollock, Chisholm, Armstrong, etc.) declare certain basic types of scientific inferences rational simply because they fit human intuitions, without any attempt to demonstrate the veritistic superiority of these inferences in comparison to non-scientific competitors such as the inference from one's religious faith, etc. But modern cognitive psychology has demonstrated again and again how unreliable, and often enough even irrational, human epistemic intuitions can be: from egocentric biases and overconfidence to fundamental probabilistic or logical errors. In conclusion, I fully agree with Bishop and Trout that meliorative epistemology should definitely not base its theories on humans' epistemological intuitions.
Alvin Goldman has created a new epistemological wave to which Bishop and Trout's criticism does not apply. I regard it as unfair that Bishop and Trout mention Goldman only in the margin. Bishop and Trout's criticism concerning reliance on intuitions applies mainly to so-called deontological internalists or epistemic virtue theorists. In contrast, Goldman's epistemology (e.g. 1986, 1988, 1993) is veritistic and cognitivistic: it does not rely on dubious intuitions but orients epistemology towards a clearly defined goal, namely the systematic achievement of true beliefs, in other words, towards reliable belief-forming methods or processes. If one additionally takes into account the collective dimension of the production of knowledge with its high amount of division of labour, then one arrives at the framework of Goldman's social reliabilist epistemology (1999), which is also the framework of this paper.
When I said that I accept the assumptions of Goldman's framework, I meant "almost all assumptions". There are two exceptions. Firstly, I see a problem in Goldman's quantitative definition of "veritistic value", but there is no time to discuss this problem here (see appendix A1). Secondly, I do not believe, as Goldman does, that there exists a weak notion of "knowledge" in common sense which equates knowledge with true belief. I doubt this because on every semantic understanding of knowledge which I can imagine, one can reasonably believe the following:

(1) I believe P but I don't know P.

But if one equated knowledge with true belief, such a belief would be rationally incoherent (for a proof, see appendix A2). In conclusion, I concentrate on Goldman's stronger notion of knowledge as reliable or justified true belief.
Goldman's epistemology is also meliorative: in his 1999 book (Knowledge in a Social World) Goldman repeatedly asks which rules of reasoning would increase the veritistic value of the beliefs of a population of epistemic subjects. The same meliorative aspects of epistemology have been emphasized by Shogenji (2007) and by me in Schurz (2008a). On the other hand, Goldman's epistemology is an example of a so-called externalist as opposed to an internalist epistemology. In externalist epistemologies, justification in the internalist sense, i.e. cognitively accessible justification, loses its central role, either for epistemology or at least for the concept of knowledge. Since cognitively accessible justifications are an indispensable element of meliorative epistemology, I am here on the side of internalism, though I defend a rather untypical internalism which I call "third person internalism". In the next section I will argue that the meliorative aspect of Goldman's epistemology pulls his position in the direction of internalism.
2. Positions between Externalism and Internalism: Where does Goldman stand?
There exists a spectrum of positions between extreme externalism and extreme internalism. In the middle of this spectrum, meliorative social externalism and third person internalism lie close together.
2.1 Extreme versus moderate (reliabilist) externalism.
I confine myself to reliabilist versions of externalist positions. Central to them is the concept of a kind of causal process which produces beliefs. The reliability of such a process is proportional to its relative truth rate, i.e. the frequency of true beliefs among all produced beliefs. The externalist notion of knowledge is expressed by a reliability condition of the following sort:

(Rel) A true belief in P is knowledge iff it was caused by a belief-forming cognitive process (of a certain kind) which is reliable in a type (or class) of circumstances C.
That causal processes have to be relativized to circumstances C is clear from various insights of causality theory (e.g. concerning causal preemptions, etc.). Extreme and moderate externalism can be distinguished in terms of the range of the circumstances C:

(ExtExt) Extreme (reliabilist) externalism: (Rel) holds w.r.t. the actual (type of) circumstances C.
The problem with extreme externalism is that in many cases the actualist reliability depends on merely accidental features of the actual situation. For example, if I throw a thermometer onto the floor and then measure the temperature with it, then this belief-forming process may be actually reliable, because by luck the thermometer did not break; but normally this process would not result in true beliefs about the temperature, because normally the thermometer would break. Therefore Goldman (1986, 107) suggests that the circumstances C should range over normal circumstances. This also helps to do away with radically skeptical scenarios, e.g. Cartesian demons, since these scenarios are not normal. On the other hand, as Goldman emphasizes in other writings (e.g. 1988; 1986, 54f), the circumstances C must at the same time include all epistemically relevant features of the actual circumstances, because otherwise Gettier counterexamples could not be excluded. To use an example of Goldman's: if I drive through a landscape with 95% barn facades and by luck I see a real barn, then the process of seeing a barn, though normally reliable, is not reliable in these specific circumstances. In conclusion, I suggest expressing Goldman's moderate reliabilism as follows:
(ModExt) Moderate (reliabilist) externalism: (Rel) holds w.r.t. a type of circumstances which is as normal as possible but nevertheless includes all epistemically relevant features of the actual situation.

(ModExt) involves a weighing of global (normal) and local (actual) aspects of reliability, a weighing which I think is not externally dictated but involves some amount of subjectivity. Goldman's reliabilism also includes an additional fourth and purely internal condition, which requires that the reliably produced true belief must not be defeated by some other belief of the epistemic agent (1986, 63). For reasons of simplicity I omit this condition in my explication of moderate externalism.
The basic criticism of externalism consists in the fact that the externalist notion of knowledge has stripped off all features of internalist justification, i.e. all kinds of cognitively accessible indicators of reliability. Thus, one may have external knowledge even if neither the believer nor any other person is able to recognize that the belief was reliably produced. For the meliorative task of epistemology, accessible reliability-indicators are, of course, central. In the next subsection I turn to these meliorative aspects.
2.2 Meliorative epistemology. I start with the following question: which conditions must a piece of (ModExt)-knowledge satisfy in order to have a meliorative function for the epistemic practice of our society, and in particular to fulfill the two tasks (Task 1) and (Task 2) outlined at the beginning of section 1? I suggest the following answer:
(MelEpist) (1.) A piece of (ModExt)-knowledge is meliorative iff the cognitive process by which it was produced carries some indicators of its reliability.
(2.) A cognitive property of a belief-producing (kind of) process is an indicator I of reliability iff
(2.1) it is cognitively accessible to (sufficiently trained) human beings, and
(2.2) it can be convincingly demonstrated by (sufficiently trained) human beings that I indicates
(2.2.1) either the reliability of the process, or at least
(2.2.2) the optimality of the process in regard to reliability.
Let me explain, by means of the following example, why the meliorative effect of knowledge requires these two conditions. Imagine, say, a pre-modern population of humans with a subpopulation of purported information-providers (medicine men, priests, etc.), of whom, say, only 10% are truly reliable informants, who base their information on empirical induction instead of on intuition or religious faith.
[Figure: two nested circles, the reliable informants forming a small circle inside the larger circle of purported informants, within the population of epistemic subjects]
As long as the members of the population cannot discriminate the reliable informants from the merely purported ones, the reliable information will be of little use for the increase of veritistic value, because it cannot spread through the society. For example, in a primitive religious tribe even a genius who can reliably heal diseases will be unable to compete with the witch doctors, because the members of the tribe cannot discriminate reliable from non-reliable healing practices. But the ability to discriminate between reliable and non-reliable informants requires condition (2.1) as well as condition (2.2). Condition (2.2) is necessary because competing kinds of belief-forming processes or world-views will suggest different reliability-indicators (for example, empirical induction versus agreement with the Bible), so that in order to "spread one's memes/information" one also has to convince the audience about the right reliability-indicators.
Not only the social spread of true beliefs, but also the social learning of reliable cognitive processes, requires that reliability can be cognitively detected and understood by way of reliability-indicators. Without such indicators, cultural evolution would be impossible.
Let me compare my point with Goldman and Olsson's solution to the value-of-knowledge problem. Goldman and Olsson (2008) argue that the veritistic surplus value of reliably produced true belief over mere true belief lies in the fact that the possession of reliably produced true beliefs increases the probability of having true beliefs of the same type in the future, because one possesses a systematic mechanism which produces these beliefs with a high truth rate. In the same sense, having reliably produced true beliefs together with reliability-indicators increases the probability of the society's possession of true beliefs of the same type in the future. If the first kind of surplus value is a reason to include the condition of reliability in the definition of knowledge, then why should the second surplus value not also be a reason to include the condition of reliability-indicators in the definition of knowledge? As soon as one does that, one has shifted from an externalist to an internalist concept of knowledge.
2.3 Reliabilist internalism: first person and third person. There is a close connection between my melioration conditions (MelEpist) and those internalist conceptions which understand justification in the sense of indications of reliability:

Reliability internalism: A justification of a belief is a system of arguments which indicates the reliability of the underlying belief-forming process, in the sense of conditions (2.1) and (2.2) of (MelEpist).
Condition (2.1) corresponds to internalist first order justification conditions: for example, such an indicator might be the fact that the belief was based on, or inferred from, adequate grounds which are themselves cognitively accessible. Condition (2.2), on the other hand, captures internalist second order justification conditions: it must be possible, at least in principle, to demonstrate the reliability or reliabilist optimality of the cognitive process. What distinguishes meliorative epistemology from traditional internalism is that it is not required that the informant himself is aware of the indicator in condition (2.1) or possesses the capability required in condition (2.2). For the social spread of knowledge it is only necessary that conditions (2.1-2) are realized by some members of the community, for example by certain experts who evaluate the reliability of informants. On this understanding we get what I call third person internalism in Schurz (2008a), and what Shogenji (2007) has called community-internalism. On the other hand, if the believer himself realizes the satisfaction of conditions (2.1) and (2.2), then we get first person internalist knowledge. I summarize this as follows:
First person internalism: … the believer is aware of the reliability-indicators required in condition (2.1) of (MelEpist) and possesses the capacity required in (2.2) of (MelEpist).

Third person internalism: … and there exist some actual / possible experts of knowledge who are aware of the reliability-indicators required in condition (2.1) of (MelEpist) and possess the capacity required in (2.2) of (MelEpist).
The actualist/possibilist distinction gives rise to two different versions of third-person internalism. On Shogenji's account, the expert must be an actual member of the community. This account implies a certain amount of cultural relativism (or community-relativism), and Shogenji is well aware of this problem (2007, p. 33, fn. 35). On my account I prefer the possibilist version, which merely requires that the internalist justification required in conditions (2.1) and (2.2) of (MelEpist) is (naturalistically) possible. This avoids cultural relativity, at the cost that it guarantees not actual but merely possible meliorative effects.
I will not go deeper into the subtleties of this distinction. What I want to point out is how close meliorative externalism and third person internalism come to each other. The only difference between them lies in the semantic question of which ingredients of meliorative epistemology should enter the meaning of the concept of knowledge, and which should be regarded merely as contingent means of reaching knowledge. I think that several common-sense intuitions indicate that internalist concepts of knowledge are well entrenched. An example is the KK-principle, i.e. the principle that a subject can only be entitled to know something if that subject also knows that she knows it; in other words, the subject can justify and defend her knowledge. On the other hand, I think that in a naturalistic framework there is also a good place for a purely externalist concept of knowledge, as long as one is aware that the possession of purely externalist knowledge without reliability-indicators is not meliorative.
3. Rules of Meliorative Epistemology
I start this last section with an illustration of the difficulty of pure externalism with respect to meliorative purposes, by means of a confrontation of the knowledge claims of an empirical scientist with those of a religious fundamentalist.
The empirical scientist says: "Life is the result of evolution; I conclude this from the empirical evidence by induction or abduction."

The religious fundamentalist says: "Life has been created by an omnipotent God; I conclude this from the fact that sometimes God seems to speak to me."

The externalist analysis of the scientist's claim: This is knowledge if it was caused by evidence via a reliable cognitive mechanism, though I don't know whether this is the case.

The externalist analysis of the fundamentalist's claim: This is knowledge if it was caused by this God in a reliable way, though I don't know whether this is the case.
Of course, the meliorative epistemologist wants to go further than that and convince the layman that scientific induction/abduction can be justified as reliable, while belief in God cannot. For this purpose he needs concrete meliorative rules. In this final section I want to discuss some concrete rules of meliorative epistemology together with their presuppositions, i.e., the assumptions which one must make in order to demonstrate that these rules are reliable. The first rule is given by Goldman himself (1999, 121), and it is extremely general:
(R1) (Evidence) One should base one's beliefs on some kind of evidence.
Presupposition: (R1.1) The chosen evidence statements E themselves tend to be true (either conditionally on further evidences, or unconditionally), and
(R1.2) they are reliable indicators in the sense of possessing objectively positive likelihoods: Prob(E|H) > Prob(E) (provided Prob(H) ≠ 0, 1).
Theorem 1 (Goldman 1999, 121): Bayesian updating of H's probability on evidence E satisfying (R1), i.e. moving to the posterior Prob(H|E), increases the expected veritistic value, provided the presuppositions are met.
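The content of theorem 1 can be checked numerically. The following Python sketch is my own illustration, not Goldman's formalism (the function name and the Monte Carlo setup are my assumptions): it estimates the expected degree of belief in the true side of the alternative H-versus-non-H, before and after Bayesian conditionalization on an evidence E with a positive likelihood difference.

```python
import random

def expected_veritistic_value(h, p_e_given_h, p_e_given_not_h,
                              trials=100_000, seed=1):
    """Monte Carlo estimate of the expected degree of belief in the truth,
    before and after Bayesian updating on evidence E.
    h: prior of H; the other two arguments are the likelihoods of E."""
    rng = random.Random(seed)
    total_prior, total_post = 0.0, 0.0
    for _ in range(trials):
        h_true = rng.random() < h                       # sample the world
        p_e = p_e_given_h if h_true else p_e_given_not_h
        e = rng.random() < p_e                          # sample the evidence
        # posterior of H by Bayes' theorem, conditional on E or on not-E
        if e:
            post_h = h * p_e_given_h / (
                h * p_e_given_h + (1 - h) * p_e_given_not_h)
        else:
            post_h = h * (1 - p_e_given_h) / (
                h * (1 - p_e_given_h) + (1 - h) * (1 - p_e_given_not_h))
        total_prior += h if h_true else 1 - h           # DB in the truth, before
        total_post += post_h if h_true else 1 - post_h  # DB in the truth, after
    return total_prior / trials, total_post / trials
```

For example, with prior 0.5 and likelihoods 0.8 vs. 0.2, the average posterior degree of belief in the true side comes out clearly above the prior value of 0.5, in line with the theorem.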
To base one's knowledge claims on some kind of evidence is better than to base them on nothing, but this is not enough for meliorative purposes. Religious believers, too, support their religious faith by their religious evidences (cf. Swinburne 1979), for example by feeling God, hearing His voice, etc. What is missing is a rule such as (R2) which demarcates certain evidences as prima facie reliable:

(R2) (Perceptual evidence) One should base one's beliefs on one's perceptual (observable) evidence.
Presuppositions (of R2):
(R2.1) Perceptual experiences of the form "Person X has a perceptual experience with content P" (X(P)) are usually true and intersubjectively stable, and
(R2.2) they are reliable indicators of the realistic truth of their content, i.e. Prob(X(P)|P) > Prob(X(P)).

Of course, the (2nd order) justification of (R2.2), the inference from introspective experience to external reality, is a fundamental problem which is subject to well-known skeptical challenges.
Rules (R1-2) are needed for empirical science to get started. The next two rules amplify their probability-increasing effects (assuming presuppositions R2.1-2):

(R3) (Maximal specificity): It is better to base one's (hypothetical) beliefs on a more comprehensive than on a less comprehensive set (or conjunction) of evidences or testimonies.
Goldman conjectures that this rule increases expected veritistic value, but in (1999, 145f) he says that he doesn't know a theorem proving this. So I wish to report that theorems of this sort exist in the literature. One example is the decision-theoretic proof of Good (1983, 178ff). I have proved a simple theorem of the following sort:

Theorem 2: Conditioning on narrower reference classes may only improve but can never decrease one's expected predictive success (for the proof, see appendix A3).
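Theorem 2 can be illustrated with a small Python sketch (my own illustration; the function name is hypothetical). It computes the expected success rate of maximum-rule prediction when conditioning on a reference class R alone versus on the narrower cells R∧Q and R∧¬Q, using the notation of appendix A3: s2 = p(F|R∧Q), s3 = p(F|R∧¬Q), q = p(Q|R).

```python
def max_rule_success(q, s2, s3):
    """Expected predictive success of the maximum rule ('predict the more
    probable of F / not-F'), conditionalizing on R alone versus on the
    narrower reference classes R-and-Q and R-and-not-Q."""
    s1 = q * s2 + (1 - q) * s3              # p(F|R) by total probability
    # On R alone: one fixed prediction for the whole class.
    suc_R = max(s1, 1 - s1)
    # On the narrower classes: a separate prediction per cell.
    suc_RQ = q * max(s2, 1 - s2) + (1 - q) * max(s3, 1 - s3)
    return suc_R, suc_RQ
```

With q = 0.7, s2 = 0.9, s3 = 0.3 (one cell probability below 0.5) the narrower conditioning strictly improves the success rate; with s2 = 0.8, s3 = 0.6 (both above 0.5) the two success rates coincide, exactly as the theorem states.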
The second probability-amplifying rule concerns the effects of independent evidences
or testimonies (also mentioned by Goldman 1999):
(R4) (Condorcet jury theorem): Try to base your (hypothetical) beliefs on conditionally independent evidences or testimonies.
Theorem 3: If many conditionally independent evidences or testimonies favor the same hypothesis in terms of likelihoods (out of a partition of possible hypotheses), then the conditional probability of this hypothesis gets amplified and tends towards 1 as the number of evidences tends towards infinity (for the proof, see appendix A4).
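The amplification effect of theorem 3 can be computed directly. The sketch below is my own simplified illustration (names hypothetical), with the additional simplifying assumption that every piece of evidence has the same likelihood under each hypothesis:

```python
def posterior_after_n(priors, likelihoods, n):
    """Posterior of each hypothesis in a partition after n conditionally
    independent pieces of evidence, each piece having the same
    per-hypothesis likelihood. Returns the normalized posteriors."""
    # Bayes with independence: posterior proportional to prior * likelihood**n
    weights = [p * (l ** n) for p, l in zip(priors, likelihoods)]
    z = sum(weights)
    return [w / z for w in weights]
```

For instance, with priors (0.2, 0.3, 0.5) and likelihoods (0.8, 0.5, 0.4), the first hypothesis starts as the least probable, yet after 50 independent evidences its posterior exceeds 0.999, illustrating the convergence to 1.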
Several further meliorative rules could be added here, but these examples must suffice. I now turn to the most difficult problem: how can one justify the presuppositions
of the mentioned rules? For example, the reliability of certain kinds of evidences or
indicators can be justified by inductive arguments:
(R5) Demonstrate the reliability of your evidences by inductive arguments.
But how should one justify induction? At this point meliorative epistemology meets the fundamental skeptical challenge of 2nd order justification: how can one justify the most fundamental reasoning processes without committing the fallacy of a circle or of an infinite regress? In the final part of my paper I will discuss this question in connection with the problem of induction. Several externalists (e.g. van Cleve 1984), and also Goldman, have argued that circular justifications of cognitive practices are not vicious, but may even be virtuous. More precisely, Goldman argues that the "rule-circular" justification of the reliability of a belief-producing rule or process by using the same rule or process can have veritistic or even meliorative value (1986, 104, fn. 21; 1999, 85). Here I disagree with Goldman, and I illustrate my disagreement by means of Salmon's famous counter-argument (1957, 46) to the circular justification of induction:
Internalist reconstruction of circular 'justifications' of (counter-)induction:

The inductivist argues: Past inductions have been successful. Therefore, by the rule of induction: inductions will be successful in the future.

The counterinductivist argues: Past counterinductions have not been successful. Therefore, by the rule of counterinduction: counterinductions will be successful in the future.

The internalist concludes from the symmetry that both 'justifications' are epistemically worthless. In contrast, for the externalist both justifications are 'correct' in the following sense: the circular justification of induction is correct in worlds where inductive inferences are reliable, and the circular justification of counterinduction is correct in worlds where counterinductive inferences are reliable.
The fact that a conclusion as well as its opposite can be "justified" by this kind of argument makes rule-circular arguments melioratively worthless for externalists too, in spite of the semantic move in the externalist understanding of the notion of "justification". What is even worse, a similar circular justification of the rule "trust-in-God" may also be provided by the religious fundamentalist:

(Rule TG, "trust-in-God"): If you hear God's voice in your mind saying P, then infer that P is true.

The reliability of this rule is justified as follows: I feel God's voice saying to me that the rule (TG) is reliable, from which I infer by (TG) that it is reliable. For the externalist this argument is correct in worlds in which (TG) is reliable. I conclude that circular arguments of this sort certainly do not belong to the repertoire of meliorative epistemic rules.
If that is true, how can we then defend the rule of induction, as opposed to other weird rules of belief formation about future or non-observable events? I think that Hume is right in that we cannot demonstrate the external success, i.e. reliability, of induction or of other ultimate cognitive methods. But we can compare competing cognitive methods from within our system of beliefs (in a quasi-Kantian sense). In particular, we can use epistemic optimality arguments as a means of stopping the justificational regress. Epistemic optimality arguments are a game-theoretical generalization of Reichenbach's best alternative account. They do not show that induction must be successful, but they show that induction is an optimal prediction method among all methods of prediction which are available to us. Even in radically skeptical scenarios where induction fails, induction can be optimal, provided that all other prediction methods are also doomed to failure in these skeptical scenarios.
In other papers (e.g. Schurz 2004, 2008b,c) I have developed an optimality approach to the problem of induction in terms of prediction games. I regard optimality-justifications as sufficient from the viewpoint of epistemic decision making. Optimality cannot be demonstrated for so-called object-inductive methods. What I have attempted to show is that optimality can be demonstrated for meta-inductive methods, which take all accessible object-inductive prediction methods as their input. The following theorem of mine is based on mathematical results in non-probabilistic universal prediction theory concerning "online prediction with expert advice":

The meta-inductivist's optimality in the prediction game:

Assumptions and terminology: A prediction game consists of a countably infinite sequence of (discrete or real-valued) events and a finite set of prediction methods (or players) which predict at each discrete time the next event, and whose predictions are accessible to the meta-inductivist. A meta-inductive prediction method observes the success rates of all (accessible) prediction methods and attempts to calculate an "optimal" prediction from the predictions of the accessible methods according to their so-far success rates.

Theorem 4 (Schurz 2008b,c): There exists a (weighted-average) meta-inductive prediction strategy whose predictive success rate is strictly optimal in the long run (it converges towards the maximal predictive success rate), and whose short run loss is upper bounded by the square root of the number of competing methods divided by the discrete time.
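The weighted-average construction behind theorem 4 can be sketched in Python. This is a simplified illustration of the idea only, not Schurz's exact definition: the success measure 1 − |prediction − event| for events in [0,1], the surplus-success ("attractivity") weights, and all names are my assumptions.

```python
def meta_induction(event_seq, methods):
    """Simplified weighted-average meta-inductive predictor.
    events lie in [0,1]; each method maps the history (list of past events)
    to a prediction in [0,1]; success of a prediction p on event e is
    1 - abs(p - e). Returns the success rates of the meta-inductivist
    and of each accessible method."""
    k = len(methods)
    succ = [0.0] * k          # cumulative success of each method
    succ_mi = 0.0             # cumulative success of the meta-inductivist
    history = []
    for event in event_seq:
        preds = [m(history) for m in methods]
        # weight of a method = its surplus success over the meta-inductivist
        weights = [max(0.0, s - succ_mi) for s in succ]
        total = sum(weights)
        if total > 0:
            mi_pred = sum(w * p for w, p in zip(weights, preds)) / total
        else:
            mi_pred = sum(preds) / k   # no method is ahead yet: plain average
        succ_mi += 1 - abs(mi_pred - event)
        for i, p in enumerate(preds):
            succ[i] += 1 - abs(p - event)
        history.append(event)
    n = len(event_seq)
    return succ_mi / n, [s / n for s in succ]
```

In a toy game where the true sequence is constant 1 and the accessible methods are "always predict 1" and "always predict 0", the meta-inductivist's success rate rapidly approaches that of the best method, as the theorem's long-run clause requires.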
The optimality-justification of meta-induction is mathematically analytic. It implies, however, an a posteriori justification of object-induction, i.e. of induction applied at the level of events: for we know by experience that in our real world, non-inductive prediction strategies (such as reliance on instinct, intuition or clairvoyance) have not been successful so far, whence it is so far meta-inductively justified to favor object-inductivistic strategies. In this way, meta-induction yields an indirect justification of the common-sense argument that it is reasonable to perform object-induction because so far it has turned out to be superior. This argument is no longer circular, because meta-induction can be justified in a non-circular way.

I think that similar epistemic optimality arguments can be applied to the justification of abduction; at least this is what one should attempt to do in order to overcome the skeptical challenges (cf. Schurz 2008d). With these optimistic remarks in favor of weak foundationalism I conclude this paper.
References:
Bishop, M. A., and Trout, J.D. (2005): Epistemology and the psychology of human
judgment, Oxford University Press.
Goldman, A. (1986). Epistemology and cognition. Cambridge/Mass.: Harvard Univ.
Press.
Goldman, A. (1988): "Strong and Weak Justification", Philosophical Perspectives 2,
51-70.
Goldman, A. (1993): Philosophical Applications of Cognitive Science, Westview
Press, Boulder.
Goldman, A. (1999). Knowledge in a social world. Oxford: Oxford Univ. Press.
Goldman, A., and Olsson, E. J. (2008): "Reliabilism and the Value of Knowledge", to
appear in: Duncan Pritchard (ed.), title (?), Oxford University Press.
Good, I. J. (1983): Good Thinking. The Foundations of Probability and Its Applications, Univ. of Minnesota Press, Minneapolis.
Salmon, W. C. (1957). Should we attempt to justify induction? Philosophical Studies, 8, No. 3, 45-47.
Shogenji, T. (2007). Internalism and externalism in meliorative epistemology.
Online-paper http://www.ric.edu/faculty/tshogenji/workprogress.htm.
Schurz, G. (2004): "Meta-Induction and the Prediction Game", in: W. Löffler, P.
Weingartner (eds.), Knowledge and Belief, öbv & hpt, Vienna 2004, 244-255.
Schurz, G. (2008a): "Third-Person Internalism: A Critical Examination of Externalism and a Foundation-Oriented Alternative", Acta Analytica, online version
http://dx.doi.org/10.1007/s12136-008-0016-2. Printed version to appear.
Schurz, G. (2008b). Meta-Induction. A game-theoretical approach to the problem of
induction. To appear in: C. Glymour & D. Westerståhl & Wei Wang (Eds.), Proceedings from the 13th International Congress of Logic, Methodology and Philosophy. London: King's College Publications.
Schurz, G. (2008c): "The Meta-Inductivist's Winning Strategy in the Prediction
Game: A New Approach to Hume's problem", submitted to Philosophy of Science
(review status: conditional acceptance).
Schurz, G. (2008d): "When Empirical Success Implies Theoretical Reference. A
Structural Correspondence Theorem", submitted to British Journal for the Philosophy of Science (review status: conditional acceptance).
Swinburne, R. (1979). The existence of God. (Oxford: Clarendon Press, revised 2nd
ed. 2004)
Van Cleve, J. (1984). Reliability, justification, and induction. In P. A. French et al. (Eds.), Causation and causal theories (pp. 555-567). Midwest Studies in Philosophy 4.
Appendix:
Appendix A1: A Problem with Goldman's quantitative notion of "veritistic value":
Goldman defines the quantitative version of the "veritistic value" V of one's belief with regard to an alternative "P-versus-non-P", in short ±P ("±" for unnegated or negated), as follows:

V(±P) = DB(the-truth-of-P-vs.-non-P).

I think that this is only reasonable if the underlying degree of belief function DB satisfies rationality principles which are stronger than mere coherence in the sense of fair betting quotients. For if DB merely has to satisfy coherence, then Goldman's definition would lead to the strange consequence that one can simply increase the veritistic value of one's degrees of belief concerning, say, random events (e.g. games of chance) by believing all future random events which have objective probability ≥ 0.5 with degree of belief 1. As follows from well-known theorems about the so-called maximum rule for prediction (cf. Schurz 2008b), this would maximize one's expected degree of belief in the truth. But these degrees of belief would lead to irrational consequences in decision-theoretic contexts, where utilities are involved.

To overcome this problem, I think that degrees of belief should match objective long run frequency limits (Prob) according to Reichenbach's principle of the narrowest reference class, as follows:

DB(Fa) = Prob(Fx | the-strongest-relevance-reference-class-to-which-a-belongs).
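The pathology can be made concrete with a small computation. In the sketch below (my own illustration; the use of the quadratic Brier penalty as a stand-in for decision-theoretic costs is my assumption, not Goldman's) the maximum-rule degree of belief d = 1 beats the calibrated degree of belief d = p on expected veritistic value, yet does worse on the expected penalty:

```python
def expected_truth_db(p, d):
    """Goldman-style expected veritistic value: the expected degree of
    belief in the true side of the alternative E vs. non-E, where the
    objective probability of E is p and one believes E to degree d."""
    return p * d + (1 - p) * (1 - d)

def expected_brier_penalty(p, d):
    """Expected quadratic (Brier) penalty of degree of belief d in E;
    minimized by the calibrated choice d = p."""
    return p * (1 - d) ** 2 + (1 - p) * d ** 2
```

For p = 0.6: believing with degree 1 yields expected veritistic value 0.6 versus 0.52 for the calibrated degree 0.6, but an expected Brier penalty of 0.4 versus only 0.24, illustrating why the extreme degrees of belief are decision-theoretically irrational.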
Appendix A2: Why "I believe P but I don't know P" would be rationally incoherent if knowledge meant mere true belief:

I assume the following principles of rational believers:
(R1) Rational believers know that knowledge means mere true belief.
(R2) Whenever rational believers believe a finite set of propositions P1,…,Pn, and P1,…,Pn entails Q, then rational believers believe Q.
(R3) A rational believer S believes P iff S believes that P is true iff S believes that non-P is false.
(R4) A rational believer S believes P iff S believes that she believes that P.
(R5) Rational believers never believe inconsistencies.

Assumption: (1) S believes that (S believes that P and not (S knows that P)).

(1), (R1) and (R2) entail that:
(2) S believes that (S believes that P).
(3) S believes that (not (S believes that P and P is true)).
(2) and (3) entail by (R2) (applied two times):
(4) S believes that (not (P is true)).
(4) entails by (R3):
(5) S believes that not-P.
(2) entails by (R4) that:
(6) S believes that P.
(5) and (6) entail by (R2) that:
(7) S believes that (P and not-P).
By (R5), (7) is impossible. Hence assumption (1) must be false.
Appendix A3: Proof of theorem 2:

Theorem 2, precise version:
Assumptions: The event-frequencies converge towards limiting probabilities; the events to be predicted are ±Fx, i.e. Fx versus ¬Fx, with p(Fx|Rx) > 0.5; and the events are statistically independent of one's predictions (i.e. the sequence is random). Moreover, one predicts ±Fx according to the so-called maximum rule: at each time one predicts that event out of {Fx, ¬Fx} which has maximal (conditional) probability. Then the expected predictive success rate of predicting ±Fx conditional on Rx∧±Qx, versus predicting ±Fx conditional on Rx, increases if and only if one of p(Fx|Rx∧Qx) or p(Fx|Rx∧¬Qx) is smaller than 0.5; otherwise the expected predictive success rate remains unchanged.

Proof: If neither p(Fx|Rx∧Qx) nor p(Fx|Rx∧¬Qx) is smaller than 0.5, then predicting conditionally on Rx∧±Qx and predicting conditionally on Rx (using the maximum rule) makes no difference; whence the predictive success rate is also the same. I now show that otherwise the expected predictive success increases.

Without restricting the assumptions, assume the situation (a) p(Fx|Rx) = s1 > 0.5, (b) p(Fx|Rx∧Qx) := s2 ≥ p(Fx|Rx), and (c) p(Fx|Rx∧¬Qx) = s3 < 0.5. Moreover, let (d) p(Rx) = r and (e) p(Qx|Rx) = q. By (a) and the maximum rule, one always predicts Fx if Rx is realized; hence we get for the R-conditional predictive success rate "suc|R":

(1) suc|R = p(Fx|Rx) = s1.

Note that p(Fx|Rx) = p(Fx|Rx∧Qx)·p(Qx|Rx) + p(Fx|Rx∧¬Qx)·p(¬Qx|Rx). Hence

(2) s1 = q·s2 + (1−q)·s3.

Moreover, for the R∧±Q-conditional predictive success rate we obtain by (b), (c) and the maximum rule (if Rx∧Qx is realized one predicts Fx, and if Rx∧¬Qx is realized one predicts ¬Fx):

(3) suc|{R,Q} = q·s2 + (1−q)·(1−s3).

By (c), (1−s3) > s3, whence (2) and (3) entail that suc|{R,Q} > suc|R. Q.E.D.
Appendix A4: Proof of theorem 3:

Theorem 3, precise version:
Assumptions: There exist conditionally independent evidences E1,…,En all of which probabilistically favour one hypothesis, say Hk, out of a partition of hypotheses H1,…,Hm, in the precise sense that:

∀J ⊆ {1,…,n}, ∀r (1≤r≤m): Prob(∧_{j∈J} Ej | Hr) = Π_{j∈J} Prob(Ej|Hr)   (conditional independence)
∀i (1≤i≤n): Prob(Ei|Hk) > Prob(Ei|Hr) for all r≠k (1≤r≤m)   (the likelihoods favor Hk)
∀i (1≤i≤n): Prob(Ei|Hk) > Prob(Ei)   (positive relevance of the Ei for Hk)

Then lim_{n→∞} Prob(Hk | ∧_{1≤i≤n} Ei) = 1 (provided Prob(Hk) > 0).

Proof: Let Prob(Hk) = h > 0, and let Prob(Ei|Hk) = pi. We may assume that there exists an ε > 0 such that for all i (1≤i≤n) and all r (1≤r≤m, r≠k): pi − ε ≥ Prob(Ei|Hr). From this we obtain (by plugging into the likelihood formula):

Prob(Hk | ∧_{1≤i≤n} Ei) ≥ h·Π_{1≤i≤n} pi / (h·Π_{1≤i≤n} pi + Σ_{1≤r≤m, r≠k} Prob(Hr)·Π_{1≤i≤n} (pi − ε))
= h·Π_{1≤i≤n} pi / (h·Π_{1≤i≤n} pi + (1−h)·Π_{1≤i≤n} (pi − ε))
= 1 / (1 + ((1−h)/h)·(Π_{1≤i≤n} (pi − ε) / Π_{1≤i≤n} pi)).

Since pi ≤ 1 for all i, and ε > 0, it holds that

Π_{1≤i≤n} (pi − ε) / Π_{1≤i≤n} pi ≤ (1 − ε)^n.

Hence lim_{n→∞} Π_{1≤i≤n} (pi − ε) / Π_{1≤i≤n} pi = 0. The claim follows. Q.E.D.