The Value of Knowledge?
It seems to me that epistemology has gone rather badly wrong, and that it has
been going badly wrong for quite some time. The error, if there is an error, stems from a
devaluing of the perspective of the epistemic agent. And, this devaluing is largely the
result of misidentifying the value of knowledge. That one ought to seek knowledge
might seem so obvious to some that it hardly needs arguing for. But I think such an
examination is overdue. What, then, is the value of knowledge?
Why It Is Unimportant to Know
Knowledge has no value. Well, that’s not quite right. Knowledge ‘how’ certainly
has value. This kind of knowledge is rightly sought after, simply because we want things
done (and we want them done right). I should have said that propositional knowledge
has no value. Well, that’s not exactly right either. It would have been more correct to
say that propositional knowledge, qua knowledge, has no value. It has no value over-and-above the value of its constituents. Working from the standard account of knowledge as
justified true belief, it is clear that that which constitutes knowledge has value. True
beliefs have value. These allow us to act in ways more likely (for the most part) to
achieve our goals. People with false beliefs can attain their goals as well, but it will
probably be more difficult and less likely.1

1. It may even be true that, in some cases, false beliefs lead to goal satisfaction more efficiently than true beliefs (see Klein). But such cases are rare.
Clearly, having justification for one’s beliefs is valuable as well. Justified beliefs2
are valuable because they are more likely true, and as was mentioned above, we want true
beliefs. Again, one might acquire true beliefs in the absence of good justification, but acquiring them will be less likely. And, given what’s at stake in many cases (finding cures
for disease, whether someone has weapons of mass destruction, etc.), the greater the
likelihood, the better. Nothing controversial so far.
In light of these two points concerning true beliefs and justification, what can be
said of the value of knowledge? It might be thought that, because justified beliefs are
valuable, and because true beliefs are valuable, knowledge (justified true belief) is
more valuable still – in other words, that the value is additive (or even that the whole is
more valuable than the sum of its parts). This is, I think, a mistake. From the perspective
of the epistemic agent, knowledge is no more valuable than mere true belief. And asking
of an epistemic agent that she produce more than merely a justified belief is asking for
too much. It is asking for more than she can be expected to give.
What does it matter, to someone who has true beliefs, that they lack sufficient
justification for their beliefs? They have what they need to accomplish their goals
(insofar as true beliefs are helpful in such matters). One’s having true beliefs makes the
justification for them irrelevant. The true-believing Christian (let’s suppose) who
believes purely as a matter of faith lacks nothing significant as compared to the true-believing theist who believes on the basis of a good (let’s suppose) design argument.
They both have what they want – their place in heaven. Any value had by being justified
is determined solely by raising the likelihood of having true beliefs. Justification adds
nothing to true belief itself. And certainly, justification would add nothing to false beliefs.
The atheistic philosopher will find no solace in the fact that his beliefs were more
justified than those of the faith-oriented Christian. If wrong, his climate will be just as
hot. So, having a justified true belief has no value over-and-above a true belief.

2. For the purposes of this paper, I’m going to use the expressions ‘having justification for one’s belief’ and ‘having a justified belief’ synonymously. While this is a bit loose, I don’t think anything hangs on it here.
But, it might be argued, the faith-believing Christian won’t know that he believes
truly. That’s correct, but what of it? This is not the claim that it is important that he
know that God exists (etc.). It is more like the KK thesis. But why is it better to know
that one has a true belief, as opposed to having a true belief that one has a true belief, and
so on?
It will be noticed here that the only value I am considering is instrumental value.
To be sure, I am ignoring the possibility that being justified (and thus, knowing) has
some non-instrumental value that adds to the instrumental value of true belief. But I see
no reason to believe that justification (and thus, knowledge) has such value. Even
traditional evidentialists3, who value justification if anyone does, place such high value
on justification because of the consequences of believing falsely. For them, it is not that
being justified is itself a good, but that it is important to fully inquire into the truth of a
claim, because even the most innocuous-seeming beliefs can have dire effects. This is
(largely) not an issue for the true believer.4
3. Here, I am using the term in its more general sense - referring to those who think that we ought not to believe on insufficient evidence, generally. It is sometimes used in contexts concerning primarily religious belief.
4. I suppose that one might argue that knowledge itself has some intrinsic value, but I await the argument for this. In any event, what would need to be shown is that knowledge has intrinsic value over and above true belief.

It might be suggested that justification plays a role in motivating action, such that
having good reason to believe will move one to action, whereas merely having a belief
that happens to be true might not. In this case, justified true beliefs would be better than
merely true beliefs because the former will be acted on more readily than the latter.
There are a couple of things to say here. First, what would it mean to have a belief that
one is not willing to act on (concerning those beliefs typically relevant to achieving
certain goals)? If one is not willing to act on it, ceteris paribus, then what right does it have to be called a belief? Secondly, even if justification possesses this feature, the
value of justification would not be restricted to good reasons. In other words, having a
justified true belief would be no better than having a true belief based on bad reasons – so
long as the good and bad reasons possessed equal motivating force. Anecdotally, people
seem motivated at least as much by bad reasons as by good.
So, if the above is correct, a justified true belief holds no advantage over a true
belief, and thus if the standard account of knowledge is correct, knowledge has no
advantage over true belief. What of my second claim that an epistemic agent ought not to
be pressed for more than simply being justified in her belief that p? Consider the
following hypothetical conversation. Suppose I ask my friend, a non-philosopher,
but an avid baseball fan, who won the World Series in 1991. She replies that it was the
Minnesota Twins. I go on to ask if she knows this, or merely believes it. A bit hurt that I
did not take her word for it, she points to the relevant entry in her copy of the Baseball
Almanac. She further relates some of the details of the series that, my memory jogged, I
seem to recall as well. And so on. I respond by saying that this is all very impressive,
but do you know it?
I say, “What you have just given me are your good reasons for believing that the
Twins won the series. But, in order for a belief to count as knowledge, in addition to its being justified, it must also be true. So, is it true that the Twins won the
Series in 1991?”
What sort of response could she give to my request that the truth requirement of the JTB account of knowledge be met? She would no doubt be quite puzzled at the question, as
it seems a very odd one. She has already given me her reasons for believing it to be true,
and we have agreed that they have every appearance of good reasons. I could hardly
have considered it a real option for her to say, ‘No’, as that would commit her to the
Moore-like, ‘I have good reason to believe it is true, but it isn’t true’.
The point, here, is that, from the perspective of the epistemic agent (my friend, in
this case), the ‘true belief’ requirement of the JTB account seems pointless. Epistemic
agents can only be responsible for justification. And, once the issue of justification has
been settled, there is nothing more of epistemic interest for the agent.
To be sure, from a third-person perspective, it will be epistemically relevant whether or not someone has a true belief, in addition to their being justified.5 The reason
for this is that identifying the characteristics of the “justified, but lacking knowledge”
cases will be useful for our avoiding similar errors in the future. Of course, we are able
to identify such cases because we have additional reasons that bear on the truth of the
relevant claims. But this is of no help to our epistemic agent who does not possess these
reasons.
5. Of course, in general, it will be relevant to the epistemic agent whether they have a true belief, but not epistemically relevant.

In light of this, the central claim I wish to make is that epistemology should be concerned not with considering whether someone knows thus and so, but instead with only whether someone is justified in believing thus and so. More specifically, it ought to be concerned with those conditions under which one ought to form
beliefs (or at least act as though one has them). Unfortunately, much effort is spent on resolving difficulties associated with knowledge claims. The examples are familiar
enough. There are, of course, the skeptical cases: brains in vats, evil demons, dreams,
and the like6. Central to all of these cases is the idea that the epistemic agents in question
have what look for all the world to be good reasons, yet they still don’t know. Raising
the possibility that someone is a brain-in-a-vat is much like asking them, in spite of their
seemingly very good reasons to believe, if their belief is true. And it is equally pointless.
All they have are their good reasons, and if they have good reasons, then they ought to
have the beliefs. There is nothing else to go on. Raising these skeptical possibilities is
epistemically unhelpful to the epistemic agent.7 When it is pointed out to such an agent
that she may be a brain in a vat, and thus does not know that she is now sitting in a chair
listening to a philosophy talk, she should admit that perhaps she does not know, and
move on with her belief system roughly intact.
6. In case anyone doubts the relevance of these, I suggest referring to Sosa’s recent APA presidential address.
7. I suppose one might claim that these possibilities are raised to suggest that, absent a reason to rule them out, we ought not to form beliefs that would be defeated, were the skeptical possibility to be true. If this is the case, then it is a ridiculous claim, but more on this later.

There are also the Gettier cases. These cases, of course, purport to show that the JTB account of knowledge is lacking, because there are cases when the knowledge-conferring conditions of the standard account can be met, even though intuitively one would not have knowledge. But what worry do Gettier cases present for epistemic agents? If it is stipulated that I have good evidence that Brown owns a Ford, then I am justified in believing that Brown owns a Ford, and I am justified in believing that claim disjoined to some other. That I might not know some true proposition that is the result of disjoining my claim about car ownership with one about the weather in Spain has no bearing on this
(as if any non-philosopher has even considered such a thing).
There are other such examples, like lottery paradoxes, which become largely
unmotivated if the “but do we know” question is removed (of course, we are justified in
believing that we didn’t win, and should manage our finances accordingly). But we have
considered enough to support the general point – that epistemology would be better
served to ignore issues of knowledge. Knowledge can be ignored because it is of no
value over and above true belief, because the truth of our claims (and thus our knowing)
is beyond us as epistemic agents, and because certain prominent epistemological
“problems” become pseudo-problems once knowledge is dispatched.
I have suggested that epistemology would be better served to ignore issues of
knowledge, and focus instead solely on justification. It might be pointed out that the
above critique relies on the JTB account of knowledge, but that the JTB account is hardly
the only reasonable extant theory of knowledge. For instance, the idea that knowledge is
something like warranted assertability has found some favor in the last few decades. But
whether knowledge is in fact justified true belief or warranted assertability, or what one
can defend against all comers, or some other view, is a bit beside the point here. Which
of these, if any, is picked out by our use of the word ‘knowledge’ is largely an empirical
matter. However, this issue is not a significant one. Of the three, the warranted
assertability account is closest to being correct in that it focuses more clearly on what
really matters, warrant. Whether that view, or another, turns out to be what knowledge is
is not the pressing issue.
What We “Ought” to Believe
But so far I have been sloppy with terms like ‘justification’ and ‘warrant’. While
giving a full account of what justification amounts to is beyond the scope of this paper
(conveniently for me), something needs to be said about it. Earlier I suggested a
connection between being justified in believing that p and its being the case that one
ought to have the belief that p. Parallels between the moral ‘ought’ and the epistemic
‘ought’ have been drawn elsewhere, and I believe it worth pursuing here.
What I have in mind is that having sufficient reason will be a marker for what
may be called epistemic blamelessness in believing. It is important to discover when an
epistemic agent can be faulted for believing, for not believing, for not pursuing further evidence, and so on. We can say that an agent S is epistemically blameless with regard
to S’s belief that B, if S could not be reasonably expected to have acquired (or maintained)
B in a way other than he did. More will be said about this, but as an example one might
consider the skeptical cases examined earlier. Clearly, I want to say that the brain-in-a-vat epistemic agent is epistemically blameless, in that she could not reasonably be expected to form beliefs in any way other than she does and on evidence other than that
which she has. That she is a BIV does not bear on her epistemic blameworthiness. There
is a clear parallel with moral blameworthiness. One is not morally blameworthy if it is
not reasonable to expect the individual to act in a manner other than she does. It is my
contention that epistemology ought to focus on placing epistemic blame, not on
identifying cases of knowing.8
8. This language is not new either – it parallels Chisholm’s notion of epistemic duty.

So, what bears on epistemic blameworthiness? One’s reasons, of course, but the only reasons that matter are of the internalist sort. This is not to deny that there are causal relations that play a role in belief formation and revision, but these can be
ignored for reasons similar to those for ignoring the truth requirement for knowledge.
They are beyond the reach of the epistemic agent. From the perspective of the epistemic
agent, it does not matter whether one has a belief (true or not) as the result of a reliable
belief-forming process or as the result of mere happenstance. The true believer-by-happenstance and the true believer-by-reliable-external-means are in relevantly identical circumstances. They both have true beliefs, and neither can give their reasons for
believing (and thus can’t say whether those reasons are good ones). Certainly, we
shouldn’t say that the true believer-by-happenstance is deserving of more epistemic
blame than the “reliable” believer!? (I’m assuming that the “reliable” epistemic agent at
no time identified his reliable belief-forming processes, and I’m also assuming that each
pursued evidence for their claim with equal vigor.) What matters for epistemic blame is
the quality of the reasons for believing that each can produce. If, in both cases, there was
no decision to believe based on the evidence, then they are equally blameworthy, even
though their beliefs are true.9 What the epistemic agent needs is the ability to rationally
decide which beliefs ought to be adopted, which ought to stay, and which ought to be
rejected. That someone, in the third person, can recognize that an individual’s belief-forming processes are reliable is of no help to the agent – unless and until the agent is
informed of this. But, then the agent has internalist reasons to accept or reject a claim.
9. This assumes that we can, at points, decide to believe on the basis of evidence. If we can’t, then the

Further, those cases which have typically given support to externalist accounts over internalist ones lose their persuasive power in light of the considerations concerning knowledge mentioned earlier. Some are drawn to externalist accounts of justification because of the relative ease in solving the skeptical problems. But, as I claimed earlier, these skeptical problems are not a significant worry, because hand-wringing over whether
or not we know is a pointless exercise. Other defenders of externalism point to the
seemingly obvious fact that young children know things, though they do not appear to
have internalist reasons for their beliefs (or at least, they cannot give them, if pressed). It
should be clear what I will have to say about such cases. Whether children know or not
is beside the point. What matters is assessing epistemic blame. And, it is plausible to
suppose that epistemic blameworthiness should track moral blameworthiness in children.
Just as young children are, typically, not held fully culpable (morally) for their actions, so
should they not be held epistemically culpable for believing on weak (or no) evidence.
They do not have the intellectual machinery to assess their belief systems properly. As
adults, we are free to correct their mistakes in the hope of teaching them, just as we are in
cases of moral error.
To be sure, I am here supposing that there is some volitional component to belief
formation and revision concerning at least some of our beliefs (or perhaps concerning
that which we accept). If there is no volitional component then this paper, and
epistemology in general, are fruitless enterprises. Everyone turns out to be epistemically
blameless. Just as in ethical cases, where it is generally accepted that we only hold
agents responsible for things over which they have control, so it is in epistemic cases as
well.
So, one’s (internalist) reasons will play a role in determining whether one is
epistemically blameworthy. What are good reasons to believe? Surely, the evidentialist
is at least partially right in stating that one has good reasons to believe that p if and only if,
given one’s evidence, one’s belief that p is more likely true than not. There are, of course,
difficulties that arise here that have been well-gone-over in the literature. Must one grasp
the evidential relation? When has one sufficiently inquired into the matter to establish
the probability of the belief in question? Must the probability be one (certainty), and if
less than one, how much less is acceptable? And so on.
As I mentioned, these have been discussed at length, and I will not go over them
here. What I would like to defend, however, is the claim that there is more to epistemic
blameworthiness than merely one’s evidential support for one’s beliefs – equally as
important, if not more so, are pragmatic reasons.
Pragmatic Justification
Pragmatic accounts of justification do not seem to enjoy wide acceptance.
Perhaps this is because pragmatic accounts of justification are often accompanied by
pragmatic accounts of truth, and either of these accounts raises the specter of knowledge
being relativistic – something that is not intuitively plausible. However, if the above
discussion of the importance (or lack thereof) of knowledge is correct, then such
difficulties should not arise.
I wish to defend a Jamesian account of justification – one similar to that found in
The Will to Believe. Recall that there, James suggests that some options that cannot be decided on evidential grounds can be decided on other grounds, given that certain conditions are met. The options must be live, forced, and momentous. Being a live option means considering it an open question as to what to believe concerning the option in
question. A forced option is one in which putting off belief is in the end equivalent to not
believing, and a momentous option is one in which the consequences of believing are of
some import. While James’ goal seems to be to defend religious and moral belief from
the evidentialist challenge, his account can be extended to belief in general. Even in cases
where there is what would be, in other circumstances, good evidential support for the
belief that p, it may be the case that there are pragmatic considerations that make it such
that one ought not believe that p.
This is not to deny that there is evidence that objectively bears on assessing the
probability of the truth or falsity of a claim. It is to deny that such an evidentiary relation
is sufficient to determine the appropriate conditions for belief. It does not follow from
the fact that the available evidence10 raises the probability of the truth of p to,
say, .85 that one ought to believe that p. To this factor must be added the importance of
believing that p. By ‘the importance of believing that p’ I mean the value of the expected
consequences of believing that p, should it be the case that p. By ‘the value of the
consequences’ I mean to include both moral value and prudential value, and by
‘prudential value’ I mean the value our beliefs have in bringing about the satisfaction of
our interests.
More particularly, we can say that the more significant the consequences of
failing to believe that p, when it is true that p, the lesser the requirement for evidence
bearing on whether it is true that p. Conversely, the more significant the consequences of
believing that p, when it is false that p, the greater the requirement for continued
epistemic investigation into whether it is true that p. In other words, there is more reason
to believe that p in the absence of what would traditionally be called good evidence,
when there would be significantly negative consequences for failing to believe that p,
were it true that p. But, with that same evidence, it would not be permissible to believe that p, when there would be significantly negative consequences to believing that p, were it false that p.

10. For simplicity’s sake, let’s suppose that there is good evidence to believe that the investigation into the truth of ‘p’ has been fairly exhaustive.
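The trade-off just described can be put in rough decision-theoretic terms, though nothing here depends on the particular formalism, and the symbols are my own shorthand rather than anything defended in the paper. Let Pr(p) be the probability one’s evidence confers on p, let C_miss be the cost of failing to believe that p when p is true, and let C_false be the cost of believing that p when p is false. Then believing is, roughly, blameless when the expected cost of withholding exceeds the expected cost of believing:

\[
\Pr(p)\, C_{\mathrm{miss}} \;>\; \bigl(1 - \Pr(p)\bigr)\, C_{\mathrm{false}},
\qquad \text{equivalently} \qquad
\Pr(p) \;>\; \frac{C_{\mathrm{false}}}{C_{\mathrm{miss}} + C_{\mathrm{false}}}.
\]

On this gloss, the evidential threshold falls as the cost of failing to believe a truth grows, and rises as the cost of believing a falsehood grows, which is just the pair of claims made above.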
The claim here is that what counts as good evidence (good enough to believe) is
in part a contextual matter. Though contextualism has become a respectable position in
recent years, many resist its pull because it seems intuitively implausible that one can go
from not knowing to knowing by changing one’s context, while one’s evidence remains
the same. But as I suggested earlier, the knowledge questions ought to be put aside. We
should ask whether justification is contextual. And I find this highly plausible.
Consider a case where one has willfully taken hallucinogenic drugs. Say also that
one is familiar with the experiences that will likely follow, and one is aware that the
things one experiences under the influence of these drugs will likely not reflect reality
correctly. Now, consider that such an individual has an experience such that it appears to
him that he is standing in the middle of an intersection with a transit bus bearing down on
him. He has very good evidence that there is in fact no bus bearing down on him, and
that his experience is the result of taking the hallucinogenic drugs. I take it as intuitively
obvious that he is justified in believing that there is a bus bearing down on him. That is,
it would be foolish for him to believe that the bus is a hallucinogenic construct, even
though it might be very probable that it is. It would be so foolish because the
consequences of not believing that a bus is bearing down on him are severe should he be
wrong, and these run counter to his interests. And, even if he is correct in believing that
there is no bus, the benefits are slight. He would be epistemically blameless in forming
the belief that there is a bus (perhaps causing him to leap out of the way) even though the
preponderance of the evidence suggests that one need not do so. Taking a similar, more
controversial case, whatever one might say about the Bush administration’s rush to war
with Iraq, I think most would agree that the fact that the point at issue was whether Iraq
had WMDs played some role in determining whether the evidence for war was sufficient.
Less evidence would be required in this case, because of the danger to our interests in
failing to believe Iraq had such weapons, if they indeed had them.11
Similarly, it would be a mistake to believe in those cases where believing falsely
has significantly negative consequences. A bridge builder must take great pains in
determining whether his structure is sound, in part because of the negative consequences
of believing that it is sound, when it isn’t. A person building a bridge for her model train
set need not go to such lengths, because there is less at stake.
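To illustrate the asymmetry with toy numbers of my own (the figures are invented and carry no argumentative weight): suppose the drug-taker’s evidence makes a real bus only 10 percent likely, but the cost of standing still when the bus is real is, say, a thousand times the cost of a needless leap. On the rough threshold sketched above,

\[
0.1 \times 1000 = 100 \;>\; 0.9 \times 1 = 0.9,
\]

so forming the belief (or at least acting on it) is blameless despite the weak evidence. For the bridge builder the costly error is reversed, namely believing the structure sound when it is not, so the same arithmetic drives the required evidence toward certainty.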
This analysis can be applied to our beliefs, all the way down. Can we trust our
perceptual beliefs? The answer is clearly that we must, if we are to accomplish any of
our goals. Thus, we are epistemically blameless in having perceptual beliefs, even if
there is very little evidence that they are true. Their indispensability makes this so.
It might be suggested that in these cases it is not necessary that the individual
actually believe that there is a bus bearing down on him or that Iraq has WMDs. Rather,
the epistemic agent should merely act as though he does believe. So, it might be said that it
would be epistemically blameworthy for him to form the belief, but prudentially rational
for him to jump out of the way. I’m somewhat sympathetic to this view, but at best what
would follow is that belief is epistemically uninteresting. What matters, fundamentally,
is what we are prepared to do, not what we believe. Belief matters, I take it, because our
beliefs factor into what we do (supposedly). If belief is not necessary, then epistemology
ought not be concerned with justified belief, but merely as-if belief, or warranted “act”-ability – getting to that state which will bring about the requisite action. If this is the case,
then I only need apply the blameworthiness criterion to whatever it is that motivates
action, and sidestep belief altogether. And even if it is the case that we ought not to
believe the proposition less justified by the evidence, it seems plausible to say that it is at
least permissible to fail to believe the proposition strongly supported by the evidence.

11. This is, of course, too simple, as there are other goals, like not depriving people of their lives, their tax dollars, their country’s sovereignty, etc.
Here we have added a new category: what is permissible to believe. But this is
not that implausible. Others have drawn a parallel between moral oughts and epistemic
oughts. In deontic logic, there are two main operators: one corresponding to ‘it ought to
be the case that’, and one corresponding to ‘it is permissible that’. It is clear that there are many actions that fail to be morally significant. These will be actions that are permissible (though it is not the case that all permissible acts are morally insignificant). Let’s extend the parallel further, admit an epistemic equivalent of the permissibility operator, and apply it to those cases in which pragmatic considerations override evidential
ones.12
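For readers who want the parallel made explicit, the standard deontic duality can be borrowed directly; the epistemic subscript below is my own notation, offered as a sketch rather than a worked-out logic. Writing O for ‘it ought to be the case that’ and P for ‘it is permissible that’, the usual definition is P(A) if and only if not O(not A). The epistemic analogue would then read:

\[
P_{e}(Bp) \;\equiv\; \neg\, O_{e}(\neg Bp),
\]

where Bp abbreviates ‘S believes that p’: it is epistemically permissible for S to believe that p just in case it is not the case that S ought not to believe it. The low-stakes cases discussed below are then cases where both Bp and not-Bp are permissible, so neither believing nor withholding is required.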
12. As mentioned above, the value of the consequences of believing is determined in part by the role our beliefs play in satisfying our interests. I suppose if one has an interest in believing truthfully, then we can say that one ought to abstain from belief in the above cases until one has satisfactory evidence.

At this point, one may charge that the view I am defending allows that, in those cases where the consequences of a belief are of little moment, it is permissible to form beliefs on very little evidence. I think this may be correct. In such cases, it will be permissible both to believe and to fail to believe. Consider inquiring as to the 50 millionth digit of pi, or whether the surface of a grilled cheese sandwich is an image of the Virgin Mary. As far as I know, there are no significant consequences of beliefs concerning what that digit is or whose image is on the sandwich, nor are there significant
consequences as a result of failing to believe in these cases. Because of this, while it
does not seem right to say that one ought to believe that the grilled cheese image is of the
Virgin Mary, it also does not seem correct to say that someone who so believed is
epistemically blameworthy. The issue isn’t important enough to rise to the level where
assessing blameworthiness is required.
While the above condition concerning the consequences of belief parallels
James’ momentous requirement, there is also a time constraint on belief adoption, which
parallels his forced requirement. As a practical matter, epistemic inquiry cannot go on
forever. That is to say that even if the importance of the goal is such that the epistemic
agent ought to pursue further evidence, the hand of the epistemic agent will be forced by
a requirement to make a decision now, as opposed to several years in the future. Of
course, one can labor over the soundness of a bridge indefinitely, but in addition to
building a sound bridge, such agents also have the goal of building it in a certain time.
Such time constraints necessarily limit epistemic inquiry, and thus necessitate forming
beliefs on less than perfect evidence.
Considering the bus example mentioned earlier,
one might wish to inquire further as to whether or not there is a bus barreling down the
street, but time is not available for that inquiry. Pursuing that inquiry further results in
failing to believe, and given that time is short and given the consequences of not believing, one could not be faulted for forming the requisite belief that one is in danger, even though one’s available evidence might suggest that one is not in danger. The crucial idea here is that we
simply must form beliefs on less than ideal evidence. That is, we must if we are to get
anything done. It cannot be helped. And this is true of our run-of-the-mill beliefs, not
just religious and moral ones.
What though should be said about James’ case concerning religious belief? Are
those who accept religious claims in the absence of evidence epistemically blameless? It
will depend on the interests one has in believing. If one’s interest lies in avoiding eternal
damnation and embracing heavenly reward, then there certainly may be significant
consequences to one’s believing (or not). However, given that there are alternative
religious narratives that are incompatible with each other (say, those of evangelical
Christianity, Orthodox Judaism, and Wahhabism), there may also be significant costs to
believing falsely. Thus, one would be epistemically obligated to pursue inquiry on the
matter further. Of course, there is the time constraint. Perhaps, on one’s death bed, one
would be epistemically blameless in choosing one from a list of the equally probable
options.
It might be pointed out that there is much left undone, here. I’ve said little about
marking the boundaries between when one ought not to believe, when it is permissible to
believe, and when one ought to believe. I’ve said less still about how evidential reasons
and pragmatic reasons should be weighted. And, I’ve ignored altogether cases where
there are pragmatic reasons in opposition (as in the Iraq WMDs case). Conveniently for
me, these are beyond the scope of this paper. My goal was only to suggest that pragmatic
justification is a more fruitful avenue for epistemology to follow – especially once we are
freed from the burden of seeking knowledge.