16 Jan 2006
Dear Professor Schum,
I was very interested to read your recent paper, “Thoughts about a science of evidence”, that Phil
Dawid circulated, and would like to offer you my thoughts in response. First though, a disclaimer:
I am not well read either in your earlier writings or in those of such important people as Wigmore,
so this could be considered an outside, even a naïve, viewpoint. I offer it in the belief that such
viewpoints sometimes offer a fresh perspective. It is more about the overall shape of what you
are setting out to do, rather than its content.
I am in sympathy with your attempt to create a unified discipline of evidence. In general I am
persuaded that there are some general features of the use of evidence that are applicable across
disciplines. In addition, however, there are specific features relating to how evidence is used in
different disciplines, that it would be unwise to neglect - to that extent I think it would be useful to
pay attention to these specific features, as they could throw light on the unifying project as well - in other words, to use a comparative methodology to shed light on the core structure. There are
two basic reasons why disciplines could differ: (a) one is that the subject matter imposes distinct
constraints or offers unique opportunities, (b) the other is that there could be a tradition that has
evolved in a particular discipline, and by having a comparative perspective one could examine the
strengths and weaknesses of the differing methodologies that have evolved. This could lead to
cross-fertilisation, as I am aware that in my own disciplines (epidemiology, and also economics)
there are limitations and weaknesses where a comparative perspective would be valuable. So,
this would provide one type of answer to the question that you come to near the end of your
paper, what the benefit of a science of evidence would be - it would enable each discipline to
benefit from a fresh look at its own practices in the light of what others do. (They might not want
to do this, but it would be good for them!)
As I said, I found your paper very stimulating. My main problem with it is that you do not start by
making a clear distinction between two fundamentally distinct intellectual processes: (i) trying to
establish general relationships, as in the natural and social sciences, and (ii) trying to establish
what happened in a particular instance, as in history, a criminal law case, or medicine. It seems
clear that you are concerned with (ii), from the way the paper is introduced, and most of the
examples given throughout, but there are some occasions where you stray onto (i) and seem
rather lost, as there is no prior framework to deal with the rather different issues involved.
The analysis would be greatly strengthened by making this clear, and also touching base with the
intellectual process that you don’t deal with, namely the seeking out of generalities, in the
extreme case the establishment of general laws - even in the absence of deterministic laws one
could say that the object was to uncover regularities that could indicate the general type of
process that is generating a large number of related instances. So, be explicit that you are
dealing with, say, history rather than social science, or medicine rather than biomedical science.
The case of law is more problematic, as there is no clear “generalising” equivalent of the
individual case, only the use of commonsense bolstered by expert opinion (appeal to authority, as
in the European “Dark Ages”) - the response that this function is provided by the body of law
misses the point, as in each jurisdiction there is a different body of law, with its own rules of
evidence etc, so there is a degree of arbitrariness about the situation - although one could argue
that jurisprudence partially fills this gap, to the extent that it recognises this arbitrariness.
This leads to one area where I disagree with what you say. Your statement,
“All conclusions reached from evidence are necessarily probabilistic for five
reasons. Our evidence is never complete, is usually inconclusive, frequently
ambiguous, commonly dissonant to some degree, and comes to us from sources
having any gradation of credibility shy of perfection."
would be hard to justify about the many scientific statements that have attained the level of
certainty. The shape of the earth would be an obvious one - admittedly trivial nowadays, but that
is because it is now certain and therefore no longer interesting scientifically. In my own field of
biomedical science, a similarly ancient one would be the mode of the circulation of the blood. A
more interesting one is that communication between nerve cells is not electrical but chemical - as
we all know nowadays because we can buy products fortified by neurotransmitters - but this was
a controversy not many decades ago. There are countless other examples: the source, chemical
structure and physiological effects of insulin would be another. So in relation to general
statements, science is capable of generating certainties - over a period of time and a process of
rigorous testing - potential falsification if you accept the Popperian view. It is only when the focus
is on specific cases that your statement quoted above is justifiable.
It also leads to a problem in your discussion of philosophy of science, for example the views of
Poincaré, which could be confusing because the writers cited were (I think) mainly discussing the
process of scientific discovery in the sense of uncovering generalities, rather than dealing with
specific instances (the same is true in relation to Darwin and your reference 91 on page 33 - and
possibly your quote from Bentham on page 67) - although a close reading of your text does make
it clear that your interest here is in a “science” of evidence, i.e. you are not concerned with
science as your object of study, but rather that your study of the use of evidence can be given the
status of a science - so, not ontology but epistemology. This means that the generalising is in
your account of how evidence can be validly used to reach a conclusion, whereas the type of
conclusion you are interested in is the single event. It might help if this were spelt out more
clearly.
When we get to your actual account of evidence, the breakdown you give of “credibility” is
suitable for the specific-instance case, but not for the generalising one - in other words, not for
generating generalisations, where repeatability (for example) would figure prominently. Also,
reliability only enters into your description in relation to “the process used in producing the
tangible item”, not to the observation itself - and this reliability of observations is central to any
generalising discipline, including science. In my (inevitably limited) experience, scientists regard
replication of findings as being one of the keystones in relation to credibility. This affects the
analysis that you present: your account of the rows of figure 1 is only applicable to the purpose of
studying a specific instance, not to a generalising purpose, although there could well be
equivalent but different things to say in the generalising case. Moreover, your later remarks on
competence and credibility (page 42) would look very different in the generalising case, not only
due to the possibility of repeat observations, but also (in science and other academic disciplines)
the fact that the observations are usually made by people who have a particular kind of expertise
- while this is not problem-free, it is different from relying on the testimony of, for example, bystanders to a
crime. Similar remarks apply to the question of inferential force (page 48), and to evidential
completeness (page 54).
I think that your focus on evidence charts, following Wigmore, is good. Causal diagrams are also
useful, and used, in the generalising context as well (actually there are many different versions in
the latter case), and I think this is what you illustrate as figure 2B. It is worth noticing that the
arrows mean different things in the two cases: in a “generalising” diagram, an arrow indicates
causation “in the real world out there” from one item to the next (or, as you say, relevance
relations or probabilistic dependencies), whereas in a specific-event diagram, an arrow indicates
something being evidence for something - the first case is ontology, the second is epistemology.
Also, in the generalising context, an arrow is necessarily in the same direction as the passage of
time - the “caused” item follows the “causal” one - with special considerations where expectations
are involved, as is common in economics. Whereas in inference about the individual event this is
not necessarily the case, as the chain is a chain of reasoning and (I think) has no clear time
order. It seems to me that this is the same distinction as the issue you discuss of zero probability
in the ontological and the inferential (epistemological) case, as in “ordinary probabilities” versus
Shafer’s “evidential support” on page 52. Another way of looking at it is that it is the difference
between causal diagrams and mental maps.
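One way to make the arrow distinction concrete is a small sketch in code (all events, edges and times here are invented for illustration, not drawn from either paper): in a "generalising" causal diagram every arrow must run forwards in time, whereas in a specific-event evidence chart the arrows are steps of reasoning, so a later observation can point at a claim about an earlier event.

```python
# Two kinds of diagram, both represented as (from, to) edge lists.
# Hypothetical causal diagram: each arrow is "in the real world" causation,
# so it must respect the order in which the events occur.
causal_edges = [("exposure", "disease"), ("disease", "symptom")]
event_time = {"exposure": 1, "disease": 2, "symptom": 3}  # invented times

def respects_time_order(edges, times):
    """True if every arrow points from an earlier event to a later one."""
    return all(times[a] < times[b] for a, b in edges)

# Hypothetical evidence chart: each arrow is "is evidence for", a relation
# in the chain of reasoning, so no temporal constraint applies.
evidence_edges = [("witness_report", "suspect_present"),
                  ("fibre_match", "suspect_present")]

print(respects_time_order(causal_edges, event_time))  # → True
```

A reversed causal arrow would fail the check, which is exactly what makes the generalising diagram ontological rather than epistemological.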
I would also like to argue against the use of the word “science” to characterise the discipline of
evidence - and not only because it tends to antagonise some of our colleagues in non-scientific
disciplines. You repeatedly draw on Carnap’s threefold characterisation of science - but many
would argue that understanding or explanation is an important component of a science; possibly
it is the main aim, to which these three components contribute. (I will not attempt a philosophical
definition of these terms.) In biology, one could speak of the aim of uncovering the inner
mechanism that underlies an observed process, as in the neurotransmitter example above:
observations of neuron-to-neuron transmission are explained by finding various items of evidence
(the time delay between the neurons’ firing, seeing vesicles of fluid on electron microscopy,
chemical isolation of the transmitter, its physiological effects when injected, chemical isolation of
enzymes to synthesise and remove the transmitter, etc) - in other words, explanation is
description at a deeper level. I don’t think that the discipline of evidence is “a science” in this
sense, as there is no ontological process that we are trying to explain or understand. Rather, it is
uncovering the “rules of the game” - this may be just as important, but it is different. It has
methodology (epistemology) rather than (ontological) subject matter - as you say at one point:
“The testing here seems to be logical rather than empirical in nature”.
Finally, a minor point (that is well outside my area of competence): your mention of the work in
Oxford in and around the thirteenth century could be linked with your comment on the neglect of
learning outside Europe during this period and before (page 9) - I understand that the early work
in Oxford drew heavily on texts that were being translated from Arabic at that time, in places like
Toledo, so that there is a direct link with the glorious Islamic civilisation of Iberia that was by then
beginning to decline. The same is true of Paris. These translations included the European
ancients (Euclid, Archimedes, Aristotle, etc), but also gave Europe access to learning that was
originally Islamic, Jewish, Indian, etc - how much longer would it have taken to develop probability
theory without importing the Hindu concept of zero? Would it have happened at all?
Best wishes
Mike Joffe
A Reply to Michael Joffe's Comments on:
Thoughts about a Science of Evidence
D. Schum, 6 February, 2006
I was so pleased to receive Michael Joffe's helpful and detailed comments on my first
attempt to state my views about a science of evidence 1. Some of Michael's comments are quite
critical and I will try to answer them as best I can. As I noted at the outset of my paper, I expected
that many things I was prepared to say about evidence would excite critical comment. One
reason is that the evidence-based reasoning tasks we all perform in the work we do, as well as in
our daily lives, have different characteristics, only some of which may be common across
disciplines and contexts. Though I have examined thoughts about evidence in a variety of
disciplines and contexts, I have never claimed expertise in any but a very few of them. I once
argued that the house of evidence has many mansions and that I have been just a visitor in some
of them2. Michael's fields of interest are in epidemiology and economics, two mansions I have
only visited very briefly. My replies to follow mainly concern Michael's critical comments. My hope
is that you will see that I have taken the same degree of care in answering Michael's comments
that he did in reading my thoughts about a science of evidence. The page numbers below refer
to pages in my original paper3.
1) Emphasizing Differences. Michael begins by noting that we should focus on differences
across disciplines/contexts as far as the study of evidence is concerned. Any focus just on
similarities or commonalities across disciplines would not be as helpful. On pages 67 - 68 I noted
that William Twining received the same comment from a person who argued that his emphasis on
similarities across law and history in studies of evidence is not enough; we should pay attention to
disciplinary differences in the study of evidence as well. On page 67 I noted the historian Marc
Bloch's comment that persons in any science should attempt to see the connections between
their own methods of investigation with "all simultaneous tendencies in other fields". Though he
does not say so explicitly, study of such connections would presumably involve the observation of
methodological differences as well as similarities. Michael's point is well taken here since the
"integrated science of evidence" advertised in our UCL work should not be interpreted as a focus
just on similarities across disciplines in studies of the properties, uses and discovery of evidence.
We will all learn more about evidence and inference when differences in our various approaches
to evidence-based reasoning are studied carefully.
2) Two Distinct Intellectual Processes.
Michael says that I should have started off by emphasizing the distinction between two
fundamentally different intellectual processes: (i) Those involved in the establishment of general
relationships, as in the natural and social sciences, and (ii) Those involved in establishing what
happened in a particular instance as in law, history and medicine. He says that most of my
examples involve (ii) and that I seem rather lost on the occasions when I touch upon (i), since I
provide no prior general framework for discussions of (i) that do indeed involve different issues.
If I ever do a revised version of this paper, I would start out in the same way I started in
my present paper by commenting on the emergence and mutation of the concepts of evidence
and science. But I would then consider following Michael's advice in dwelling upon distinctions
between situations (i) and (ii) in Section 4.0 when I begin to comment on elements of a science of
1 As given me in an e-mail message from Michael on January 16, 2006.
2 Schum, D., Evidential Foundations of Probabilistic Reasoning. John Wiley & Sons. New York, NY., 1994;
Northwestern University Press, Evanston, IL., 2001, paperback, xiv
3 Schum, D. Thoughts about a Science of Evidence. UCL Studies of Evidence Science. 29 December, 2005
evidence. This seems to be the place for discussion of these two situations and how they affect
my subsequent comments on evidence.
The trouble I would encounter in implementing Michael's suggestion about the necessity
for a "prior general framework" for discussing the "establishment of general relations" is that
many such frameworks have been proposed and debated over the centuries; they are still being
debated. A good place to start a review of these alternative frameworks is in the work of David
Oldroyd on the "arch of knowledge" that I cited on page 18. Another work that addresses these
same matters is the more recent book by Peter Achinstein that I mentioned on page 19. In these
works we find reference to the thoughts of many learned persons, such as Galileo, Francis
Bacon, Robert Hooke, Isaac Newton, John Herschel, William Whewell, Wesley Salmon, Bas van
Fraassen and many others, on the intellectual routes to be taken from observations to lawful
relations to general theories and to the testing of these theories. The routes suggested by all of
these persons are quite different. So I have doubts about what I should take as the "prior general
framework" for the establishment of any general relations.
I think I understand the basic distinctions between situations (i) and (ii) that
Michael has identified. In the natural, behavioral and social sciences Michael mentions, there are
many efforts to find regularities or invariances in those parts of nature that are of interest to us.
Here I quote from a very recent paper by the physicist Lee Smolin on methods of science that
Smolin says are not always employed by some researchers in physics 4:
Science works because it is based on methods that allow well-trained
people of good faith, who initially disagree, to come to consensus about
what can be rationally deduced from publicly available evidence. One of
the most fundamental principles of science has been that we only consider
as possibly true those theories that are vulnerable to being shown false
by doable experiments. [Italics mine]
I have highlighted the expressions publicly available evidence and doable experiments
for the reason that by experiments we put questions to nature to see how she will answer them.
Answers we think we have found must be made available publicly for the scrutiny of others. The
trouble is that nature will not answer any old question. She will rarely, if ever, answer general
questions. For example, if we ask her: How does the human eye work?, we will get no answer.
But if we ask such specific questions as: What is the minimum amount of radiant energy, at a
fixed wavelength, that the normal human eye can reliably detect?, we may begin to obtain
answers. But we will not, of course, simply shine a single flash of light into one person's eye and
report what happened. We will put the eyes of many persons to tests involving many trials and
many light flashes under a variety of conditions, such as: Where on the retinal surface are we
directing the light flash? And how long have the persons' eyes been allowed to become dark-adapted before the tests begin?
The point here is that the doable experiments Smolin mentions assume replicable or
repeatable phenomena. There are several reasons why we repeat experimental trials over and
over again. In the first place, nature will not always respond in exactly the same way to a set of
conditions we believe are identical. In short, there will usually be some natural variability in the
processes we are studying. In addition, the devices and procedures we are using to collect our
observations are further sources of variability. For such reasons we commonly employ statistical
indices of various sorts to grade the reliability and accuracy of our observations. But statistical
indices are always numerical indications of what has happened in the sample of observations we
have taken. The inferential role of statistics involves assessments of the extent to which results
we obtain in our sample generalize or apply to the population from which we believe our sample
has come. One trouble is that we usually have a choice from among several descriptive and
4 Smolin, L. A Crisis In Fundamental Physics. Update: New York Academy of Sciences Magazine,
January/February, 2006, 10 - 14.
inferential statistical indices we could employ. What we try to do is to employ statistics that are
minimally misleading. I will come later to "statistical evidence" in comments about my
classification of evidence.
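The sample-to-population step described above can be sketched in a few lines (the measurements are invented for illustration; nothing here comes from any actual study). The point is only that any statistical index summarises the sample we happened to take, and different choices of index can give different impressions of the same data.

```python
import statistics

# Hypothetical repeated observations of the same quantity, showing both
# natural variability and measurement variability across trials.
sample = [4.1, 3.9, 4.3, 4.0, 4.2, 3.8, 4.1, 4.6]

mean = statistics.mean(sample)      # one descriptive choice
median = statistics.median(sample)  # another, less sensitive to extremes
sd = statistics.stdev(sample)       # spread across the repeated trials
# How precisely the sample mean estimates the population value, on the
# usual assumptions (independent trials, a stable underlying process):
se = sd / len(sample) ** 0.5

print(round(mean, 3), round(median, 3), round(se, 3))  # → 4.125 4.1 0.088
```

Whether the mean or the median is the "minimally misleading" summary depends on the data, which is precisely the choice among descriptive indices mentioned above.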
So, one basic element of Michael's (i) concerning the discovery of generalizations,
regularities or invariances in nature involves experiments that are repeated observations of the
process of interest. Here are three examples I have drawn from entirely different areas: one from
particle physics, one from cell biology, and one from experimental psychology. One of the major
things these examples will illustrate is that the instruments for gathering evidence that nature
provides are quite different in various areas of science. It is quite natural that we must employ
different methods depending on what we are looking for. I have read several accounts in which
areas of science are distinguished just in terms of the instruments employed to collect relevant
observations. Persons in each one of these three areas might have the same inferential
objectives as mentioned above by Lee Smolin. But the evidence in all of these three situations
seems to have a common characteristic, namely it is tangible in nature and open to the inspection
of persons interested in conclusions that are drawn from it.
1) Particle Physics. Physicists have developed and used a variety of devices such as
cloud chambers, bubble chambers, spark chambers and newer computer-assisted electronic
detectors to observe and study the nature and movement of atomic particles. Such devices allow
the recording of tracks taken by these particles. Analysis of these tracks provides evidence relating
to such matters as the mass and charge of the particles under study. In the use of earlier
methods of tracking particles [e.g. cloud chambers], photographs of these particle tracks were
taken and analyzed. This very laborious process is now taken over by computers that can
analyze thousands of track images in minutes. Such devices provide visible evidence regarding
the nature and movement of the particles under investigation.
2) Cell Biology. For various reasons concerning other interests of mine, I have taken quite
an interest in structures called microtubules that are elements of the cytostructure of all
eukaryotic cells, including the neurons in our brains. Patterns of synaptic connections among
neurons, and their apparent "all or none" firing characteristics, have led many researchers to the
view that our brains are digital computers and that the individual neurons are simply switches. But
this view has been challenged by the mathematician Roger Penrose and the anesthesiologist
Stuart Hameroff5. They argue that this "digital view" results from our examining the brain and its
neurons at the wrong level. We have to examine the substructures of neurons, especially their
microtubules, and when we do this we will see that each neuron is itself a sophisticated
computer, more like a chip than a simple switch.
Thanks to the imaging power of today's microscopes, we can observe the structure of
microtubules directly and repeatedly. Photographs of them appear in several references 6. On
many accounts microtubules are simply part of the cytoskeleton of cells. But they are also known
to exhibit quite rapid alterations as a result of the polymerization and depolymerization of the
tubulin molecules of which they are composed. There is evidence that this helps account for the
plasticity of the brain, in which new synaptic connections are forming all the time as we learn new
things and have new experiences.
The very orderly arrangement of tubulin molecules in a microtubule has suggested to
Penrose and Hameroff that each microtubule may play the role of a cellular automaton that can respond to
activities in neighboring microtubules. They offer the view that the computation involved may be
quantum rather than digital in nature. So, here is another example of a phenomenon that can be
5 E. g. Penrose, R., Shadows of the Mind: A Search for the Missing Science of Consciousness. Oxford
University Press, Oxford, 1994, 357 - 392; Penrose, R. The Large, the Small and the Human Mind.
Cambridge University Press, Cambridge, 1997, 127 - 143.
6 Ibid, Penrose, 1994, 364; Alberts, B., Bray, D., Lewis, J., Raff, M., Roberts, K., Watson, J. Molecular
Biology of the Cell. 3rd ed. Garland Publishing, Inc. New York, NY., 1994, page 803, Fig. 16-21.
observed directly, in this case by means of a microscope. What is at issue is what observations of
this phenomenon mean. Not every researcher has drawn the same conclusions about the role of
microtubules that Penrose and Hameroff have done.
3) Experimental Psychology. I come now to an example of empirical research in an area
in which Nigel Harvey and I have had great interest. The area is now called behavioral decision
theory and it includes studies of the inferential and choice behavior of people. My graduate
training was in experimental psychology and mathematics. Nigel and I certainly share a common
experience, namely being exposed to many courses on empirical methods that have been found
useful in the behavioral sciences. During the first ten years or so in my career, my research was
empirical in nature and involved studies of the capabilities and limitations of persons in their
ability to assess the inferential force of evidence. I studied these aspects of our behavior in a
variety of situations, some involving entirely abstract tasks and others that involved specific
substantive situations7.
All of my studies involved repeated trials under conditions as carefully controlled as I
could manage. What I observed, and recorded, from human subjects were numerical
assessments of the probabilistic judgments they made in response to patterns of evidence with
which they were provided. Such research is an example of what used to be called a "black box"
approach to the study of mental activities. In research on human probabilistic judgments the idea
was that we cannot get inside a person's head to see how he/she is evaluating evidence. All we
can do is to put something into a person's head and observe how this person will respond to it.
Analyses of what went in and what came out were, and still are, thought by many psychologists to
provide insights into the activities going on inside the brains [the "black boxes"] of the persons
whose inferential behavior is being studied. There are many questionable assumptions in such
analyses that are commonly recognized.
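One standard way of formalising the "inferential force" of evidence in such studies is the likelihood ratio, which multiplies the prior odds on a hypothesis to give the posterior odds. The sketch below is an illustration of that idea only; the numbers are invented, and conditional independence of the evidence items is assumed.

```python
def posterior_odds(prior_odds, likelihood_ratios):
    """Bayes' rule in odds form: each conditionally independent item of
    evidence contributes its likelihood ratio multiplicatively."""
    odds = prior_odds
    for lr in likelihood_ratios:
        odds *= lr
    return odds

prior = 1 / 4        # prior odds of 1:4 on the hypothesis (invented)
lrs = [3.0, 2.0]     # two items of evidence, each favouring the hypothesis

odds = posterior_odds(prior, lrs)
prob = odds / (1 + odds)  # convert odds back to a probability
print(odds, round(prob, 3))  # → 1.5 0.6
```

Comparing subjects' numerical assessments against a normative computation of this kind is one way the "what went in / what came out" analyses mentioned above can be carried out.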
Though I provided statistical assessments of the behavior of samples of individuals I
studied, I was also careful to provide analyses of judgments provided by individuals. As we know,
it so often happens that a statistical account of the members of a group actually describes the
behavior of no person in the group of persons being studied. In any case, as in Examples 1 and 2
above, I provided tangible evidence that could be examined by others.
I just gave three different examples of instances in which researchers are attempting to
identify general properties and relations of the phenomena of interest to them. All of these efforts
involve repeated observations of physical, behavioral or social phenomena. As I noted, variability
is inherent in these observations; this is one reason why they are repeated. Now contrast these
situations with those Michael mentioned that are associated with his (ii) above, such as in law or
history. We cannot play the world over again 1000 times in order to see on how many of these
occasions O. J. Simpson murdered his wife Nicole Brown Simpson, or the number of occasions
on which Mary Queen of Scots was witting of, or participated in, the murder of her husband Lord
Darnley. In many situations, such as intelligence analysis, we try to predict the occurrence of
future events such as: will terrorists launch an attack on the large crowd expected for the football
match at Wembley Stadium this coming Friday? If this event happens, and we hope it does not, it
will happen exactly once and under a unique set of conditions.
In all of these type (ii) situations the events of concern are unique, singular or one-of-a-kind. If they happened in the past, they did so on just one occasion. If they happen in the future they
will do so on just one unique occasion. It does happen, though, that in some of these situations we
can use statistical information of various kinds. In law, for example, we can use statistical
information concerning various properties of trace evidence, such as DNA samples, glass shards,
7 E. g. Schum, D. Inferences on the Basis of Conditionally Independent Data. Journal of Experimental
Psychology, Vol. 72, No.3, 1966, 401 - 409; Schum, D., Martin, A. Formal and Empirical Research on
Cascaded Inference in Jurisprudence. Law and Society Review, Vol. 17, 1982, 105 - 151.
and cat hairs. But most of the evidence we obtain comes in the form of observations or reports of
events that are also unique, singular or one-of-a-kind.
Michael notes that virtually all of my examples were of type (ii) situations. He's right of
course. I asked myself why I did not include more type (i) examples; I certainly would not have
had any trouble doing so. I guess there are two reasons. The first is that I tended to give
examples of situations I have studied most extensively. They have typically involved situations in
which we must deal with most or all of the forms and combinations of evidence I mentioned in my
classification of evidence in Figure 1 on page 24. Empirical studies of the sorts I have mentioned
in my three examples above rest primarily on various kinds of tangible evidence I described
above. It is true, of course, that researchers in Michael's type (i) situations also rely on
the testimony of colleagues concerning their reactions to the tangible results being discussed.
The second reason involves my expectations about the persons most likely to read my
thoughts about a science of evidence. My experience has been that I have received many
comments on books and papers I have written about evidence from persons whose inferential
work falls in Michael's category (ii). But I have received very few comments from persons in areas
of science whose work falls in his category (i). I know that William Twining has had this same
experience. I wonder how many persons in areas of science interested in finding
generalizations, laws, or theories would take seriously Mr. Grodman's claim that a science of
evidence is "the science of science" [see page 6]. Wigmore certainly did, but his interests were in the field of
law in which every case has unique elements.
Finally, Michael says that I seem "lost" in discussing matters concerning his (i) since I
gave no prior framework for discussing the issues bearing on these kinds of evidential reasoning.
I may have seemed lost, but I did not feel lost while writing my thoughts on evidence. In any
further version I may offer of a science of evidence I will certainly include a discussion similar to
the one I have just given above for the evidential characteristics of the sciences he mentions. But
Michael should then advise me about which one of the many "prior frameworks" I should adopt in
my discussion. I will return to Michael's important distinction between situations (i) and (ii) in other
comments I have about forms and combinations of evidence.
3) Probabilistic Conclusions
Michael has taken me to task in my discussion of the evidential reasons for probabilistic
conclusions for not acknowledging that many conclusions in science have now "attained the level
of certainty", as he puts it. He gives some examples in various areas of science that seem to fall
in my evidence category called "accepted facts" [Figure 1, page 6]. There are of course unique
events in other inferential contexts that have risen to the level of certainty. One example would be
that the twin towers of the World Trade Center in New York City were leveled in a terrorist action
on September 11, 2001. I admit that my statement [page 17] needs a bit of qualification along the
lines suggested by Michael's comments. I might better have introduced the evidential grounds for
the probabilistic nature of conclusions by saying something like: "Except for certain instances in
science and in other situations that can be identified, conclusions reached from evidence are
commonly probabilistic for the following reasons:…".
However, at the same time as I insert this hedge, I note two things. I wonder whether there
are any instances in science or elsewhere when our evidence is utterly complete. I also note that
what is regarded as "fact" today may not be so regarded in the future. I have always taken an
interest in what people through the ages have thought was the site of our mental functions. For
example, at various times these functions were thought to be centered in the heart. Until the
1600s the brain was regarded as a "nondescript mass of flesh glued inside the skull"8. It was
Thomas Willis [1621 - 1675] who first studied it as an independent organ and made so many
discoveries of its properties. This is why he is now called the father of neurology.

8 Zimmer, C. Soul Made Flesh: The Discovery of the Brain - And How It Changed the World. Free Press,
New York, NY, 2004, 175.
4) Poincaré on Classification in Science
Michael says that my reliance on Poincaré's thoughts about the importance of
classification in science could be confusing since he was probably making reference to situation
(i) and not to situation (ii). Recall that Michael's situation (i) refers to the generation and
empirical testing of theories, laws and generalizations in various areas of the natural and social
sciences. Situation (ii) refers to evidence-based reasoning about events in specific cases that are
of interest in such areas as law, history and so on. First, I would agree with Michael that
Poincaré was almost certainly thinking about situation (i) in his remarks. In fact, I can strengthen
Michael's argument by noting other things that Poincaré said. Speaking of theories in science, he
said9:
At first blush it seems to us that the theories last only a day and that
ruins upon ruins accumulate. Today the theories are born, tomorrow
they are the fashion, the day after tomorrow they are classic, the fourth
day they are superannuated, and the fifth day they are forgotten. But if
we look more closely, we see that what thus succumb are the theories,
properly so called, those which pretend to teach us what things are. But
there is in them something that usually survives. If one of them has taught
us a true relation, this relation is definitively acquired, and it will be found
again under a new disguise in the other theories which will successively
come to reign in place of the old.
All this acknowledged about Poincaré's apparent emphasis on situation (i), I fail to see why
what he said about the importance of being able to classify things is any less important, or even
unimportant, in situation (ii). I went to considerable lengths in my paper to show the "hidden
kinship" that Poincaré mentions regarding facts which "appearances separate" [page 23],
between tangible evidence in the diverse fields of theatre iconography and law, two areas that are
definitely in situation (ii). They are united by means of the common credibility issues they both
raise. I would have had no difficulty at all in finding similar "hidden kinships" among items of
testimonial evidence in many substantively different areas in which we have only situation (ii)
objectives.
One final comment here concerns Michael's related comment that my interests involve
study of a "science of evidence" rather than the study of science itself. I'll first reply that, as I said
on page 66, I have taken no position on Mr. Grodman's assertion [page 6] that the "science of
evidence" is also the "science of science". I gave reasons for my hesitation in agreeing with
Grodman's assertion. At the same time, in relating the terms "evidence" and "science" I found it
necessary to say how these two terms are used and how they have changed over the ages,
which I did in my section 3.0. I also offered a variety of comments throughout my paper on the
extent to which the study of evidence has characteristics that would allow us to refer to such
study as being a science. I will have more to add on this matter in my reply to Michael's next
comments.
9 Poincaré, J. H. The Value of Science [1905]. In: Gould, S. J. [ed.] Henri Poincaré, The Value of Science:
Essential Writings of Henri Poincaré. The Modern Library, New York, NY, 2001, 348 - 349.

5) On My Classification of Evidence and Its Basis
Michael then comments on what I said about credibility, how I described the rows in my
classification scheme in Figure 1, and what I said about inferential force and completeness. He
begins by saying that my account of credibility is suitable only for situation (ii) and not for situation
(i). He says further that my account of the rows in Figure 1 is applicable only to (ii) and not to (i).
Here I take issue with Michael. Mark you, I might not have stated matters as completely as I
should have, but I believe what I did say to be correct: Figure 1 accounts for evidence we have
in both situations (i) and (ii). First, I began by describing the tangible evidence that is so common
in Michael's situation (i); it is also common in situation (ii). Though I did not specifically mention
photo or computer images of the particle tracks and microtubules mentioned in my examples
above in discussing situation (i), they form tangible evidence that we can all examine. In the same
way, in situation (ii) we can examine Peg Katritzky's photos of mountebank drawings and photos
of Bullet III in records of the Sacco and Vanzetti trial [page 31] to see what they reveal.
I pause here to consider what tangible evidence provides in either situation (i) or (ii). Both
Michael and I agree that the scientists with (i) objectives must make their results and analyses
publicly available. In offering tangible evidence the scientist essentially says: "I will do my best to
make you privy to the same things I have observed so that you can draw your own conclusions
about whether my explanation of them is correct". The same statement can be made in situation
(ii) for the mountebank photos and Bullet III. Now, in my Example 3 above, concerning
experimental psychology in situation (i), I could not show you any photos, other images or objects
concerning what went on inside my subjects' heads as they made their probabilistic judgments for
the reason that I had no such objects or images myself. But I did show interested persons
tangible records of the actual numerical responses my experimental subjects were asked to
provide. So, I could have made the same statement that I just mentioned above: "Here is what I
observed, see for yourself whether my conclusions about them were correct".
As far as credibility issues are concerned regarding tangible evidence, I believe the same
questions are asked in situations (i) and (ii) that I mentioned on page 25. Regarding authenticity
questions in situation (i), in most cases [happily] scientists will not believe their colleagues to be
publicly propagating inauthentic tangible evidence. I regard the photos I have seen of particle
tracks and microtubules to be authentic and not contrived. None of the tangible accounts of my
subjects' probabilistic responses were ever questioned as far as their authenticity was concerned.
You have my word that I did not make up the data I reported. Unfortunately, every now and then
there are cases [frequently well-publicized] in which scientific data are questioned on authenticity
grounds. A current example involves the stem cell research reported by a Korean scientist. In
situation (ii) I have never even thought about questioning the authenticity of the mountebank
drawings Katritzky has shown us. But there certainly has been much interest in the authenticity of
Bullet III in the Sacco and Vanzetti trial and whether it was one that came from the body of the
slain payroll guard.
As far as reliability is concerned, this credibility attribute is certainly evident in situation (i).
What replication allows is grading the extent to which we get the same tangible results over and
over again. I mentioned above the two major sources of variability that accompany investigations
in situation (i). This is one reason why we use statistical indices to grade the extent of this
variability. I agree with Michael when he says that replicability is one of the keystones of
credibility in situation (i).
Concerning accuracy, I think such concern is also evident in situation (i). There are
continuing efforts to increase the accuracy with which we can observe natural phenomena. The
microtubules I mentioned above in my second example in situation (i) provide a good example. In
the past, the stains used in the preparation of neurons for their microscopic examination
obliterated the internal molecular structure of microtubules. Now, different stains are used and we
can get a closer look at the orderly arrangement of the tubulin molecules in microtubules and the
changes wrought in them as a result of polymerization and depolymerization. Another example of
course involves the drastically increased resolution provided by the Hubble space telescope in
making cosmological observations.
Regarding the other rows in my evidence classification in Figure 1, it may not be so
important for scientists in situation (i) to be concerned about the various species of testimonial
evidence I described in Rows 2 and 3. However, Michael makes the point that the people making
statements about what they have observed in situation (i) are "experts" and not ordinary people,
such as bystanders, who report on what they have observed. Here I should have mentioned that
there are rules in our legal system, such as our FREs 702 - 705, that concern opinion evidence
given by experts in scientific, technological or other areas of specialized knowledge. Among other
things, it must be demonstrated that an expert witness is indeed qualified to provide information in
these areas.
Just recently I received some very valuable comments from a Dutch Appeals Court
Judge, John van Voorhout. He argues that, in his experience, all expert opinion evidence is
equivocal in nature. I had to agree with him and mentioned that it could be argued that all opinion
evidence, given by experts or anyone else, is, or should be, stated equivocally. The argument
goes something like this. Here is a person [expert or otherwise] who asserts that he or she has
evidence about events A, B, and C that allowed the inference, or the opinion, that event D also
occurred, as reported unequivocally in testimony D*. We are entitled to ask: "Give us reasons
why you believe that the occurrence of events A, B, and C has made the occurrence of event D
necessary or certain." What I should do in my Figure 1 is to say that equivocal testimony has the
same three grounds or bases as unequivocal testimony: direct observation, observation at
second hand, or opinion based on other evidence.
Finally, I notice that Michael did not inform me about any form of evidence that my Figure
1 does not include. What is certainly true, and what I have recognized, is that persons in situations
(i) and (ii) will make use of different mixtures of the forms of evidence I have listed in my Figure 1.
As I have noted above, tangible evidence will predominate as grounds for the inferences made by
persons in situation (i). I was asked on one occasion why I do not have a category of evidence
called "statistical evidence". The reason I gave is that all statistical evidence involves counts of
either observed tangible evidence or some form of testimonial evidence, such as encountered in
survey research. Such statistics might also employ counts of missing tangible or testimonial
evidence. In short, I do not believe that statistical evidence involves evidence my categorization
of evidence does not already cover.
6) Concerning the Arcs or Links in Inference Networks
Here I have no disagreement with what Michael has said regarding the forms of inference
networks I showed in my Figure 2 [page 35]. I just have a few things to add about the
interpretation of the arcs [arrows] in these two figures. It turns out that the interpretation of the
arcs in both figures is a matter for controversy. In the Wigmore case in Figure 2A, Wigmore
himself said the arcs represented lines of probative [inferential] force. William Twining says the
arcs here say: "tends to support or tends to negate". I have always said that the arcs on a
Wigmore chart represent probabilistic relevance relations, since this is what the arguments being
charted are intended to capture. There is no temporal or causal intent here; argument charting is
not the same thing as telling a story involving "chains of evidence". It would be a violent non
sequitur to argue that the existence of one item of evidence allows an inference about the
existence of another item of evidence.
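A small numerical sketch may make the notion of a probabilistic relevance relation concrete. The numbers and the function below are my own illustrative assumptions, not anything taken from Wigmore's charts or from the paper: evidence E is relevant to hypothesis H exactly when learning E changes the probability of H, and the likelihood ratio grades its inferential force.

```python
# Illustrative sketch (assumed numbers, not from any charted argument):
# an arc from E to H marks probabilistic relevance, i.e. P(H | E) != P(H).

def posterior(prior, p_e_given_h, p_e_given_not_h):
    """Bayes' rule for one hypothesis H and one item of evidence E."""
    joint_h = prior * p_e_given_h
    joint_not_h = (1 - prior) * p_e_given_not_h
    return joint_h / (joint_h + joint_not_h)

prior = 0.30
p_post = posterior(prior, p_e_given_h=0.80, p_e_given_not_h=0.20)
# The posterior differs from the prior, so E is relevant to H; the
# likelihood ratio 0.80 / 0.20 = 4 grades the force of that relevance.
print(round(p_post, 3))
```

When the two likelihoods are equal the posterior collapses back to the prior, which is exactly the case of an arc that should not appear on the chart at all.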
Arguments continue about the meaning of the arcs in the process models I illustrated in
Figure 2B. Some argue that they indicate causal linkages and some argue that they only indicate
avenues of probabilistic influences or dependencies. But Michael is correct in saying that we will
usually see process models and not Wigmore charts in situation (i).
7) Arguments Against the Idea of a "Science" of Evidence.
Michael argues against the use of the word "science" to characterize the discipline of
evidence. He first notes, as I did in my paper, that some persons not involved in what is normally
called science might be antagonized by such a description. I confess that I have heard about this
possible antagonism but frankly wonder why it should exist. I pose another case of the use of the
term science: the well-established field of computer science. My strong guess is that
anyone Michael could name whose work is in no way associated with situation (i), almost
certainly now uses computers for some purpose in their work, even if just for word-processing. My
equally strong guess is that they are not antagonized by the use of the term computer science.
Why should they be?
Here are two persons who would probably say that their work is not associated with
science, at least not with the activities in situation (i). Consider Marc Geller, who studies the time
at which the Sumerian language became extinct, or Peg Katritzky, who studies theatre
iconography. Why would anyone say that their work is associated with, or is just a subset of
topics in, computer science, just because they use a PC [or a MAC] to type their papers? I doubt
that either of them would object to the use of the term computer science.
People in all walks of life use evidence every day of their lives, including those in
situations (i), (ii), and areas possibly not covered by either of these situations that Michael has
described. My own view is that persons in any area of (ii) or elsewhere should be neither
intimidated in any way nor antagonized by use of the term evidence science. No one I know of,
least of all me, will ever claim that their work is a subset of evidence science just because their
work involves the use of evidence. My view, which I believe consistent with the UCL objectives for
the study of a science of evidence, is that it is a field of study that anyone can draw upon for
insights about evidence, and to which anyone can contribute their own insights if they choose to do so.
Thus, folks like Marc and Peg can draw upon evidence science in much the same way that they
now draw upon contributions from the field of computer science.
But Michael has a few deeper concerns about use of the term "evidence science". He
first notes my reference to the three concepts of science noted by Carnap, saying that there are
other concepts associated with science, such as explanation and understanding, that Carnap
does not mention. I have read over Carnap's account of the three concepts he discusses and find
nowhere in his writings that he considers his listing to be exhaustive. And it is certainly true that
the two concepts Michael mentions are just as important in situation (ii) as they are in situation (i).
But Michael says that there is no ontological process that a science of evidence seeks to explain.
Rather, studies of evidence can only be epistemological in nature concerning what he calls "rules
of the game".
According to my encyclopedia of philosophy10, ontology refers to the investigation of
existence or being. Common questions asked in such investigations are: What exists?, and What
sorts of things are there? I could first argue that many of my studies of evidence have had
ontological objectives. For many years now I have had an interest in studying how many kinds of
individual evidence items, and combinations of them, exist when we do not consider their
substance or content. I have never argued that my listings are final or exhaustive. I have changed
my mind about the categories I have identified in the past, and I will probably do so in the future.
There is so much still to be learned about evidence and its properties and uses. Such research
can involve ontological objectives in the sense I have just mentioned.
I'll turn now to what Michael says about the methods of evidence science being
epistemological rather than ontological. In this connection he notes that I said [page 66] that the
testing in evidence science "seems to be logical rather than empirical in nature". I'll first note that I
discussed [pages 14 -15] the argument offered at the UCL meeting on 7 June, 2005 that a
science of evidence would be contained within the field of epistemology. I also mentioned that
any science [even those associated with situation (i)] involves epistemological issues. And, later
[pages 44 - 48], I went on to show how I used [or possibly misused] the standard account of
knowledge in epistemology in identifying attributes of the credibility of witnesses in reporting what
they [allegedly] observed.

10 Craig, E. The Shorter Routledge Encyclopedia of Philosophy. Routledge, Oxford, 2005, 756.
Returning to my view of testing in evidence science, on pages 59 - 62 I mentioned how
the use of mathematical expressions for the inferential force of evidence can lead to the telling of
alternative stories about the force of various combinations of evidence when the probabilistic
ingredients of these expressions are varied. As I mentioned, exercises like this are frequently
called "sensitivity analyses". If we agree on the structure of an evidence-based reasoning
situation [using either of the structural devices I mentioned in Figure 2 on page 35] we can
exercise mathematical expressions appropriate to these structures, such as those stemming from
Bayes' rule, to see how they will respond to variations in probabilistic ingredients applied to the
arcs in these models. I also mentioned the heuristic merit of such mathematical investigation
[page 65]. These expressions may suggest questions we might not have thought of asking if we
had not done such analyses. The question is: how do we test the results provided by these
models?
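A toy sensitivity analysis of the kind just described can be sketched in a few lines. The two-stage chain and all of its probabilities below are my own illustrative assumptions, not the actual models or expressions from pages 59 - 62: a witness gives testimony E* that event E occurred, E bears on hypothesis H, and we vary the "probabilistic ingredients" on the testimonial arc to see how the posterior for H responds.

```python
# Assumed toy model: chain H -> E -> E* (witness testimony about E).
# Varying the witness's hit and false-positive rates is a crude
# sensitivity analysis of the posterior probability of H.

def posterior_h_given_testimony(prior_h, p_e_given_h, p_e_given_not_h,
                                hit_rate, false_positive_rate):
    """P(H | E*), obtained by total probability over the unobserved E."""
    # Likelihoods of the testimony under H and under not-H.
    p_estar_given_h = (p_e_given_h * hit_rate
                       + (1 - p_e_given_h) * false_positive_rate)
    p_estar_given_not_h = (p_e_given_not_h * hit_rate
                           + (1 - p_e_given_not_h) * false_positive_rate)
    joint_h = prior_h * p_estar_given_h
    joint_not_h = (1 - prior_h) * p_estar_given_not_h
    return joint_h / (joint_h + joint_not_h)

# Exercise the model: a perfectly credible witness vs. a mediocre one.
for hit, fp in [(1.0, 0.0), (0.7, 0.3)]:
    p = posterior_h_given_testimony(0.5, 0.9, 0.1, hit, fp)
    print(f"hit rate {hit}, false-positive rate {fp}: P(H | E*) = {p:.3f}")
```

Exercising the expression this way shows, for instance, that degrading the witness's credibility weakens the inferential force of E* on H, and that a witness who reports E* with the same probability whether or not E occurred contributes no force at all, which is the heuristic merit such exercises can have.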
In many [most?] areas in situation (i) mathematical models are routinely used to guide
research in some given area and to suggest questions we might ask of nature. If the questions
are answerable by nature, we design empirical experiments to see whether nature will behave in
the way our models say she will. But model-based or other studies of evidence, such as I
mentioned above, do not lend themselves to such replicable experimentation. So, in the absence
of nature answering our questions about evidence, we do the next best thing, which is to ask
ourselves whether what we have said about evidence makes sense and is a complete account of
what we believe is involved in the evidential situation we are studying. This is what I meant by
logical rather than empirical testing.
I note that Michael did not comment on my discussions of how closely a science of
evidence matches the definition of science given by the OED [pages 22 - 23; 65 - 66]. Most of his
comments are based on his distinction between situations (i) and (ii) as he has defined them and
which he says I failed to incorporate in my analysis of a science of evidence. I have not yet
commented upon this distinction in hopes of giving Michael every benefit of doubt. I'll just note
here that not everyone agrees with this distinction. Consider the view of the noted historian
Edward Hallett Carr11. He begins by saying:
It is alleged that history deals with the unique and particular, and
Science with the general and universal.
This is what Michael has said. But Carr goes on to say that adoption of this view in history would
lead to a "philosophical nirvana, in which nothing that matters could be said about anything". He
argues that historians are not really interested in the unique, but in what is general in the unique.
I may be wrong here [and my colleagues William Twining and Terry Anderson will let me
know], but Carr's comment about history brings to mind the concept of stare decisis in the field of
law. This Latin phrase says: to abide by or adhere to settled cases. It is true that all cases in law
have unique evidence. However, stare decisis says that when a court has laid down a principle of
law as applicable to a certain pattern of facts [evidence], it will apply to all future cases in which
the facts are "substantially the same". This sounds to me as though the field of law also considers
what is general in the unique.
I was honestly prepared to give up on the idea of there being a "science" of evidence
when I first began to read Michael's comments. However, I will remain obstinate and still cling to
the views expressed in my thoughts about a science of evidence, even though I have taken so
many of Michael's thoughts to heart. But I have one more matter to discuss that has caused me
some embarrassment.

11 Carr, E. H. What is History? Random House, New York, NY, 1961, 79 - 83.
8) Sources of Early Oxford Scholarship
Michael correctly notes the contributions of Islamic scholars to the work of the Oxford scholars
I mentioned on pages 9 - 10. I had earlier remarked on the contributions of these Islamic scholars
and said that they were so often slighted in Western accounts of the history of science. Michael's
comment suggests that I slighted them myself in what I have written about the emergence of the
concept of evidence. I said that my discussion of the emergence of evidence and science would
be "embarrassingly brief" [page 8]; here is one example of my embarrassment. I should have
mentioned how scholars like Ibn Sina [Avicenna] and Ibn Rushd [Averroes] kept the wick turned
up as far as the earlier contributions of the Greeks are concerned. And I should also have
mentioned the frequently overlooked empirical research that persons like Ibn al-Haytham
[Alhazen] performed. I first discovered, years ago, how we can credit persons like Alhazen and
Abu Al-Kindi for their works concerning how the human eye works. As I did note, I have never
found any specific writings of early scholars in the Middle East, or elsewhere, on the use of
evidence. But this does not excuse my not writing more about their contributions in science and in
other areas.
9) Conclusion
I will end where I started by thanking Michael for his thoughtful and extensive comments
on my thoughts about a science of evidence. In my past experience I have written what I thought
would be detailed and helpful comments about the works of others. In most instances I never
heard one way or the other about the writers' reactions to my comments or whether, indeed, they
had read my comments at all. I could not let Michael think that I did not take him very seriously on
the comments he made about my thoughts about a science of evidence.