Newcomb’s problem

Newcomb’s problem is a puzzle, not a paradox. It is a decision problem: you will be offered a hypothetical choice and asked what you would choose if it were offered. As
with most little artificial decision problems, the specification of the problem will make
clear, first, what you should believe, and to what degree, and second, what your
preferences are between the possible outcomes of your choice.
Let me present the problem, and then we can see why it is a puzzle.
You are presented with two boxes, an opaque box that contains either a million
dollars or nothing, and a transparent box that contains, as you can see, a thousand
dollars. Your options are (ONE) to choose the contents of the opaque box alone, or
(BOTH) to choose the contents of both of the boxes. The money is already in the
opaque box, or not, so your choice has no influence on how much money is there.
But here is how it was decided whether the million dollars was placed in the box: a predictor (perhaps a cognitive psychologist who has access to the results of some tests you took before you heard about this problem) has predicted, based on her knowledge of your psychological profile, what choice you would make, and put the money in the box if and only if she predicted that you would choose only one box. She has done this experiment many times with varied subjects, and has proved right about 90% of the time. About half the subjects choose one box, and the predictor is right 90% of the time both with those who choose one box and with those who choose both. What should you do?
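To see the pull of each answer in numbers, here is a minimal sketch (in Python; the payoffs and the 90% accuracy are just the figures stated above) of the expected dollar payoff of each option, conditional on your choosing it:

    ACCURACY = 0.9                      # the predictor is right 90% of the time
    MILLION, THOUSAND = 1_000_000, 1_000

    # Conditional on choosing one box, the predictor probably (0.9)
    # foresaw that and filled the opaque box.
    ev_one_box = ACCURACY * MILLION                       # $900,000

    # Conditional on choosing both, she probably (0.9) foresaw that too
    # and left the opaque box empty; the thousand is yours either way.
    ev_both_boxes = (1 - ACCURACY) * MILLION + THOUSAND   # about $101,000

Conditional on one-boxing you expect $900,000; conditional on two-boxing, about $101,000. Whether these conditional expectations are the right quantities to maximize is exactly what is at issue in what follows.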
We will try to develop, first, some arguments for and against each choice, and then some theory to help explain the reasons for each choice.
Here is a hypothesis about how the predictor is able to be so accurate that might come to mind after the discussion of time travel: she traveled into the future and observed you making the choice. Then she went back to a time earlier than the time of choice and either put the money in the box or not, depending on what you did (or rather what you will do; tenses are hard to use in time travel stories). Or, slightly simpler: she is clairvoyant, and observed you, through a slightly cloudy crystal ball, making your choice. Both time travel and clairvoyance involve backward causation (events at a later time cause events at an earlier time). Whether backward causation is possible at all is controversial, and consideration of
it complicates the problem, but the problem arises without that, so let’s assume no
backward causation. Suppose that the right hypothesis about the success of the predictor
is that she is a psychologist who has examined a battery of tests you took before you ever
heard about this problem. It turns out (let’s suppose) that evidence shows that behavior in
the Newcomb problem is reliably predictable from the psychological profile revealed in
your tests. Even if this is not possible today, it is not a very unrealistic possibility.
In discussing time travel, the distinction was made between metaphysical and epistemic
possibility. There is a closely related distinction between causal and epistemic
dependence. The Newcomb problem is a situation in which the predictor’s prediction, and the placing of the money in the box (or not), are causally independent of your choice (since they take place earlier, and we are ruling out backward causation), but epistemically dependent on it (since your choice gives you evidence about the predictor’s choice, and about whether there is money in the box). But is it epistemic or causal independence that
is relevant to choice?
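The epistemic dependence can be put in numbers. Here is a small sketch (Python; the even split of subjects and the 90% accuracy are from the problem statement, and the prior is derived from them):

    P_CHOOSE_ONE = 0.5   # about half the subjects choose one box
    ACCURACY = 0.9       # the predictor is right 90% of the time, for either group

    # Before you choose, the money is in the box just in case a one-box
    # choice was predicted:
    p_money_prior = P_CHOOSE_ONE * ACCURACY + (1 - P_CHOOSE_ONE) * (1 - ACCURACY)
    # = 0.5

    # Your choice changes nothing in the box, but it changes your evidence:
    p_money_given_one_box = 0.9   # among one-boxers, the prediction was right 90% of the time
    p_money_given_both    = 0.1   # among two-boxers, money is there only when she was wrong

Causally, the chance that the money is there is already settled; epistemically, your estimate of it moves from 0.5 to 0.9 or 0.1 depending on what you choose.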
The outcome of your decision depends on two things: (1) what you decide to do, and (2)
what the facts are about the environment in which you choose. In the case where the
facts about the environment are independent of your choice, you can reason by division
of cases. The money is in the box, or not. If it is there, you prefer the contents of both
boxes (since you get a million plus a thousand, rather than just a million), and if the
money is not in the box, you also prefer the two-box choice (since it gets you a thousand
rather than nothing). If it is causal independence that is relevant, this is a good argument
for choosing both boxes, but if it is epistemic independence that matters, then it is not a
good argument. The debate about the relevance of the two kinds of dependence brings out questions about free choice and about the agent’s perspective: the spectator view vs. the agent view.
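The division-of-cases argument can be made mechanical. Here is a small sketch (Python; the payoff function is a hypothetical encoding of the payoffs stated above), with the state of the box held fixed in each case:

    MILLION, THOUSAND = 1_000_000, 1_000

    def payoff(choice, box_filled):
        # Dollar payoff, given the choice and the (already settled) state of the box.
        opaque = MILLION if box_filled else 0
        return opaque if choice == "one" else opaque + THOUSAND

    # In each case, money there or not, taking both boxes pays exactly
    # $1,000 more: the dominance argument for two-boxing.
    for box_filled in (True, False):
        assert payoff("both", box_filled) == payoff("one", box_filled) + THOUSAND

The dominance holds case by case; the evidentialist’s reply is that conditioning on the choice changes the probabilities of the cases, as in the earlier sketches.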
Imagine that you are a passive observer of yourself: two parts of you, the agent and the observer. The agent is deciding whether to take one box or both. The observer is hoping
that the agent picks just one because she hopes the agent is the type of person who
chooses just the one box. But the agent may still think it is more rational to choose both.
You learn about yourself by finding out what you decide to do in a difficult situation, say one requiring unusual bravery. In that kind of situation, you are partly learning what kind of person you have been all along and partly making yourself into a certain kind of person. But the Newcomb problem is a purer case, in which your action teaches you something about what kind of person you have been. The puzzle brings out a tension
between the spectator and the actor.
It is unsettling to think that someone else knows what you are going to do when you have
not yet decided what to do, but we recognize that this happens. You have a job offer,
with many attractions, but also with risks and drawbacks. You are tempted, but still
unsure whether to accept it. Your friend says to someone, with considerable assurance,
“In the end he won’t take it.” This is a familiar phenomenon, but the Newcomb problem
is a striking and extreme case.