Expected Utility Theory and Deviations from it in Medical Decision Making
Margaret Sala
Introduction
This paper concerns expected utility theory and deviations from it in medical decision-making. Expected utility theory is useful in medical decisions: it gives patients and physicians the tools to make optimal choices about tests and treatments when risk is involved. However, deviation from expected utility theory is very common. Prospect theory, framing effects, and heuristics can account for these deviations from the expected utility model in medical decision-making.
Literature Review/Relevance to Other Topics
Expected utility theory is used to assess medical risk. Expected utility is the utility that is expected on average, that is, the utility of each possible outcome weighted by its probability. To use expected utility theory, we must understand probability and its tradeoffs with the disutility of possible outcomes (Gurmankin & Baron, 2005). Utility in medical contexts is often expressed in Quality Adjusted Life Years (QALYs); one QALY is the total utility of a year in normal health (Baron, 2007). Medicine shows how expected utility is better in the long run. For example, if many people choose to have a surgery with a .00000001 probability of death, a few people will die, but if the utility of the medical benefit is high enough, the surgery is still worth having (Baron, 2007): the loss to some is more than compensated by the gain to many others (Baron, 2007).
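As a rough illustration of this long-run argument, the calculation can be sketched in a few lines of Python; the QALY figures below are hypothetical, not taken from Baron.

# Illustrative sketch of the long-run expected utility argument (hypothetical numbers).
p_death = 0.00000001        # assumed probability of dying from the surgery
qaly_gain = 2.0             # assumed QALYs gained by a patient who survives
qaly_lost_if_death = 30.0   # assumed remaining QALYs lost by a patient who dies

expected_gain = (1 - p_death) * qaly_gain - p_death * qaly_lost_if_death
print(f"Expected gain per patient: {expected_gain:.6f} QALYs")

n_patients = 1_000_000
print(f"Expected deaths among {n_patients} patients: {n_patients * p_death:.2f}")
print(f"Expected total QALYs gained: {n_patients * expected_gain:,.0f}")

Even though a death is possible, the expected QALY gain per patient is positive, and over a million patients the expected benefit dwarfs the expected number of deaths.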
There are many strengths to the use of expected utility theory in medicine. Expected utility theory allows patient values to be integrated with medical facts (Ubel & Loewenstein, 1997); that is, it combines information that only the patient possesses with probabilistic information (Ubel & Loewenstein, 1997). When medical decisions are made with expected utility theory, the theory's axioms (dominance, transitivity, independence, etc.) are satisfied (Ubel & Loewenstein, 1997). Furthermore, expected utility models closely resemble linear models, which have proven successful in judgment and prediction (Ubel & Loewenstein, 1997). Expected utility theory can also accommodate two conflicting probability estimates, by running the analysis with one estimate and then with the other (Ubel & Loewenstein, 1997).
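A minimal sketch of what such an analysis might look like, assuming hypothetical treatment options, probabilities, and patient-elicited utilities (none of these numbers come from Ubel and Loewenstein):

# Hypothetical decision analysis: probabilities come from medical evidence,
# outcome utilities (0 = worst, 1 = best) are elicited from the patient.
options = {
    "surgery": [(0.90, 0.95),        # (probability, patient utility): full recovery
                (0.09, 0.40),        # lasting complication
                (0.01, 0.00)],       # death
    "watchful waiting": [(0.70, 0.75),   # condition stays manageable
                         (0.30, 0.30)],  # condition worsens
}

def expected_utility(outcomes):
    return sum(p * u for p, u in outcomes)

for name, outcomes in options.items():
    print(f"{name}: expected utility = {expected_utility(outcomes):.3f}")
print("Higher expected utility:", max(options, key=lambda o: expected_utility(options[o])))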
Although expected utility theory provides a normative model for medical decision-making, it is important to note that deviation from the normative model is very common. Such deviation can be explained by prospect theory, framing effects, differences between experienced, predicted, and decision utilities, and several biases and heuristics.
Prospect theory is a descriptive theory of decision utility (Baron, 2007); it predicts deviations from expected utility theory (Baron, 2007). Prospect theory explains why, for medical risks, subjects are more affected by differences among high probabilities than by differences among small ones (Gurmankin & Baron, 2005).
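One common way to describe this is with a probability weighting function. The sketch below uses a one-parameter weighting function of the sort used in cumulative prospect theory, with an illustrative parameter value; it is not drawn from the studies cited here, but it shows how the same probability difference can carry different decision weight depending on where it falls.

# Sketch of a prospect-theory probability weighting function (illustrative gamma).
def w(p, gamma=0.61):
    return p**gamma / (p**gamma + (1 - p)**gamma) ** (1 / gamma)

# The same 0.05 change in probability gets different decision weight:
print("0.45 -> 0.50:", round(w(0.50) - w(0.45), 3))  # middle of the range
print("0.00 -> 0.05:", round(w(0.05) - w(0.00), 3))  # near impossibility
print("0.95 -> 1.00:", round(w(1.00) - w(0.95), 3))  # near certainty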
Framing effects are also very common in medical decision-making. Gigerenzer found that physicians are better able to judge the probability that a woman has breast cancer when the data are presented in a frequency format rather than a probability format (Galanter & Patel, 2005).
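A small sketch (with illustrative screening numbers, not Gigerenzer's data) shows why the frequency format is easier to reason with:

# The same hypothetical screening problem translated into natural frequencies.
prevalence, sensitivity, false_pos_rate = 0.01, 0.80, 0.10

n = 1000                                         # imagine 1,000 women
with_cancer = prevalence * n                     # 10 have cancer
true_pos = sensitivity * with_cancer             # 8 of them test positive
false_pos = false_pos_rate * (n - with_cancer)   # 99 healthy women also test positive

share = true_pos / (true_pos + false_pos)
print(f"Of {true_pos + false_pos:.0f} positive mammograms, only {true_pos:.0f} "
      f"({share:.0%}) belong to women who actually have cancer.")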
Medicine also shows that experienced, predicted, and decision utilities differ. Sieff, Dawes, and Loewenstein (1999) asked people who were tested for HIV to predict how they would feel five weeks after the test if the results were positive and if they were negative. Those with positive results turned out to be less unhappy than they had predicted, and those with negative results turned out to be less happy than they had predicted. Read and Loewenstein (1999) assessed whether people's willingness to accept pain (WTAP) depended on whether they had felt the pain a moment or a week earlier, and on whether they were focused on or distracted from the pain; they also measured the WTAP of people who had not experienced the pain at all. They found that the distraction group displayed less WTAP than the sensation-focused group immediately after experiencing the pain, but greater WTAP a week after accepting it. Read and Loewenstein argued that sensation-focus reduces the experience of pain, while distraction reduces the unpleasantness of remembered pain, and that remembering pain might sometimes be beneficial for future decision-making. People who had never experienced the pain also displayed greater WTAP. Read and Loewenstein's experiment shows how the predicted disutility of pain is influenced by several factors, and how it thus differs from the experienced disutility of pain.
Doctors often fall prey to the availability heuristic, overestimating the frequency of easily recalled events and underestimating the frequency of events that are difficult to recall (Galanter & Patel, 2005). For example, psychological illnesses such as bipolar disorder have recently been overdiagnosed because they have been discussed so widely over the last few years (Galanter & Patel, 2005).
Doctors also tend to fall prey to the representativeness bias, diagnosing a disorder based on how similar the patient's presentation is to the description of the disorder, rather than on how likely the person is to have the disorder given its frequency of occurrence in the population (Galanter & Patel, 2005). In other words, physicians ignore base rates when making diagnostic predictions (Galanter & Patel, 2005). For example, Eddy (1982) asked physicians to estimate a young woman's risk of having breast cancer given that she had a positive mammogram. Physicians were provided with the base-rate probability of a woman having breast cancer, the sensitivity of the mammogram, and its false positive rate. They failed to use Bayes' theorem (the normative way of solving this problem) and thus overestimated the probability that the woman had breast cancer (Galanter & Patel, 2005).
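A short sketch of the normative Bayesian calculation, using illustrative figures in the spirit of the mammography problem rather than Eddy's exact numbers, shows how large the overestimate can be:

# Bayes' theorem for a hypothetical mammography problem (illustrative figures).
p_cancer = 0.01              # base rate
p_pos_given_cancer = 0.80    # sensitivity
p_pos_given_healthy = 0.10   # false positive rate

p_pos = p_pos_given_cancer * p_cancer + p_pos_given_healthy * (1 - p_cancer)
p_cancer_given_pos = p_pos_given_cancer * p_cancer / p_pos

print(f"P(cancer | positive mammogram) = {p_cancer_given_pos:.1%}")  # about 7%
# Answering something close to the 80% sensitivity is the base-rate neglect described above.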
The confirmation bias occurs when a doctor generates a hypothesis and then seeks only evidence that confirms it (Galanter & Patel, 2005).
The individual versus statistical heuristic is also very common: saving a single life is seen as correcting 100% of a problem, while a small reduction in the death rate is seen as only a small correction of a problem (Baron, 2007). Redelmeier and Tversky (1990) invited physicians to participate in a medical decision-making study composed of three experiments. In the first experiment, physicians were presented with problems concerning either one patient or a group of patients. In the second experiment, doctors were asked to compare analogous problems concerning either one patient or a group of patients. In the third experiment, undergraduate students were randomly assigned to interpret either an individual or an aggregate version of a problem. Both physicians and laypeople gave more weight to problems concerning an individual patient than to the same problems concerning a group.
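A simple arithmetic sketch (with hypothetical numbers) shows why the two framings can be normatively equivalent even though they are treated very differently:

# An "individual" rescue and a "statistical" rate reduction can save
# the same expected number of lives (hypothetical numbers).
population = 10_000
baseline_death_rate = 0.0050    # 50 expected deaths
reduced_death_rate = 0.0049     # after a small program-wide improvement

statistical_lives_saved = population * (baseline_death_rate - reduced_death_rate)
identified_lives_saved = 1      # one named patient saved with certainty

print(f"Statistical lives saved: {statistical_lives_saved:.1f}")
print(f"Identified lives saved:  {identified_lives_saved}")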
The omission versus commission bias favors harms of omission over equivalent harms from action (Baron, 2007). For example, people do not like the idea of causing death by a vaccine (Ritov & Baron, 1990). This is very relevant to medicine: a patient would probably feel worse about losing their sight because they had a surgery than about losing it because they declined the surgery.
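A brief expected-harm comparison in the spirit of the vaccination problem (the rates below are hypothetical, not the figures Ritov and Baron used) makes the normative tradeoff explicit:

# Expected-harm comparison for a hypothetical vaccination decision.
per_10k = 10_000
disease_deaths_if_unvaccinated = 10   # per 10,000 children (harm of omission)
vaccine_deaths = 5                    # per 10,000 children (harm of action)

print(f"Expected deaths per {per_10k} without the vaccine: {disease_deaths_if_unvaccinated}")
print(f"Expected deaths per {per_10k} with the vaccine:    {vaccine_deaths}")
# Vaccinating halves the expected deaths, yet many respondents still decline
# because a death caused by the vaccine feels worse than one caused by the disease.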
Conclusion
We have seen that many topics from this course are relevant to medical decision making, including:
• Expected Utility Theory
• Prospect Theory
• Framing Effects
• The difference between experienced, predicted, and decision utilities
• Biases and Heuristics: the availability heuristic, representativeness bias, confirmation bias, individual versus statistical heuristic, and omission versus commission bias
References
Baron, J. (1988, 1994, 2000, 2008). Thinking and Deciding. New York: Cambridge University Press.
Chapman, G. B., & Sonnenberg, F. A. (Eds) (2000) Decision making in health care. New York:
Cambridge University Press.
Eddy, D. M. (1982). Probabilistic reasoning in clinical medicine: Problems and opportunities. In D. Kahneman, P. Slovic, & A. Tversky (Eds.), Judgment under uncertainty: Heuristics and biases (pp. 249-267). Cambridge University Press.
Galanter, C. A., & Patel, V. L. (2005). Medical decision making: A selective review for child psychiatrists and psychologists. Journal of Child Psychology and Psychiatry, 46(7), 675-689.
Gigerenzer, G., Gaissmaier, W., Kurz-Milcke, E., Schwartz, L. M., & Woloshin, S. (2007). Helping doctors and patients make sense of health statistics. Psychological Science in the Public Interest, 8(2).
Gurmankin, A. D., & Baron, J. (2005). How bad is a 10% chance of losing a toe? Judgments of
probabilistic conditions by doctors and laypeople. Memory and Cognition, 33, 1399-1406.
Read, D. & Loewenstein, G. (1999). Enduring pain for money: Decisions based on the perception and
memory of pain. Journal of Behavioral Decision Making, 12, 1-17.
Redelmeier, D. A., & Tversky, A. (1990). Discrepancy between medical decisions for individual patients
and for groups. New England Journal of Medicine, 322, 1162-1164.
Ritov, I., & Baron, J. (1990). Reluctance to vaccinate: omission bias and ambiguity. Journal of
Behavioral Decision Making, 3, 263-277.
Sieff, E.M., Dawes, R.M. & Loewenstein, G. (1999). Anticipated versus actual reaction to HIV test
results. American Journal of Psychology, 112 (2), 297-311.
Ubel, P. & Loewenstein, G. (1997). The role of decision analysis in informed consent: Choosing between
intuition and systematicity. Social Science and Medicine, 44, 647-656.