Talk Outline

What Fallacies are not (Outline)
Frank Zenker (frank.zenker@fil.lu.se)
1. Two Examples of experimentally deployed Reasoning Tasks
 Introduce the terms to be used (standard S, task T, results R, context C)
 Intended use: “To deploy S vis-à-vis T/I in C yields R under normal conditions.”
 Discussion of the “Linda Problem” (Conjunction Fallacy) and the Prevalence Error (Base-Rate
Fallacy) as examples of classical reasoning tasks with well-reproducible (“robust”) results. In each
case, subjects’ answers generally (though not exclusively) violate the normative standard.
 Point out that experimenters must assume some normative standard or other, but always exactly
one, for the experiment to be meaningful (“No standard, no fallacy”)
 Claim that the standard remains implicit, and that successfully understanding the task requires
experimenters and subjects to coordinate on one and the same interpretation of the task.
 Present three standard responses to the experimental results (based on Cohen 1981): (i) subjects
solve the other task (so meaning coordination fails); (ii) they use a different normative standard
(that other standard is often called a heuristic, i.e., a rough-and-ready search rule); (iii) the result is
a mere experimental effect, so nothing can be inferred about behavior outside the experimental
context (doubting the experiment’s external validity).
 Present the currently prevailing interpretation (other tasks solved; other standard used)
 Claim that the explanation “a heuristic interferes” or “System 1 engages” merely re-describes the
data in handy terms, and so is not very “informative.”
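The two normative standards at issue in these tasks can be made explicit. For the Linda Problem it is the conjunction rule, P(A ∧ B) ≤ P(A); for the prevalence error it is Bayes’ rule. A minimal Python sketch follows; all probabilities are invented for illustration, since the talk specifies no numbers:

```python
# Conjunction rule (Linda Problem): for any events A and B,
# P(A and B) <= P(A), so "bank teller and feminist" can never be more
# probable than "bank teller" alone. The joint distribution is made up.
space = {  # joint distribution over (bank_teller, feminist)
    (True, True): 0.05,
    (True, False): 0.02,
    (False, True): 0.60,
    (False, False): 0.33,
}
p_teller = sum(p for (teller, _), p in space.items() if teller)
p_teller_and_feminist = space[(True, True)]
assert p_teller_and_feminist <= p_teller  # the conjunction rule holds

# Base rate (prevalence error): Bayes' rule with invented figures:
# prevalence 1%, hit rate 80%, false-positive rate 9.6%.
prevalence, hit_rate, false_pos = 0.01, 0.80, 0.096
p_positive = prevalence * hit_rate + (1 - prevalence) * false_pos
p_sick_given_positive = prevalence * hit_rate / p_positive
# Neglecting the base rate suggests ~80%; the normative answer is below 8%.
```

Subjects’ typical answers violate exactly these rules: ranking the conjunction above the conjunct, or ignoring the prevalence term in Bayes’ rule.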
2. Explicate and Problematize Standard Assumptions
 Claim that the relevant experimental research is largely data-driven; compared with physical
theories, there is little theory in the making that deserves the name (a standard problem in
contemporary psychology, and in other social sciences). Normally, only part of the data will be
predictable, and so the spread of the data (its variance) cannot readily be explained.
 Present evidence that the “experimental effect” hypothesis (Cohen’s third explanation) can be
(again experimentally) supported by varying the task, e.g., by presenting probability tasks in a
frequency format with allegedly “redundant” information.
 Claim that such information in fact makes clear the task to be solved, so is instrumental in
coordinating meaning between experimenter and subjects. Results often (though not always)
improve significantly in the sense that most subjects then give the normatively correct answer. So
errors can be “a function of task-design.”
 List conditions that must obtain for a reasoning experiment to diagnose a reasoning error (rather
than a meaning coordination problem, or the unsuccessful acquisition of a task-solution strategy).
 Claim that it is trivial to observe that subjects who have no access to (“do not know”) a normative
standard, or how to deploy it, will not display behavior that is in line with that standard.
 Point to contexts that differ from the experimental context (a.k.a. natural contexts) in which
deploying the “deviant” normative standard S* may deliver acceptable results, provided certain
conditions on the environment hold.
 Thus introduce the ecological rationality research program and present two examples of heuristics
that are well established to yield optimal results, provided certain conditions on the
environment/context hold.
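Two points from this section lend themselves to a sketch: the frequency-format restatement of a probability task, and a heuristic of the kind the ecological rationality program studies. Since the talk does not name its two heuristics, take-the-best is used here as a stand-in; all figures and cue names are invented:

```python
# 1. Frequency format: the same base-rate task stated as natural
#    frequencies ("out of 1,000 people ...") rather than probabilities.
#    Invented figures: prevalence 1%, hit rate 80%, false alarms 9.6%.
n = 1000
sick = round(n * 0.01)                   # 10 people are sick
sick_pos = round(sick * 0.80)            # 8 of them test positive
healthy_pos = round((n - sick) * 0.096)  # 95 healthy people test positive
freq_answer = sick_pos / (sick_pos + healthy_pos)  # 8 / 103
# The explicit counts lay bare the task's structure, which is the sense
# in which "redundant" frequency information coordinates meaning.

# 2. Take-the-best (a stand-in ecological-rationality heuristic):
#    compare two objects on cues ordered by validity and decide on the
#    first cue that discriminates. Cue names are hypothetical.
def take_the_best(a, b, cues):
    for cue in cues:                     # cues ordered best-first
        if a[cue] != b[cue]:             # first discriminating cue wins
            return 'a' if a[cue] > b[cue] else 'b'
    return 'guess'                       # no cue discriminates

city_x = {"capital": 1, "has_team": 1, "on_river": 0}
city_y = {"capital": 0, "has_team": 1, "on_river": 1}
choice = take_the_best(city_x, city_y, ["capital", "has_team", "on_river"])
# choice == 'a': decided by "capital", the first discriminating cue
```

In environments where cue validities are stable and the best cue is highly valid, such one-reason decision rules can match or beat more elaborate strategies, which is the sense of “optimal, provided conditions on the environment.”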
3. Minor Conceptual Analysis: “SUBJ commits a reasoning error, if …”
 Introduce two kinds of errors (strategy vs. execution errors).
 Claim that these remain under-described by the outcome or result (R) of a reasoning episode.
 Present six sufficient conditions under which a subject commits a reasoning error.
 Explain the variants that arise with respect to reasoning outcomes (i.e., subjects’ answers to T).
4. Discussion: What can argumentation theorists take from this?
 Explanatory value: Assume that two models are available, e.g., one licensed by probability theory,
the other by some heuristic (e.g., the representativeness heuristic). The error is then explained away,
i.e., is no longer an error. But still not explained is why some subjects deployed S to T in C while
others deployed S* to T (or T*). This is generally an “open task” in the research tradition that
argumentation theorists have now started to engage with.
 Use value: Argumentation theorists currently, by and large, uncritically “suck up” a popularized (and
somewhat “dumbed down”) version of the heuristics and biases program, namely the results
arising from the work of Tversky and Kahneman. A critical engagement with the ecological
rationality side of this program would lead to a “more balanced diet,” and is desirable.
 Many argumentation theorists have already accepted that fallacies are context dependent, i.e., an
argument instantiating a certain form or structure need not be a fallacious move provided
conditions are fulfilled that derive from the context or from the participants’ doxastic states.
 The ecological rationality program investigates exactly these conditions, and so these conditions
could be used in argument evaluation to tell a fallacy apart from an OK-move. Open task!
 Similarly, it makes sense to teach (in critical thinking courses) what these conditions are. So there is
a chance, and a need, to improve students’ selection of solution strategies. It is not enough to
teach strategies; one must also teach ways of identifying when a strategy is (better) applied.
 The hope, then, is that students who become better able to identify a ‘task-plus-environment’ as
one in which a particular strategy is promising, while others are not or are less so, will cease to
apply standard S* to T in C: they will no longer mistake C for C* (or T for T*), and will thus deploy
S, again provided S has been successfully acquired in the first place.
Lund, 27 NOV 2013