A Course Designed
Running head: A COURSE DESIGNED TO IMPROVE PSYCHOLOGICAL CRITICAL THINKING
A Course Designed to Improve Psychological Critical Thinking
Suzanna L. Penningroth, Laran H. Despain, and Matt J. Gray
University of Wyoming
Abstract
We developed a 1-credit freshman-level course designed to enhance psychological critical
thinking. We based the new curriculum on Stanovich’s (2004) text, with an emphasis on active
learning and critically evaluating claims by applying scientific concepts. To assess the
effectiveness of this course, we used a pretest-posttest design with a quasi-experimental control
group. At posttest, students in the psychological science course showed greater improvement on
psychological critical thinking than students in a comparison group. Therefore, we recommend
the techniques used in this instructional intervention to help college students improve their
critical thinking skills.
A Course Designed to Improve Psychological Critical Thinking
Educators frequently endorse critical thinking as a highly valued outcome in college
classes (Keeley, Browne, & Kreutzer, 1982; Nummedal & Halpern, 1995; Williams, Oliver,
Allin, Winn, & Booher, 2003). More specifically, Lawson (1999) noted the importance of
psychological critical thinking for psychology majors. He defined psychological critical thinking
as the ability to evaluate claims (primarily for psychological issues) by applying scientific
concepts. Examples of these concepts include falsifiability and the difference between
correlation and causation. Lawson and colleagues developed the College of Mount St. Joseph’s
Psychological Critical Thinking Exam (PCTE; Lawson, 1999), which consists of items that each
contain a conclusion and the information on which it is based. The instructions direct students to
explain the problem with each conclusion, if a problem exists. Methodological issues covered by
the PCTE include experimenter bias, singular versus multiple causation, correlation versus
causation, control and comparison groups, and confounding variables. Lawson found that
psychology majors in their senior year scored higher on the exam than chemistry or biology
majors in their senior year, who scored higher than introductory psychology students.
There is little published research on effective techniques for improving psychological
critical thinking specifically. However, Williams et al. (2003) did record significant
improvement in psychological critical thinking (assessed with the PCTE) from pretest to posttest
within a course on human development. Although the sample as a whole improved on this
measure, the effect depended on performance level for the course. Students who scored in the
A/B+ range on combined exam scores showed improved psychological critical thinking at
posttest, but students who scored in the D/F range did not. Training in critical thinking involved
practice in applying the key concepts recommended by Lawson (1999). Students applied these
concepts when answering practice exam questions throughout the course.
Many psychology courses include components that should improve psychological critical
thinking. For example, most General Psychology (GP) textbooks include material on applying
the scientific method and research methods within psychology (e.g., Weiten, 2004). However,
our main goal was to test the effectiveness of a new 1-credit course specifically designed to
improve psychological critical thinking. We wanted to test whether the new Psychological
Science (PS) course led to improvements in psychological critical thinking over and above
improvements seen in a course where the primary goal was not teaching critical thinking.
The PS course incorporated an active learning approach (e.g., Burbach, Matkin, & Fritz,
2004; Yoder & Hochevar, 2005) and centered on Stanovich’s (2004) book, How to Think
Straight About Psychology. This text covers all of the key concepts of psychological critical
thinking identified by Lawson (1999), for example, experimenter bias, singular versus multiple
causation, correlation versus causation, and control groups. Students in a GP course served as a
comparison group at both pretest and posttest. We predicted greater gains in psychological
critical thinking for the PS course than the GP course.
Method
Participants
The PS course included 47 students who were enrolled in one of three sections of the new
1-credit course. These students either had already taken a GP course (n = 15) or were
concurrently enrolled in a GP course (n = 32, with 6 enrolled in the GP section that served as a
comparison group and 26 enrolled in two other sections). The PS course was mainly composed
of freshmen (92%) and sophomores (6%), but did include one senior. The comparison group
was composed of 119 students who were enrolled in a specific GP course and not concurrently
taking the PS course. The PS course and the GP course did not differ with respect to gender or
class standing, but the two groups did differ with respect to academic major. Specifically,
psychology majors made up 36% of the PS course but only 6% of the GP course, χ2(1, N = 166)
= 25.0, p < .001.
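The reported chi-square can be reproduced from the cell counts implied by the percentages above. The following sketch recomputes the statistic; note that the counts (17 of 47 PS students and 7 of 119 GP students as psychology majors) are reconstructed from the reported percentages rather than taken from raw data.

```python
# 2 x 2 chi-square test of independence for academic major by course.
# Cell counts are reconstructed from the reported percentages (36% of 47
# and 6% of 119), so they are approximations, not the raw data.
observed = [[17, 30],   # PS course: psychology majors, other majors
            [7, 112]]   # GP course: psychology majors, other majors

row_totals = [sum(row) for row in observed]
col_totals = [sum(col) for col in zip(*observed)]
n = sum(row_totals)

chi_square = 0.0
for i in range(2):
    for j in range(2):
        expected = row_totals[i] * col_totals[j] / n
        chi_square += (observed[i][j] - expected) ** 2 / expected

print(round(chi_square, 1))  # ≈ 25.0, matching the reported statistic
```

The recomputed value (without a continuity correction) agrees with the reported χ2 = 25.0.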
Procedure and Instructional Strategy
To assess critical thinking about psychological science, we administered the College of
Mount St. Joseph’s PCTE (Lawson, 1999) in both courses at the beginning and end of the
semester. The PCTE consists of 14 one-paragraph items.
The PS course. In general, instructional techniques for the course included active
learning through small group activities and class discussions, frequent quizzes, and a final
writing assignment that involved comparing the results from several studies to evaluate a general
claim. To facilitate active discussion and participation during class, each of the three sections of
the PS course was limited to 20 or fewer students. The first author taught two sections and the
third author taught one section. We (the PS course instructors) planned the curriculum together
and frequently used the same materials across the sections we taught, including quizzes, the final
paper assignment, and many classroom activities.
The course met weekly for 50 min throughout the semester, with each week devoted to an
important behavioral-science concept or methodological consideration. More specifically, we
devoted each class session to a major principle (e.g., experimental control) covered by one of the
chapters in Stanovich’s (2004) text. Each week, students read the relevant chapter and then
heard an informational overview of the principle from the instructor. To increase compliance
with weekly readings and enhance mastery of the material, we administered six pop quizzes.
These quizzes contained multiple-choice, true/false, and fill-in-the-blank items.
In class, we often divided the class into small groups to work on activities. As an
example, groups received descriptions of actual behavioral-science findings that were
compromised by a methodological flaw related to the principle being considered. For instance,
when studying the importance of experimental control, groups first read descriptions of
correlational studies and then identified prominent confounds. They also suggested
corresponding control conditions for experimental designs.
In a final assignment, each student read and critically evaluated an empirical “literature”
bearing on a controversial claim about human behavior. For example, one of the controversial
claims was “using a cell phone when driving causes more accidents.” For each claim, we
constructed four one-paragraph summaries of hypothetical studies, two studies supporting the
claim and two studies refuting the claim. We developed four different topics (claims) and
assigned topics to individual students. For every topic, the four relevant studies had different
methodological strengths and weaknesses. In the context of a brief paper (3-5 pages), students
evaluated the merits and shortcomings of each study, considered the relative importance of the
various strengths and weaknesses, and proffered an opinion about the veracity of the claim given
the relative quality of supporting and nonsupporting studies.
It is important to note that the instructional materials and activities for the PS course did
not include items taken from the PCTE or items similar to those in the PCTE. We wanted to test
whether our methods led to improvements in critical thinking that generalized beyond the
specific materials used, so we were careful to avoid using examples from the PCTE in our
classes.
The GP course. The GP course that served as a comparison group was a large (178
students officially enrolled) lecture-format class that met 3 hr per week. Course grades were
determined by five multiple-choice tests that covered material from both lecture and the
textbook. This GP course included some focused instruction in critical thinking. Specifically,
the assigned textbook (Weiten, 2004) included a two-page section on critical thinking
applications for each of the 16 chapters. Thus, in addition to the standard chapter on research
methods included in most GP textbooks, this text provided additional material on critical
thinking concepts, for example, examining contradictory evidence and considering alternative
explanations. As part of the text material, this information was covered by course exams.
Therefore, this specific GP course provided a strong control group for testing the effectiveness of
the PS course.
Results
For all analyses, we used two-tailed tests with an alpha level of .05.
Scoring of the PCTE and Inter-rater Reliability
To develop the scoring key, we began with the key used by Lawson (1999). The first and
third authors then each developed a greatly expanded answer key for half the items, and the rater
(the second author, a graduate student in psychology) scored all the tests. We also changed the
scoring from one point per item to two points per item.
differentiate more finely the correctness of answers. Therefore, the possible range of scores was
0–28. The rater scored all tests while blind to group and pretest/posttest condition.
The third author also independently scored 30 (9%) of the exams (sampled from all
conditions and while blind to conditions). Inter-rater reliability was calculated as the correlation
between overall scores assigned by each rater. Reliability was high for both pretest r(15) = .93, p
< .001, and posttest r(15) = .94, p < .001. Therefore, we used the original rater’s scores for all
analyses.
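The inter-rater reliability reported above is a Pearson correlation between the two raters' overall scores. A minimal sketch of that computation follows; the score pairs shown are hypothetical, used only to illustrate the procedure.

```python
import math

# Pearson correlation between two raters' overall PCTE scores, as used in
# the inter-rater reliability check. These score pairs are hypothetical.
rater1 = [10, 14, 7, 20, 3, 12]
rater2 = [11, 13, 8, 19, 4, 12]

def pearson_r(x, y):
    """Pearson product-moment correlation between two score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

r = pearson_r(rater1, rater2)  # close agreement yields r near 1
```

With near-identical ratings like these, r approaches 1, mirroring the high reliabilities (.93 and .94) reported above.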
Psychological Critical Thinking Scores
We used a mixed 2 (time) x 2 (group) ANOVA to assess differences in psychological
critical thinking, with time (pretest versus posttest) assessed within participants and group (PS
course versus GP course) assessed between participants. Scores ranged widely for both the
pretest (0-19) and posttest (0-23). There was a main effect of time, F(1, 164) = 41.10, p < .001,
η2 = .20, with higher scores at posttest (M = 8.89, SD = 4.90) than at pretest (M = 6.64, SD =
4.05). There was also a main effect of group, F(1, 164) = 28.39, p < .001, η2 = .15, with higher
scores overall for the PS course (M = 10.14, SD = 3.60) than the GP course (M = 6.83, SD =
3.60).
The key prediction, however, was for an interaction between time and group. This
prediction was confirmed, F(1, 164) = 75.90, p < .001, η2 = .32. Planned comparisons revealed
that the two groups did not differ in psychological critical thinking scores at pretest, t(164) =
0.71, p > .10, η2 < .01 (PS course M = 7.00, SD = 4.74; GP course M = 6.50, SD = 3.76), but at
posttest, scores were higher for the PS course (M = 13.28, SD = 4.72) than the GP course (M =
7.16, SD = 3.78), t(164) = 8.74, p < .001, η2 = .32. We also examined the interaction effect in
terms of improvement within each of the groups from pretest to posttest. As predicted,
improvement in the PS course was statistically significant, t(46) = 9.22, p < .001, η2 = .65.
Improvement in the GP course was also statistically significant although the effect size was
much smaller, t(118) = 2.16, p = .03, η2 = .04.
We also tested whether the improvement shown in the PS course differed for students
with low versus high grades in the course. Course grades reflected performance on pop quizzes
(50% of grade), contributions to class discussions (25%), and performance on the final research
evaluation paper (25%). We categorized the 47 PS students as either high performers (course
grade of A or B; i.e., earning at least 80% of points; n = 38) or low performers (course grade of
C, D, or F; i.e., earning less than 80% of points; n = 9). A mixed 2 (time) x 2 (course
performance) ANOVA, with time assessed within participants and course performance assessed
between participants, revealed a significant interaction, F(1, 45) = 5.64, p = .02, η2 = .11. High
performers improved their scores significantly from pretest (M = 6.92, SD = 4.53) to posttest (M
= 13.95, SD = 4.67), t(37) = 10.46, p < .001, η2 = .75. Although low performers showed the
same pattern of results, with some improvement from pretest (M = 7.33, SD = 5.83) to posttest
(M = 10.44, SD = 3.97), this difference did not reach statistical significance, t(8) = 1.65, p > .10,
η2 = .26. However, this failure to reach statistical significance appears to be caused by low
power (.31) due to the small sample size rather than the effect size, which registered in the
“medium” range (Cohen, 1992).
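For a t test, eta-squared can be recovered from the test statistic and its degrees of freedom as t2/(t2 + df). The sketch below applies this standard relation to the four pretest-to-posttest comparisons reported above; the recomputed values agree with the reported effect sizes to within rounding.

```python
# Consistency check on the reported effect sizes: for a t test,
# eta-squared = t^2 / (t^2 + df). The (t, df, reported eta^2) triples
# below are taken from the Results section.
reported = [
    (9.22, 46, 0.65),    # PS course, pretest to posttest
    (2.16, 118, 0.04),   # GP course, pretest to posttest
    (10.46, 37, 0.75),   # PS high performers
    (1.65, 8, 0.26),     # PS low performers
]

for t, df, eta_sq in reported:
    estimate = t ** 2 / (t ** 2 + df)
    print(f"t({df}) = {t}: eta^2 = {estimate:.2f} (reported {eta_sq})")
```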
Discussion
Our goal was to improve psychological critical thinking scores with a course that met 50
min per week for one semester. Results showed that we achieved this goal. That is, at the end of
the semester, the average score on the PCTE (Lawson, 1999) was almost 1.5 standard deviations
higher in the PS course than in the GP course. Yet, these groups had shown equivalent pretest
scores on the PCTE. Based on these results, we recommend this 1-credit course as an effective
method for teaching psychological critical thinking.
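The "almost 1.5 standard deviations" figure can be checked against the posttest statistics reported in the Results. The sketch below computes a pooled-SD standardized mean difference (Cohen's d); the pooled-SD formulation is our assumption about how the gap was standardized.

```python
import math

# Standardized posttest difference between courses, using the means, SDs,
# and ns reported in the Results and a pooled standard deviation.
m_ps, sd_ps, n_ps = 13.28, 4.72, 47    # PS course posttest
m_gp, sd_gp, n_gp = 7.16, 3.78, 119    # GP course posttest

pooled_sd = math.sqrt(((n_ps - 1) * sd_ps ** 2 + (n_gp - 1) * sd_gp ** 2)
                      / (n_ps + n_gp - 2))
d = (m_ps - m_gp) / pooled_sd
print(round(d, 2))  # ≈ 1.51, consistent with "almost 1.5 standard deviations"
```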
Williams et al. (2003) also reported improvement on the PCTE after an instructional
intervention. We replicated the effect found in Williams et al., but we did so after strengthening
the research design by adding a quasi-experimental control group. In both Williams et al. and
the present study, the intervention group showed a low performance level on the PCTE at pretest,
but significant improvement at posttest. In the Williams et al. study, this improvement was
limited to high performers in the course. In contrast, we found some evidence for improvement
among low performers, although their gains on the PCTE were not as large as gains made by
high performers and, in fact, failed to reach statistical significance, presumably because of the
small sample size for this group. The two studies differed in many ways (e.g., definitions of low
vs. high performers and the specific instructional intervention), so future research is needed to
tease apart the conditions that lead to improvement for students at different levels.
In the present study, we used a quasi-experimental design. Although our conclusions
would be even better supported by a true experiment, we could not randomly assign students to
classes. Also, by using both a pretest and posttest, we assessed whether the two groups were
comparable before the instructional intervention. At pretest, the only difference between the two
groups was the percentage of psychology majors (36% of the PS group and 6% of the GP group).
We do not think this difference accounts for the effects observed for two reasons. First, pretest
scores on psychological critical thinking were the same for the PS course and the GP course.
Second, when we re-analyzed the data after excluding all psychology majors, the results did not
change.
Improved critical thinking may not have been caused by the PS course specifically, but
by the experience of taking any 1-credit course in science. In other words, the gains made by
the PS students could be due to having a greater number of hours in science coursework.
However, this possibility seems unlikely because the PS course had a higher percentage of
freshmen (92%) than the GP course (75%), which meant that, at least at pretest, the PS students
had actually accumulated fewer credit hours on average than the GP students. It seems plausible
that students in the PS course also had fewer credit hours in the sciences specifically, at least at
pretest. Nevertheless, this alternative explanation deserves further research. For example, by
adding another science course as a comparison group (e.g., a biology lab), researchers could test
whether the PS course leads to improvements over and above any gains from general science
instruction.
One practical limitation of the present study concerns applying this instructional
intervention within existing curricula. It may not be possible to offer an entire 1-credit course on
psychological science. We offer a few suggestions for instructors who would like to integrate
the key techniques of the PS course into an existing class (e.g., a research methods class).
Because we did not experimentally isolate the critical components that led to improved
psychological critical thinking, these suggestions are based mainly on our informal evaluations.
However, the effectiveness of these techniques has also been demonstrated in studies of
knowledge acquisition and knowledge transfer, for example, in laboratory studies on problem
solving (e.g., Novick, 1988) and classroom studies of general critical thinking (Burbach et al.,
2004).
Our subjective analysis led to three general features of the course that seemed to support
improvements in psychological critical thinking. The first general feature is the requirement that
students have the requisite knowledge base. Research in problem solving has shown that
knowledge in a domain (expertise) is associated with better transfer of learning when the
individual encounters new problems (e.g., Novick, 1988; Novick & Holyoak, 1991). To ensure
that our students had the requisite knowledge base, we assigned all 12 chapters from the
Stanovich (2004) text and administered pop quizzes based on the readings. If less class time is
available, other options include using homework assignments based on assigned readings
(instead of in-class quizzes) or using fewer chapters from the Stanovich (2004) text.
The second general feature is an emphasis on active learning with feedback. In past
research with college student populations, active learning techniques such as discussions and
small-group exercises have resulted in improved scores on class exams (Yoder & Hochevar,
2005) and improved scores on general critical thinking tests (Burbach et al., 2004). In the PS
course, we had the luxury of small class sizes, and we relied heavily on small group activities
and discussions. We perceived these methods to be very effective for teaching critical thinking.
Therefore, we encourage instructors to include some small-group activities throughout the term,
for example, once in every major unit. Even very large classes can incorporate small-group
activities. For example, instructors can randomly call on groups to share their ideas or require all
groups to turn in written summaries of the ideas generated.
The third general feature we recommend is the opportunity for students to apply the
newly learned concepts. Research on problem solving has demonstrated better skill transfer in
college students when they receive problem-oriented training than when they receive memory-oriented training (Needham & Begg, 1991). Similarly, practice in critically analyzing evidence
and conclusions should lead to more generalizable critical thinking skills. In the PS course, we
incorporated problem-oriented training by asking students to provide new examples for concepts
and to identify methodological flaws in summaries of studies. The final paper also required that
they apply their knowledge in analyzing hypothetical studies for strengths and weaknesses.
Instructors can adapt these practices for use within an existing course. For example, in a content
course such as cognitive psychology, instructors can adapt the final paper assignment we used,
substituting summaries of real studies on both sides of a controversial issue in the particular field
(we used hypothetical studies). Finally, instructors with only minimal time to devote to this
method can use the College of Mount St. Joseph’s PCTE (Lawson, 1999) as a learning tool, for
example, administering the exam as a pretest and then providing feedback on answers.
References
Burbach, M. E., Matkin, G. S., & Fritz, S. M. (2004). Teaching critical thinking in an
introductory leadership course utilizing active learning strategies: A confirmatory study.
College Student Journal, 38, 482–493.
Cohen, J. (1992). A power primer. Psychological Bulletin, 112, 155–159.
Keeley, S. M., Browne, M. N., & Kreutzer, J. S. (1982). A comparison of freshmen and seniors
on general and specific essay tests of critical thinking. Research in Higher Education,
17, 139–154.
Lawson, T. J. (1999). Assessing psychological critical thinking as a learning outcome for
psychology majors. Teaching of Psychology, 26, 207–209.
Needham, D. R., & Begg, I. M. (1991). Problem-oriented training promotes spontaneous
analogical transfer: Memory-oriented training promotes memory for training. Memory &
Cognition, 19, 543–557.
Novick, L. R. (1988). Analogical transfer, problem similarity, and expertise. Journal of
Experimental Psychology: Learning, Memory, and Cognition, 14, 510–520.
Novick, L. R., & Holyoak, K. J. (1991). Mathematical problem solving by analogy. Journal of
Experimental Psychology: Learning, Memory, and Cognition, 17, 398–415.
Nummedal, S. G., & Halpern, D. F. (1995). Introduction: Making the case for “Psychologists
Teach Critical Thinking.” Teaching of Psychology, 22, 4–5.
Stanovich, K. E. (2004). How to think straight about psychology (7th ed.). Boston: Allyn and
Bacon.
Weiten, W. (2004). Psychology: Themes and variations (6th ed.). Belmont, CA: Wadsworth.
Williams, R. L., Oliver, R., Allin, J. L., Winn, B., & Booher, C. S. (2003). Psychological
critical thinking as a course predictor and outcome variable. Teaching of Psychology, 30,
220–223.
Yoder, J. D., & Hochevar, C. M. (2005). Encouraging active learning can improve students’
performance on examinations. Teaching of Psychology, 32, 91–95.
Notes
1. We thank Tim Lawson for sending us the College of Mount St. Joseph’s PCTE and his
scoring key for the exam; Eric Dearing for allowing us to use his General Psychology class as a
comparison group; and Walter D. Scott, Randy Smith, and three anonymous reviewers for
feedback on an earlier draft.
2. We presented portions of this study at the annual meeting of the American Psychological
Association, New Orleans, LA, August, 2006.
3. Send correspondence to Suzanna L. Penningroth, Department of Psychology, Department
3415, University of Wyoming, 1000 E. University Avenue, Laramie, WY 82071; e-mail:
spenning@uwyo.edu.