FACULTY FORUM

Grade Point Average and Changes in (Great) Grade Expectations

Craig A. Wendorf
Wayne State University

This study examined students’ grade expectations over the course of a semester. Students provided their expectations at 3 times: within the first week of the semester, midway through the semester, and within the week just prior to the final exam. Results demonstrate that expected grades decreased over the semester and that the rate of change interacted with students’ cumulative grade point average. Neither the course nor the section of the course (instructor) had a significant influence on grade expectations. Relative to their grade point average, the majority of students maintained rosy grade expectations despite regular feedback on coursework.

Considerable research on student evaluations of teaching and instruction has involved, in some manner or another, a focus on grades. Most notably, some researchers (e.g., Greenwald & Gillmore, 1997) have contended that grading leniency has a direct and positive influence on evaluations of instructors, such that higher grades lead directly to higher evaluations. The intention, however, is not to focus directly on that issue here—Marsh and Roche (2000) addressed the controversy over grading biases and leniency—but rather on the measure of students’ expected grades, which is arguably the most commonly used index in studies involving students’ grades. The goal of this study was to examine changes in expected grades over the course of a semester, particularly as a function of students’ prior performance in other college courses (i.e., GPA [grade point average]).

Typically, researchers and educators use expected grades as a measure of students’ actual performance against instructors’ standards. The implicit assumption in the use of this measure is that students’ perceptions are reasonable summative statements about their performance. When asked at the beginning of the semester, the single best prediction a student could make about his or her future performance would be his or her mean performance in the past. Therefore, one might expect that, all other things held constant, students’ initial expected grades should be roughly equivalent to their GPAs. Similarly, one might expect that students’ expected grades at the end of the semester should be systematically (but not necessarily perfectly) related to their GPAs.

However, an expected grade, as a measure of a student’s actual grade, is fallible. An expected grade is a subjective measure of a student’s performance in the class from the student’s point of view and, as such, it is prone to biases. Landrum (1999), for example, showed that students may have expectations of grade inflation. That is, despite full knowledge that his or her performance in the class is likely to be merely average, a student may expect to receive an A or B. Thus, the grades students expect to receive may not be systematically accurate. These issues raise the question: To what extent are students’ grade expectations aligned with their GPAs? Furthermore, do these expectations change over the course of the semester as students learn more about their actual performance in the course?
Method

As part of a larger study at a major urban Midwestern university, undergraduates from 24 different sections of eight courses—Introductory Psychology, Psychology of Adjustment, Developmental Psychology, Social Psychology, Introductory Statistics, Perception, Personality Theory, and Industrial/Organizational Psychology—completed surveys. A total of 388 students participated in all three assessments. Sometime within the first week of classes, students indicated the grade they expected to receive in the course. Students estimated their expected course letter grades using a 5-point scale—ranging from 0 (expectation of an F) to 4 (expectation of an A)—designed to match the scaling most often used in the calculation of GPA. Students responded to an identical question midway through the semester (Week 7 or 8 during a 15-week semester) after having taken at least one exam in the course. Finally, students provided their expected grades again in the last week of the semester before final exams. Students’ responses were matched using a unique identifying code that students provided at all three time points.

Within the first assessment, students provided other demographic and personal history information. Most important, students indicated, within set ranges, their GPA. Students fell into five self-reported GPA categories: (a) no GPA or GPA unknown (26%), (b) 3.50 to 4.00 (27%), (c) 3.00 to 3.49 (25%), (d) 2.50 to 2.99 (18%), and (e) 2.00 to 2.49 (4%). Students also noted their gender and ethnicity; however, preliminary analyses showed no significant differences involving these variables.

Results

The relationship between GPA and changes in grade expectations was examined using a mixed within-subjects design (Keppel, 1991), with time representing the within-subjects effect and GPA range representing the between-subjects factor. Additionally, course and section represented between-subjects factors; note, however, that the effect of class section is nested within the effect for course. Although these effects may not be entirely interesting in themselves, they control for any biasing influences of instructors (e.g., effectiveness and grading standards) and courses (e.g., course difficulty and course level; for more on nested structures and nonindependent raters, see Kenny, Kashy, & Bolger, 1998).

Although there were no significant effects involving courses or sections within courses, ps > .05, students with different GPA ranges showed significantly different expected grades, F(4, 303) = 6.31, p < .001. More important, students showed significant change in their grade expectations over the course of the semester, F(2, 606) = 31.99, p < .001, and the extent to which change occurred varied as a function of GPA, F(8, 606) = 2.26, p < .05 (see Figure 1).¹ Analyses of the simple effects and pairwise comparisons (cf. Keppel, 1991) were consistent with the differences apparent in Figure 1. For example, at the first assessment only, significant differences in grade expectations emerged between GPA ranges, F(4, 303) = 5.54, p < .001; this finding was due to the differences between the lowest GPA range and the others. At the second assessment, the simple effect for GPA range was again significant, F(4, 303) = 3.51, p < .05; students who had the highest GPA and those who did not know their GPA showed significantly higher expected grades than the other three groups. Finally, significant differences among the groups existed at the last assessment, F(4, 303) = 9.87, p < .001; students with the highest GPA showed higher grade expectations than all others.

[Figure 1. Changes in students’ mean grade expectations as a function of grade point average.]

¹This interaction persisted even if the students who did not know their grade point average were omitted. Thus, the rates of change in grade expectation still varied as a function of grade point average range.
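To make the analysis concrete, the sketch below shows one way to set up the same kind of time by GPA-range mixed ANOVA in Python. It is only an illustration under stated assumptions: the data are simulated, the pingouin package is assumed to be available, the variable names are ours, and the nested course and section factors reported above are omitted for simplicity.

```python
import numpy as np
import pandas as pd
import pingouin as pg  # assumed available; any mixed-ANOVA routine would do

rng = np.random.default_rng(0)
gpa_levels = ["unknown", "3.50-4.00", "3.00-3.49", "2.50-2.99", "2.00-2.49"]

# Simulated long-format data: one row per student per assessment.
rows = []
for student in range(100):
    group = student % 5
    base = 3.7 - 0.15 * group  # group means differ (unknown/high-GPA groups start higher)
    for time, drop in zip((1, 2, 3), (0.0, 0.15, 0.35)):  # expectations decline over time
        rows.append({"student": student, "gpa_range": gpa_levels[group], "time": time,
                     "expected": float(np.clip(base - drop + rng.normal(0, 0.3), 0, 4))})
df = pd.DataFrame(rows)

# Mixed design: time is the within-subjects factor, GPA range the
# between-subjects factor, and students are identified by their code.
aov = pg.mixed_anova(data=df, dv="expected", within="time",
                     subject="student", between="gpa_range")
print(aov)
```

Simple effects at each assessment could then be examined by subsetting the data frame by time and running one-way ANOVAs, mirroring the pairwise comparisons reported above.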
Discussion

In general, students’ grade expectations were not consistent with their GPA level. All categories of students—except those with the highest GPAs (i.e., 3.50 to 4.00)—initially estimated their future grade at a higher level than their GPAs. Although students’ grade expectations tended to regress toward their GPAs, this same pattern of high estimation was true of expected grades at the end of the semester. Here again students’ grade expectations were considerably above the grades they had received in the past.

Perhaps these findings indicate students erring on the side of optimism; despite previous course performance, students are initially extremely optimistic about the grades they expect to receive. If such early optimism is the result of students’ trust in their ability to overcome past performance, it may indeed be commendable. However, if such early optimism is instead a result of students’ expectation of a favorable “curve” or grade inflation (Landrum, 1999), then perhaps there is cause for concern.

However, students do not appear completely oblivious to important baseline information (e.g., GPA) in determining their expectations. Disregarding the students who did not know their GPA, the relationship between expected grades and GPA was invariant with respect to rank order. Students with higher GPAs had higher grade expectations, whether measured at the beginning of the semester or just prior to the final exam. This finding suggests that, although the overall level of GPA may be a bad indicator of expected grade,² GPA may be an adequate predictor of expected grade variance or differences.

²It should be noted that self-reported and unverified grade point averages, similar to expected grades, are fallible measures and are subject to misrepresentations. I thank an anonymous reviewer for making this point particularly lucid.

With regard to changing their grade expectations over the semester, students who did not know their GPAs represent a particularly interesting group. They were largely, although not entirely, first-semester college freshmen taking Introductory Psychology. It seems theoretically possible that these freshmen could have different grade expectations than upper level students who do not know their GPAs; additional analyses demonstrated that, although they had slightly lower expectations, freshmen did not have significantly different grade expectations from those upperclassmen. Thus, for all students who did not know their GPAs, grade expectations remained particularly high through midsemester, only to regress to the overall (optimistic?) mean in the final week of the semester.

Informal polling of the students’ instructors during the semester following survey administration suggests that the expected grade distributions from the third assessment did not match the actual final grade distributions for the classes, despite the fact that all classes received regular feedback on papers and exams in the form of grades. This discrepancy may be due to two possible causes. First, the lack of match may be due to a biased sample; perhaps those students who were expecting low grades stopped attending the course or simply were not present for the final assessment, thereby producing decreased heterogeneity in the students surveyed.
Second, perhaps students improperly gauge their grades; they may largely fail to, or simply cannot, properly estimate the grades they will receive.

These findings have direct implications for instructors and researchers. Instructors may be concerned with unrealistically high grade expectations because “one way to ensure dissatisfaction is to have expectations that one can not easily meet” (Gaultney & Cann, 2001, p. 86). One might speculate that students who, for whatever reason, do not accurately estimate their expected grades may be more likely to suffer disillusionment following the receipt of the actual grade, more likely to misunderstand the content of the course, and less likely to base their evaluations of the instructor on the quality of instruction. Perhaps an increase in both the frequency and the amount of course-related information and feedback given to students at all points throughout the semester may help to decrease the magnitude of students’ rosy grade expectations.

However, from a researcher’s standpoint, such bias in expected grades does not necessarily discount the utility of expected grades as a measure. If research focuses on students’ perceptions, such as satisfaction with grades or satisfaction with the instructors, then expected grades—an inherently subjective index—may prove to be the most valid and useful correlate.

References

Gaultney, J. F., & Cann, A. (2001). Grade expectations. Teaching of Psychology, 28, 84–87.
Greenwald, A. G., & Gillmore, G. M. (1997). Grading leniency is a removable contaminant of student ratings. American Psychologist, 52, 1209–1217.
Kenny, D. A., Kashy, D. A., & Bolger, N. (1998). Data analysis in social psychology. In D. T. Gilbert, S. T. Fiske, & G. Lindzey (Eds.), The handbook of social psychology (4th ed., Vol. 1, pp. 233–265). Boston: McGraw-Hill.
Keppel, G. (1991). Design and analysis: A researcher’s handbook (3rd ed.). Upper Saddle River, NJ: Prentice Hall.
Landrum, R. E. (1999). Student expectations of grade inflation. Journal of Research and Development in Education, 32, 124–128.
Marsh, H. W., & Roche, L. A. (2000). Effects of grading leniency and low workload on students’ evaluations of teaching: Popular myth, bias, validity, or innocent bystanders? Journal of Educational Psychology, 92, 202–228.

Note

Send correspondence to Craig A. Wendorf, Department of Psychology, D241 Science Center, University of Wisconsin, Stevens Point, WI 54481; e-mail: cwendorf@uwsp.edu.

Students’ Reasons for Writing on Multiple-Choice Examinations

Frank M. LoSchiavo
Mark A. Shatz
Ohio University–Zanesville

Introductory psychology students (N = 142) identified their reasons for writing on a 50-item multiple-choice test and completed a test-taking strategies questionnaire. The majority of students wrote on the exam; we identified 7 distinct reasons. Test performance was significantly related to the total number of writing marks, the total number of reasons identified, and 3 specific writing strategies: highlighting key terms, marking questions that needed further examination, and drawing figures or diagrams. Survey results indicated that the majority of students strongly supported the opportunity to write on tests and that they believed the strategy leads to better performance. We discuss the implications of these results in terms of modern testing formats that preclude the use of writing strategies.

Recognizing the widespread use of objective tests, many universities offer testing skills courses to help students improve their performance on multiple-choice examinations. Through these classes and through various test-taking handbooks (e.g., ACT, 1997; College Entrance Examination Board, 2000), students learn several strategies, some of which require them to write directly on their exams. For example, students learn to highlight important words, eliminate incorrect alternatives, and mark questions that need further review. Given the increasing popularity of computerized testing formats that preclude writing on examinations, it is important to understand why students write on tests to determine if nontraditional evaluation methods are likely to impact performance negatively.

Although researchers have investigated various aspects of test taking, such as answer changing (e.g., Benjamin, Cavell, & Shallenberger, 1984; Shatz & Best, 1987), guessing (Shatz, 1985), and the effectiveness of test-taking skills programs (e.g., Bowering & Wetmore, 1997), only one study has explored the reasons why students write on exams. Kim and Goetz (1993) examined the marks students made on test booklets and determined that students wrote on the test for a variety of reasons. Only one strategy, option elimination, was significantly related to test performance.

The purpose of this study was to further investigate writing strategies on tests. In contrast to Kim and Goetz (1993), students (vs. the researchers) identified their reasons for writing on test booklets. Furthermore, students completed a survey to assess their attitudes toward writing on tests and their general use of the strategy.

Method

Participants

Participants were 142 students (39 men and 103 women; M age = 21.2 years, SD = 6.09) enrolled in five different introductory psychology classes. All students participated voluntarily for extra credit.

Procedure

We collected data from the first of five multiple-choice examinations administered during the term. We administered a similar 50-item multiple-choice test in two sections taught by the first author and in three sections taught by the second author. Students completed the exam during the first hour of a 145-min class. Written instructions directed students to “use a pencil to record answers on the computerized answer sheet” and to “do as you please with the test booklet.”

On completion of the exam, the instructor scored the computerized answer sheets during a 15-min break. The instructor returned the test materials and reviewed the exam on an item-by-item basis. The instructor then gave students the opportunity to participate in the study (two students declined). The students reviewed their test booklets, and for each instance of writing on the test, they explained their reason on a separate form. Students who did not write on the test explained why they chose not to use the strategy. Afterward, all students completed a brief survey. To assess their attitudes regarding writing on tests, students responded to four items using a 5-point Likert scale. To assess how frequently they wrote on other tests, students responded to one item using the following scale: never, seldom, occasionally, frequently, and always.

Results

Overall, 56% of students wrote on the exam, with women (63%) more likely than men (36%) to do so, χ2(1, N = 142) = 8.49, p < .01.
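The reported chi-square can be recovered from the percentages above. In the sketch below, the cell counts (65 of 103 women and 14 of 39 men writing on the exam, 79 writers in all) are reconstructed from those percentages rather than taken from the authors' raw data, scipy is assumed to be available, and the uncorrected statistic is requested to match the value reported.

```python
from scipy.stats import chi2_contingency

# Counts reconstructed from the reported percentages (an assumption).
observed = [[65, 38],   # women: wrote, did not write
            [14, 25]]   # men:   wrote, did not write

chi2, p, dof, expected = chi2_contingency(observed, correction=False)
print(f"chi2({dof}, N = 142) = {chi2:.2f}, p = {p:.3f}")  # reproduces 8.49, p < .01
```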
When listing their reasons for writing, students often used different phrasing to describe the same basic reason. For example, one student may have stated that he or she “crossed out wrong answers,” whereas another student may have stated that he or she “eliminated incorrect options.” To address this issue, we examined a subset of the responses and developed categories for classifying students’ reasons. We identified seven student-generated reasons for writing on the exam. Two coders then worked independently to classify each response. Interrater reliability was high, as coders agreed on 99% of the cases.

On average, writers wrote on 17.95 (SD = 19.46) items, made 26.25 (SD = 33.30) separate writing marks, and identified 2.73 (SD = 1.75) unique reasons. As predicted, the total number of writing marks was significantly related to test performance, r(140) = .15, p < .05, one-tailed. Furthermore, performance was significantly related to the number of unique reasons students reported using, r(140) = .20, p < .05.

Each reason and its relation with test performance appears in Table 1. Although the effects were relatively small, three reasons were significantly related to performance: highlighting key terms, marking questions that needed further examination, and drawing figures or diagrams. Additional reasons for writing were to indicate correct answers, elaborate on the questions, eliminate incorrect options, and write down additional information (“data dump”).

Table 1. Reasons for Writing and Correlations With Test Performance
Highlighted key terms: r = .22**; reported by 20.3% of writers
Marked question: r = .19*; 53.2%
Drew figure/diagram: r = .19*; 19.0%
Answer indication: r = .11; 57.0%
Elaboration: r = .11; 26.6%
Option elimination: r = .10; 68.4%
Data dump: r = .07; 21.5%
Note. Correlations (N = 140) represent the relationship between test performance and the number of times each strategy was used; percentages are based on the n = 79 students who wrote on the exam. *p < .05. **p < .01.

Of those who chose not to write on the test, most stated that writing was not necessary (76%). Although students could do as they pleased with the test booklet, more than half of the nonwriters stated that they refrained from writing because instructors asked them not to write on exams in the past (52%). Students also stated that they chose not to write because they have become accustomed to it (14%), they never thought of doing it (11%), and they wanted to keep the exam neat (8%).

Survey results indicated that students view writing as a valuable strategy and that they attach great importance to it. The majority of students reported that they write on multiple-choice exams at least occasionally in their other classes (57%), and they either somewhat agreed or strongly agreed that “writing on a test booklet is a valuable test-taking strategy” (72%). Furthermore, they either somewhat agreed or strongly agreed that “students should be allowed to write on a test booklet” (84%), “test performance improves when students are allowed to write on a test booklet” (62%), and “test performance suffers when students are not allowed to write on a test booklet” (47%).

Discussion

Results indicated that writing on test booklets was pervasive, perceived by students as important, and positively associated with test performance. Writing strategies appear to fall in one of two categories.
The first category includes strategies that address the mechanics of completing a test, such as indicating correct answers and marking questions that need further examination. The second category includes strategies that help test takers mentally organize complex information, such as eliminating incorrect options, highlighting key terms, drawing figures or diagrams, and writing additional information that is relevant in answering test questions. Correlational analyses indicated that both categories were positively related (albeit weakly) to test performance. Our findings suggest that the desire to write on a test and the impact writing has on performance may be mediated by test content and test-taker characteristics. For example, 95% of students in an upper level educational psychology class chose to write on their test booklets (Kim & Goetz, 1993), whereas in our sample of introductory psychology students, only 56% chose to write. Although the two studies differed in terms of which specific writing strategies were related to test performance, both studies demonstrated that most students chose to write on test booklets, there was a common set of reasons for writing, and that writing was positively related to test performance. However, given the correlational nature of both studies, we cannot draw firm conclusions about causality. An experimental manipulation would be necessary to determine if writing on a test booklet directly influences test performance. 139 Because writing on a test booklet appears beneficial and most students attach great importance to it, instructors should carefully consider any testing formats that preclude its use. The most obvious implications involve computerized tests. Although many features make computerized tests attractive (e.g., automatic scoring), test takers may be unable to clerically and cognitively organize the test as they have become accustomed to doing with more traditional testing formats. As Internet-based courses become more popular and computerized testing becomes the norm, experimental research will be necessary to determine if testing formats that preclude writing may negatively influence performance and misrepresent the abilities of test takers who like to write on tests. References ACT. (1997). Getting into the ACT: Official guide to the ACT assessment (2nd ed.). San Diego, CA: Harcourt Brace. Benjamin, L. T., Jr., Cavell, T. A., & Shallenberger, W. R., III. (1984). Staying with initial answers on objective tests: Is it a myth? Teaching of Psychology, 11, 133–141. Bowering, E. R., & Wetmore, A. A. (1997). Success on multiple choice examinations: A model and workshop intervention. Canadian Journal of Counseling, 31, 294–304. College Entrance Examination Board. (2000). 10 real SATs (2nd ed.). New York: Author. Kim (Yoon), Y. H., & Goetz, E. T. (1993). Strategic processing of test questions: The test marking responses of college students. Learning and Individual Differences, 5, 211–218. Shatz, M. A. (1985). Students’ guessing strategies: Do they work? Psychological Reports, 57, 1167–1168. Shatz, M. A., & Best, J. B. (1987). Students’ reasons for changing answers on objective tests. Teaching of Psychology, 14, 241–242. Notes 1. We reported preliminary findings at the seventh annual American Psychological Society Institute on the Teaching of Psychology in Miami, FL, June 2000. 2. Send correspondence to Frank M. 
LoSchiavo, Department of Psychology, Ohio University–Zanesville, 1425 Newark Road, Zanesville, OH 43701; e-mail: loschiav@oak.cats.ohiou.edu.

The Teaching of Psychology Course: Prevalence and Content

William Buskist
Rachel S. Tears
Auburn University

Stephen F. Davis
Karen M. Rodrigue
Emporia State University

We present the results of a national survey on the prevalence and content of courses on the teaching of psychology for graduate teaching assistants (GTAs). Ninety-eight (67%) of the psychology departments we surveyed have a formal course on the teaching of psychology. These courses tend to be 1 academic term in length, involve observation of GTAs’ teaching and feedback, and vary moderately in content. Our results prompt several suggestions for designing and implementing new courses on the teaching of psychology or revising extant ones.

Large colleges and universities frequently employ graduate teaching assistants (GTAs) to assist faculty or to serve as teachers of record in introductory level courses. Many of these institutions provide training to their GTAs in an attempt to prepare them for the responsibilities inherent in teaching (e.g., Eckstein, Boice, & Chua-Yap, 1991; Lumsden, Grosslight, Loveland, & Williams, 1988; Mueller, Perlman, McCann, & McFadden, 1997). Such efforts are important because they introduce GTAs to the basic principles of effective teaching, which, in turn, may enhance the learning experiences of the undergraduates they teach.

In an extensive study of doctoral-granting schools that include GTA training programs, Meyers and Prieto (2000a) found that most GTAs at these schools received teacher training through the department, another unit of the university, or a combination of these. In some cases, though, GTA participation in the training program is voluntary, and in others GTAs may not participate in all training activities. They also found that the extent of training varied considerably across schools, ranging from none to extensive training.

Some GTA training programs provide formal instruction on teaching (Benassi & Fernald, 1993; Grasha, 1978) and combine GTAs enrolling in a course or seminar on teaching with actual classroom teaching (Rickard, Prentice-Dunn, Rogers, Scogin, & Lyman, 1991). Unfortunately, little is known about the extent and nature of such courses. We conducted a national survey to address this important issue. Our study differed from Meyers and Prieto’s (2000a) research in that we focused exclusively on the teaching of psychology course.

Method

Participants and Procedure

We mailed a cover letter, informed consent information, and questionnaire to chairs of psychology departments at 365 U.S. colleges and universities. We selected these departments based on the availability of teaching assistantships for graduate students as described in the American Psychological Association’s (APA; 1998) Graduate Study in Psychology. We asked department chairs to have the faculty member responsible for supervising or training GTAs complete and return the survey. We did not send a follow-up request to those departments who failed to respond.

A total of 236 (65%) departments responded to the survey; 146 (62%) of these departments offered teacher training for their GTAs. Ninety-eight (67%) of these 146 departments reported offering a course on the teaching of psychology. These 98 departments represent 42% of the 236 departments that responded to the survey. This percentage is almost identical to the 43% reported by Meyers and Prieto (2000a).
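The nested percentages in the preceding paragraph follow directly from the reported counts; the short sketch below simply makes that arithmetic explicit. The variable names are ours, not the authors'.

```python
# Response-rate arithmetic from the counts reported above.
mailed, returned = 365, 236
offered_training, offered_course = 146, 98

print(round(100 * returned / mailed))                  # 65: departments returning the survey
print(round(100 * offered_training / returned))        # 62: responders offering GTA training
print(round(100 * offered_course / offered_training))  # 67: of those, offering a course
print(round(100 * offered_course / returned))          # 42: of all responding departments
```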
Twenty-nine respondents indicated that they did not employ GTAs.

Our sample of 98 respondents represented Doctoral/Research Universities–Extensive (68%), Doctoral/Research Universities–Intensive (16%), and Master’s Colleges and Universities I (15%). Geographically, this sample represented the following regions: West (29%), Midwest (27%), South (24%), and Northeast (20%). Although all of the graduate programs included in this study claimed that they employed GTAs (APA, 1998), only 87% reported such in response to our survey.

The 109 respondents who returned surveys but indicated that they did not offer the teaching of psychology course represented Doctoral/Research Universities–Extensive (56%), Doctoral/Research Universities–Intensive (21%), and Master’s Colleges and Universities I (23%). Geographically, these 109 respondents represented the following regions: West (21%), Midwest (20%), South (23%), and Northeast (36%).

Survey Instrument

The survey contained items that inquired about basic aspects of the teaching of psychology course. Specifically, we queried respondents about the following: (a) whether respondents’ departments offer a course (i.e., a class or seminar); (b) the number of academic terms that GTAs enroll in the course; (c) the topics addressed in the course; (d) whether faculty leading the course observe GTAs teaching; (e) whether GTAs are videotaped during the observation; (f) whether GTAs receive written or verbal feedback or both from the faculty observer; (g) whether GTAs observe each other teaching; (h) whether the course requires GTAs to read a text, original research reports, or other writings regarding teaching; and (i) whether GTAs write a paper on the subject of teaching or more generally on the topic of higher education. We selected these items because they tapped a variety of elements that might be used for classroom instruction in the teaching of psychology course.

Results and Discussion

Rather than report data in terms of both raw numbers and percentages, we report only percentages because including both would be essentially redundant.

Sixty-seven percent of the respondents reported their course is taught for a single academic term, 10% reported it lasts two terms, 3% last three terms, 14% are offered each term that GTAs are teaching, and 8% are offered optionally. Two respondents left this item blank. Table 1 shows the topics respondents reported covering in the course.

Table 1. Topics Covered in the Teaching of Psychology Class and the Percentage of Respondents Reporting Covering Those Topics
Delivering lectures: 95%
Asking/answering questions: 94%
Classroom management skills: 92%
Encouraging student participation: 91%
Ethical situations in teaching: 91%
Academic honesty: 90%
Grading: 88%
Holding office hours: 87%
Leading class discussions: 87%
Construction of test items: 87%
Organization of class time: 83%
Doing in-class activities: 83%
Preparation of handouts: 83%
The first day of class: 82%
The teaching of critical thinking skills: 78%
College/university academic policies: 75%
Diversity issues: 73%
Social skills in teaching: 72%
Use of audio/visual equipment: 72%
Use of electronic technologies: 67%

Ninety-four percent of the respondents reported that their course involves observation of GTAs’ teaching and feedback. Nineteen percent observe GTAs giving microlectures during the course, 30% observe GTAs while teaching in the undergraduate classroom, and 45% do both. In addition, 48% of the respondents reported that they videotape GTAs during these observational sessions. Fifty-one percent of the respondents reported that the primary means of providing feedback to GTAs about their teaching involved both written and oral feedback, 31% said that they use verbal feedback alone, and 5% reported using only written feedback. Sixty-six percent of the respondents also noted that GTAs received feedback from their peers who observed them teaching.

Sixty-one percent of the respondents said that the course involves reading a text or articles on teaching, although respondents did not identify specific texts or articles they used. However, only 18% reported that students write an essay on the topic of teaching or higher education in their course. We encourage the addition of written assignments as a means of stimulating GTA thinking about how to design, implement, and assess various teaching tools, such as demonstrations or problem-based learning activities. Useful topics for written assignments might include a statement of students’ philosophy of teaching, a journal of insightful teaching experiences, an interview with a local master teacher, or an essay centering on general issues in college and university teaching. Such writing assignments provide opportunities for reflecting on and clarifying GTAs’ thinking about their teaching and provide opportunities for supervisors to include additional feedback, insight, and encouragement to their GTAs. Meyers and Prieto (2000b) recently suggested ways to incorporate active learning into the teaching of psychology course through written assignments (as well as other exercises; e.g., in-class activities, modeling, observation).

Although two thirds of the departments that offer teacher training to GTAs had a course on the teaching of psychology, one third did not. Thus, a genuine need exists both to expand GTA training so that more GTAs receive appropriate instruction and to develop and refine courses on the teaching of psychology.

Most teaching of psychology courses were one academic term in length, although some institutions offered such courses for two or three terms. Courses longer than one academic term provide opportunities to expose GTAs to a wider and deeper range of content than in a single term course. Extended coursework or supervision would also likely produce pedagogical benefits that cannot be gained over shorter training periods (e.g., sustained coaching and practice of the skills required for effective teaching).

Although most respondents reported observation of GTA teaching and feedback, only about half reported using videotaping procedures. We encourage videotaping because it provides an accurate record of all aspects of GTA classroom performance. It is also a useful method to preserve GTAs’ classroom performances for future class discussion, feedback, and critique.

We discovered that respondents addressed certain important topics less frequently than other important issues (e.g., how to teach critical-thinking skills, institutional academic policies, diversity issues, social skills essential to effective teaching). Failure to cover such issues is inadvisable for two reasons.
First, as some authors have argued (e.g., Nummedal & Halpern, 1995), teaching critical-thinking skills to undergraduates is essential to their development as independent and effective problem solvers. Second, GTAs who are unaware of institutional policies, insensitive to diversity issues, or deficient in the social skills necessary to establish rapport risk misinforming or offending their students. We also urge more faculty who offer the teaching of psychology course to include content related to the use of electronic technologies. Such content may range from simple tasks, such as learning how to use an overhead projector, to more complex tasks, such as learning how to prepare and use a PowerPoint® presentation or developing Web-based resources. Learning to use electronic technologies may be useful in developing more effective, integrated, and thorough class presentations. The teaching of psychology course must include content focusing both on the basic techniques of sound teaching and “people skills” to maximize chances that it will fully prepare GTAs for their first classroom teaching experiences (cf. Mueller et al., 1997). Indeed, such content, combined with exposure to the teaching literature, constructive feedback, and writing assignments, seems likely to maximize chances that the new psychology professorate will develop the skills and self-efficacy necessary to meet the challenges that await them (Prieto & Meyers, 1999). References American Psychological Association. (1998). Graduate study in psychology 1998. Washington, DC: Author. Benassi, V. A., & Fernald, P. S. (1993). Preparing tomorrow’s psychologists for careers in academe. Teaching of Psychology, 20, 149–155. Eckstein, R., Boice, R., & Chua-Yap, E. (1991). Teaching assistant development. The Journal of Staff, Program, and Organizational Development, 9, 163–180. Grasha, A. F. (1978). The teaching of teaching: A seminar on college teaching. Teaching of Psychology, 5, 21–23. Lumsden, E. A., Grosslight, J. H., Loveland, E. H., & Williams, J. E. (1988). Preparation of graduate students as classroom teachers and supervisors in applied and research settings. Teaching of Psychology, 15, 5–9. Meyers, S. A., & Prieto, L. R. (2000a). Training in the teaching of psychology: What is done and examining the differences. Teaching of Psychology, 27, 258–261. 142 Meyers, S. A., & Prieto, L. R. (2000b). Using active learning to improve the training of psychology teaching assistants. Teaching of Psychology, 27, 283–284. Mueller, A., Perlman, B., McCann, L. I., & McFadden, S. H. (1997). A faculty perspective on teaching assistant training. Teaching of Psychology, 24, 167–171. Nummedal, S. G., & Halpern, D. F. (1995). Introduction: Making the case for “psychologists teach critical thinking.” Teaching of Psychology, 22, 4–5. Prieto, L. R., & Meyers, S. A. (1999). Effects of training and supervision on the self-efficacy of psychology graduate teaching assistants. Teaching of Psychology, 26, 264–266. Rickard, H. C., Prentice-Dunn, S., Rogers, R. W., Scogin, F. R., & Lyman, R. D. (1991). Teaching of psychology: A required course for all doctoral students. Teaching of Psychology, 18, 235–237. Notes 1. We thank members of the EDGE group for research on the teaching of psychology at Auburn University for their criticisms on an earlier version of this article. We also thank Randolph A. Smith and three anonymous reviewers for their helpful comments on an earlier version this article. 2. 
Send correspondence to William Buskist, Psychology Department, Appalachian State University, Boone, NC 28608; e-mail: buskistw@appstate.edu. Using Case Studies in Introductory Psychology Julie A. Leonard Kirsten L. Mitchell Steven A. Meyers Jacqueline D. Love Roosevelt University The case study is an active learning strategy that allows introductory psychology students to grapple with course content. To promote the successful development and use of case studies in introductory psychology courses, we present guidelines for implementing this technique as well as an illustration. Steps involved in using case studies include (a) determining the goals for the exercise, (b) selecting and narrowing a topic, (c) developing case material, (d) creating an inquiry component, and (e) assessing the effectiveness of the exercise. The case study is one active learning strategy that allows introductory psychology students to grapple with course content. In psychology and other disciplines, its use correlates with positive learning outcomes such as increased student engagement, problem-solving abilities, and decision-making skills (e.g., Fisher & Kuther, 1997; Sudzina, 1999; Vernon & Blake, 1993). Moreover, analyzing case studies promotes critical thinking, broadens students’ perspectives on psychological issues, and fosters independent learning (McDade, 1995). Case studies may be particularly well-suited for introductory psychology courses. Analyzing and discussing case mateTeaching of Psychology rial provides students with a change of pace in a lecture-dominated survey class. In a course that often focuses on breadth of knowledge (Sternberg, 1997), wrestling with case matter can facilitate in-depth understanding of material (McDade, 1995). Although several illustrations of case studies for psychology classes exist in the literature, few resources have consolidated suggestions about how to develop or implement them. As such, we integrate past work by presenting the steps involved in creating and using case studies for the introductory psychology course. We concisely describe the process of using case studies so that instructors can tailor such cases to meet their curricular needs; our focus is to build on previous literature by providing an organizational scheme toward this end. We recommend that psychology instructors determine their goals for the case study exercise before selecting a topic. Case studies can provide students with opportunities to evaluate data, identify important concepts, develop hypotheses, and create or defend arguments (McKeachie, 1999). These objectives contrast with goals best accomplished by lectures, such as acquiring factual information (Davis, 1993; McKeachie, 1999). Thus, the particular student competencies that instructors want to develop will shape the instructions for the activity as well as the case details. After reflecting on their instructional objectives, psychology faculty must select a topic area. Instructors often use case studies to teach ethics (e.g., Costanzo & Handelsman, 1998) and abnormal psychology (e.g., Perkins, 1991). However, faculty can incorporate case studies when teaching many other subjects within introductory psychology, such as neuropsychology (Morris, 1991), child development (McManus, 1986a), or research methods (McBurney, 1995). More important, instructors need to narrow the topic area to appropriately address it in the case study. Student feedback may provide instructors with valuable information about potential topics. 
Which areas typically prompt many student questions? What generates the most discussion or debate in class? Topics for case studies often address controversies (e.g., Ford, Grossman, & Jordan, 1997) and generally do not have one correct answer (McBurney, 1995). To exemplify, our introductory psychology students often have difficulty objectively examining the topic of prejudice within the social psychology unit. Our goals for the case study exercise described subsequently include allowing students to (a) apply the course concepts of stereotypes, prejudice, and attribution to case material; (b) examine the interrelation between attitudes and behaviors; and (c) examine a societal implication of prejudiced attitudes. The focus of our case study is sufficiently broad to permit simultaneous analysis of several related topics (e.g., attribution, attitude formation, attitude change). However, it is appropriately narrow to permit in-depth exploration (i.e., we exclude other social psychological topics; e.g., social influence or cognitive dissonance). Instructors need to construct the case study itself. Successful cases are accounts of realistic situations (Hoover, 1980). They can be actual or fictitious scenarios, vary in length, and focus on one or more problems. Instructors can develop cases themselves; find case material in scholarly publications, newspaper articles, films, or literature (e.g., Chrisler, 1990; Logan, 1988); use those contained in ancillary materials for introductory psychology textbooks (e.g., Bolt, 1996); or encourage stuVol. 29, No. 2, 2002 dents to write their own cases (e.g., McManus, 1986b; Ortman, 1993). Cases can also include actual or simulated data to supplement narrative text (e.g., Witte, 1998). Instructors should use a well-written story that contains important details so that students can engage substantively with the problem. It should be multilayered, with both obvious and subtle parts. The case study should be challenging and allow for student questioning. It should not, however, be overly complex such that students become confused when trying to follow the story line. Excessive intricacy detracts from the exercise and results in students feeling overwhelmed. Instructors need to be cognizant of students’ abilities and construct the case study to match this level. In our brief illustration, we present students with the following case: You are a high school instructor who has been teaching for the past five years at a predominantly White, affluent high school. Because someone with more seniority has bumped you, you have been transferred to another school. It is located in the inner city, and virtually all its students are African Americans; it is on academic probation due to low achievement scores on standardized tests. You fear that some of your current students may be gang members because of the way that they are dressed. Frankly, you are concerned about your safety. Not only are you scared of being challenged by your students, but also of traveling to the neighborhood in which you now work. You have requested a transfer to a school in a “better neighborhood,” but the administration has told you that reassignment is unlikely before the school year is over. Instructors need to add an inquiry section to allow students to apply their knowledge to the case. 
These questions should not only assess students’ ability to recall, comprehend, and apply relevant concepts, but questions should also focus on developing critical thinking skills such as analysis, synthesis, and evaluation. Moreover, questions should follow a logical sequence and should appear in an order of increasing difficulty. An appropriate number of questions proportional to the amount of time allocated for the exercise should follow the case study. Students may complete these questions individually, in small groups, or through class-wide discussion. Faculty may further assign written analysis of the case outside of class. To illustrate the points described previously, we pose the following questions to students in our sample case study. Using Bloom’s (1956) taxonomy of educational objectives, we include a parenthetical note after each question to clarify what skill that question fosters. • What are the stereotypes and prejudices that the teacher in this example holds? (comprehension) • According to attribution theory, how might a person with these ideas and feelings categorize the behaviors of the students in the school? (comprehension and application) • How might this teacher’s attitudes affect his or her behavior in this school setting? (application) • What are some ways in which this teacher’s stereotypes and prejudices could be changed? (application) 143 • Should teachers with subtle prejudices be allowed to work in predominantly minority schools? Why or why not? (synthesis and evaluation) The last step involves assessing the effectiveness of the case study. Instructors can periodically use classroom assessment techniques (Angelo & Cross, 1993) to gauge students’ reactions to these exercises or to determine whether using case studies accomplishes their instructional objectives for the class. We further recommend that instructors provide students with feedback and assign grades at the end of case study exercises. This process (a) communicates to students that case analyses are an important part of class, (b) allows students to progressively refine their inductive and deductive abilities, and (c) increases students’ motivation. Our students provided us with written feedback to open-ended questions about our use of case studies. They indicated that case studies help them comprehend abstract course material by providing concrete illustrations and opportunities to verify their understanding of core concepts. Moreover, our students emphasized that analyzing case studies increases their involvement, interest, and attention in introductory psychology. Finally, our students frequently underscored how case studies are both a welcome contrast and complement to lecture and discussion methods. More specifically, they enjoyed applying and integrating course material using real-life situations. However, our students also believed that lectures help them understand important concepts and better prepare them to examine case material. They similarly reported that class-wide discussions after they analyze cases are useful to verify the accuracy and thoroughness of their conclusions. In sum, the use of case studies not only allows instructors to teach material in a novel manner, but it also can provide students with a meaningful and enjoyable learning experience. McBurney, D. H. (1995). The problem method of teaching research methods. Teaching of Psychology, 22, 36–38. McDade, S. A. (1995). Case study pedagogy to advance critical thinking. Teaching of Psychology, 22, 9–10. McKeachie, W. J. (1999). 
Teaching tips: Strategies, research, and theory for college and university teachers (10th ed.). Boston: Houghton Mifflin. McManus, J. L. (1986a). “Live” case study/journal record in adolescent psychology. Teaching of Psychology, 13, 70–74. McManus, J. L. (1986b). Student composed case study in adolescent psychology. Teaching of Psychology, 13, 92–93. Morris, E. J. (1991). Classroom demonstration of behavioral effects of the split-brain operation. Teaching of Psychology, 18, 226–228. Ortman, P. E. (1993). A feminist approach to teaching learning theory with educational applications. Teaching of Psychology, 20, 38–40. Perkins, D. V. (1991). A case-study assignment to teach theoretical perspectives in abnormal psychology. Teaching of Psychology, 18, 97–99. Sternberg, R. J. (Ed.). (1997). Teaching introductory psychology: Survival tips from the experts. Washington, DC: American Psychological Association. Sudzina, M. R. (Ed.). (1999). Case study applications for teacher education. Boston: Allyn & Bacon. Vernon, D. T. A., & Blake, R. L. (1993). Does problem-based learning work? A meta-analysis of evaluative research. Academic Medicine, 68, 550–563. Witte, R. H. (1998). Use of an interactive case study to examine school learning problems. Teaching of Psychology, 25, 224–226. Notes 1. An earlier version of this article was presented at the annual meeting of the Midwestern Psychological Association, Chicago, IL, May 2000. 2. Send correspondence to Steven A. Meyers, School of Psychology, Roosevelt University, 430 South Michigan Avenue, Chicago, IL 60605; e-mail: smeyers@roosevelt.edu. References Angelo, T. A., & Cross, K. P. (1993). Classroom assessment techniques: A handbook for college teachers (2nd ed.). San Francisco: Jossey-Bass. Bloom, B. S. (Ed.). (1956). Taxonomy of educational objectives (Vol. 1). New York: McKay. Bolt, M. (1996). Instructor’s resources to accompany Myers Exploring Psychology. New York: Worth. Chrisler, J. C. (1990). Novels as case-study materials for psychology students. Teaching of Psychology, 17, 55–57. Costanzo, M., & Handelsman, M. M. (1998). Teaching aspiring professors to be ethical teachers: Doing justice to the case study method. Teaching of Psychology, 25, 97–102. Davis, B. G. (1993). Tools for teaching. San Francisco: Jossey-Bass. Fisher, C. B., & Kuther, T. L. (1997). Integrating research ethics into the introductory psychology course curriculum. Teaching of Psychology, 24, 172–175. Ford, T. E., Grossman, R. W., & Jordan, E. A. (1997). Teaching about unintentional racism in introductory psychology. Teaching of Psychology, 24, 186–188. Hoover, K. A. (1980). Analyzing reality: The case method. In K. A. Hoover (Ed.), College teaching today: A handbook for post-secondary instruction (pp. 199–223). Boston: Allyn & Bacon. Logan, R. D. (1988). Using a film as a personality case study. Teaching of Psychology, 15, 103–104. 144 In Search of Introductory Psychology’s Classic Core Vocabulary Richard A. Griggs Montserrat C. Mitchell University of Florida Given the finding that current introductory psychology textbooks do not share a substantial common core vocabulary, we examined 2 related questions. First, was there a substantial core a half century ago before introductory texts became so lengthy and encyclopedic? Second, is there a classic core vocabulary (terms in the core vocabularies of both contemporary texts and those from the 1950s)? We did not find a substantial common core for the 1950s texts and conclude that it is highly unlikely one has ever existed. 
However, a classic core vocabulary of over 100 terms does exist, and we discuss the importance of covering these terms in the introductory course.

In the last decade, three studies (Landrum, 1993; Quereshi, 1993; Zechmeister & Zechmeister, 2000) attempted to identify the core vocabulary (the common language) for introductory psychology. Methodology varied somewhat across the three studies in that Landrum did a page-by-page text analysis for “important” terms, Quereshi analyzed text indexes, and Zechmeister and Zechmeister examined text glossaries. Regardless of these methodological differences, the general finding of all three studies was the same: Only a small percentage of the key terms were common to many texts. For example, Zechmeister and Zechmeister found that only 64 of 2,505 terms (< 3%) were common to all 10 texts in their sample, and approximately half (49%) appeared in only 1 text glossary. Landrum’s conclusion summarized this general finding quite well: “From these data, it appears that Introductory Psychology textbooks are much more different from one another than similar” (p. 663).

This finding does seem at odds with Matarazzo’s (1987) claim that the core content of introductory psychology textbooks has remained the same since 1890, but it is totally consistent with Griggs and Marek’s (2001) recent analysis showing that today’s introductory psychology texts are more dissimilar than similar. At the same time, Griggs and Marek’s results are consistent with Matarazzo’s claim because they found introductory textbooks to be highly similar with respect to their chapter topics and organization, which was the dimension Matarazzo used in his analysis. Griggs and Marek, however, discovered that this similarity disappeared as soon as the analysis moved past such global textbook dimensions.

Historical content analyses of introductory psychology textbooks (Griggs & Jackson, 1996; Webb, 1991; Weiten & Wight, 1992), moreover, have shown substantial changes in percentage of text coverage for the various chapter topics and a consistent increase in text length (in terms of text pages) over the past half century. According to Weiten and Wight, these changes have a natural explanation—“psychologists have been expanding their domain of inquiry and increasing their production of research” (p. 481). Thus, introductory textbook authors have been increasingly forced to accommodate both more domains of research and increasing amounts of research into a relatively stable set of standard chapter topics. Given the large and ever-increasing research base within each of the various areas, textbook authors have differed greatly in how they have chosen to cover each of these domains. Such variance has led to introductory textbooks being more dissimilar than similar.

Therefore, it is a reasonable hypothesis that before the expansion of these textbooks in the past half century, a more substantial common core vocabulary may have existed. In this study we tested this hypothesis by comparing the glossaries of a sample of 1950s introductory texts. If a substantial common core did not exist for these texts, one has probably never existed. We also wanted to determine if a classic core vocabulary exists (terms that are in the core vocabularies of both contemporary texts and those from the 1950s). Identifying such a vocabulary is important to psychology teachers because these terms would comprise the “core” of the core vocabulary for introductory students.
These terms will have stood the test of time and remained in introductory textbooks for almost a half century. Because introductory teachers have to make difficult decisions on what to present due to limited course time, we hope to aid them in this task via the identification of this classic core vocabulary. Introductory teachers should definitely include these terms in their courses.

Method

Sample of Textbooks

To choose our sample, we used Weiten and Wight’s (1992) text samples as a guide, but we also considered whether a text had a glossary. We examined glossaries in our analysis because Zechmeister and Zechmeister (2000) analyzed glossaries, and we used their data for our classic term identification procedure. In addition, similar to Zechmeister and Zechmeister, we believe that glossaries are preferable to indexes because a glossary identifies what the textbook author considered essential to the textbook’s message.

We chose three 1950s textbooks used in the Weiten and Wight (1992) study that had text glossaries: Hilgard (1953), Krech and Crutchfield (1958), and Morgan (1956). All were first editions and had been selected by Weiten and Wight following their nominations as leading texts for their time period by a sample of fellows of Division 26 (History of Psychology) of the American Psychological Association and surviving past presidents of Division 2 (Society for the Teaching of Psychology). Although our sample is small, we are confident, as were Weiten and Wight about their small text samples, that it constitutes a reasonably representative sample of the leading texts from that time period.

Identification of Core and Classic Core Vocabularies

We compared the glossaries two at a time, thereby identifying unique terms, terms in any two of the three texts, and terms in all three texts. As is well documented in previous core vocabulary studies, we found variability in labeling the same concept (e.g., difference threshold and differential threshold). To decide whether two terms were the same, we compared the two definitions and worked together to make the judgment. We also checked definitions to ensure that terms that were literally the same described the same concept. The average number of glossary items in the three textbooks was 606.3 (range = 531 to 676).

We used the criterion that a term had to be in two or three of the three glossaries to be included in the core vocabulary list. Once we compiled this list, we compared it to the list of terms included in 7 or more of the 10 textbook glossaries in Zechmeister and Zechmeister’s (2000) textbook sample. We obtained these data from Jeanne Zechmeister (J. S. Zechmeister, personal communication, October, 2000). Any term appearing on both lists qualified as a classic core vocabulary term.
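The two-step selection logic just described amounts to simple set operations. The sketch below illustrates it with tiny stand-in glossaries; the set contents and variable names are hypothetical, and the real glossaries contained 531 to 676 reconciled terms each.

```python
from collections import Counter

# Tiny stand-in glossaries, already reconciled for labeling differences.
hilgard_1953 = {"classical conditioning", "retina", "ego", "gestalt psychology"}
krech_1958 = {"classical conditioning", "retina", "id", "phobia"}
morgan_1956 = {"classical conditioning", "ego", "phobia", "retina"}

# 1950s core: any term appearing in at least two of the three glossaries.
counts = Counter(term for glossary in (hilgard_1953, krech_1958, morgan_1956)
                 for term in glossary)
core_1950s = {term for term, n in counts.items() if n >= 2}

# Contemporary core: terms in 7 or more of the 10 glossaries examined by
# Zechmeister and Zechmeister (2000); a hypothetical stand-in set here.
core_contemporary = {"classical conditioning", "retina", "ego", "neurotransmitter"}

# Classic core: the intersection of the two core vocabularies.
classic_core = core_1950s & core_contemporary
print(sorted(classic_core))  # ['classical conditioning', 'ego', 'retina']
```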
Results and Discussion

We found 1,819 glossary terms in the 3 textbooks before we eliminated duplications. The mean glossary size of 606.3 was less than that for the 10 textbooks in the sample used by Zechmeister and Zechmeister (2000; M = 691.1). However, this difference of about 85 terms is not large given the number of chapters in the texts (M = 24.3 for our sample and M = 17.9 for the Zechmeister & Zechmeister, 2000, sample); it amounts to only 3 to 5 terms per chapter, regardless of which mean is used for the calculation.

After eliminating duplicate terms, 1,387 terms remained. As observed by Zechmeister and Zechmeister (2000), most were unique to one text. There were 1,048 (75.7%) unique terms, 246 (17.6%) terms appearing in two glossaries, and only 93 terms (6.9%) appearing in all three glossaries. Thus, as with contemporary introductory textbooks, no truly substantial common core vocabulary existed in these 1950s textbooks. Given that we chose texts from the 1950s to maximize the likelihood of finding a substantial common core, our failure to find one indicates that such a core has likely never existed.

We did, however, find 109 classic core vocabulary terms. These terms appear in Table 1. To facilitate their use by introductory psychology teachers, we grouped them in accordance with the chapter topics and organization of contemporary introductory psychology textbooks.

Table 1. Classic Core Vocabulary Terms

Introduction: Behaviorism, Functionalism, Introspection, Psychology
Research methods/statistics: Correlation coefficient, Dependent variable, Independent variable, Mean, Median, Mode, Naturalistic observation, Standard deviation
Physiological psychology: Autonomic nervous system, Central nervous system, Cerebral cortex, Endocrine glandular system, Frontal lobe, Homeostasis, Hormones, Hypothalamus, Midbrain, Motor cortex, Motor neuron, Occipital lobe, Parasympathetic nervous system, Pituitary gland, Sensory neuron, Sympathetic nervous system, Temporal lobe, Thalamus
Developmental psychology: Chromosome, Embryo, Fetus, Fraternal twins, Gene, Identical twins, Puberty
Sensation/perception: Absolute threshold, Accommodation, Basilar membrane, Cochlea, Cone, Difference threshold, Fovea, Gestalt psychology, Hue, Optic nerve, Perception, Pitch, Psychophysics, Retina, Rod, Saturation
Consciousness: Hypnosis
Learning: Classical conditioning, Conditioned response, Conditioned stimulus, Discrimination, Extinction, Generalization, Instrumental conditioning, Latent learning, Law of effect, Learning, Partial reinforcement, Punishment, Reflex, Reinforcement, Spontaneous recovery, Unconditioned response
Memory: Retroactive interference
Thought/language: Concept, Image, Insight (a)
Intelligence: Factor analysis, IQ, Mental age, Reliability, Validity
Motivation/emotion: Drive, Emotion, Motivation
Personality: Defense mechanism, Ego, Id, Personality, Projective test, Repression, Superego, Temperament, Trait
Disorders: Anxiety, Anxiety disorder, Hallucination, Phobia, Schizophrenic disorder
Therapies: Client-centered therapy, Free association, Group therapy, Insight (a), Psychoanalysis, Psychotherapy, Resistance, Transference
Social Psychology: Aggression, Attitude, Norm, Prejudice, Social psychology, Stereotype

Note. These terms appeared in at least 67% of the glossaries in this study and at least 70% of the glossaries in Zechmeister and Zechmeister (2000). The terms are grouped in accordance with the chapter topics and organization of contemporary introductory textbooks. (a) This term appears under both thought/language and therapies because two definitions (one relevant to each topic) were always given for this term. Thus, we considered it as two different terms in this table.
These terms comprise about one third of the core vocabularies observed for both the 1950s and 1990s textbooks.

We next considered how these classic terms were distributed across the major topic areas within introductory psychology. As we expected, few were in areas that have grown and gained prominence since the 1950s, especially cognitive psychology. Only 4 (3.7%) of the 109 terms are cognitive in nature. Most of the terms come from older research areas—sensation/perception (16; 14.7%), learning (16; 14.7%), and physiological psychology (18; 16.5%). This latter percentage would be even greater if we had included the 7 developmental psychology terms, which all concerned genetics or physical development, in the physiological category. It is also interesting to note that both the personality and therapies topics were dominated by Freudian or psychoanalytic terms (5 out of 9 and 5 out of 8, respectively).

In conclusion, it appears that a substantial core vocabulary in introductory textbooks does not presently exist and probably has never existed. However, a classic core vocabulary of over 100 terms does exist. As Zechmeister and Zechmeister (2000) pointed out, the American Psychological Association’s Committee on Undergraduate Education (McGovern, Furumoto, Halpern, Kimble, & McKeachie, 1991) recommended that psychology teachers use the principle that “less is more” in their courses. In response, Zechmeister and Zechmeister (2000) asked with respect to the introductory course, “If less is more, then what should that less be?” (p. 7). Although we agree with Zechmeister and Zechmeister that exactly what that “less” should be is unclear, we believe that the classic core vocabulary should definitely be a part of it.

References

Griggs, R. A., & Jackson, S. L. (1996). Forty years of introductory psychology: An analysis of the first 10 editions of Hilgard et al.’s textbook. Teaching of Psychology, 23, 144–150.
Griggs, R. A., & Marek, P. (2001). Similarity of introductory psychology textbooks: Reality or illusion? Teaching of Psychology, 28, 254–256.
Hilgard, E. R. (1953). Introduction to psychology. New York: Harcourt, Brace.
Krech, D., & Crutchfield, R. A. (1958). Elements of psychology. New York: Knopf.
Landrum, R. E. (1993). Identifying core concepts in introductory psychology. Psychological Reports, 72, 659–666.
Matarazzo, J. D. (1987). There is only one psychology, no specialties, but many applications. American Psychologist, 42, 893–903.
McGovern, T. V., Furumoto, L., Halpern, D. F., Kimble, G. A., & McKeachie, W. J. (1991). Liberal education, study in depth, and the arts and sciences major—Psychology. American Psychologist, 46, 598–605.
Morgan, C. T. (1956). Introduction to psychology. New York: McGraw-Hill.
Quereshi, M. Y. (1993). The contents of introductory psychology textbooks: A follow-up. Teaching of Psychology, 20, 218–222.
Webb, W. B. (1991). History from our textbooks: Boring, Langfeld, and Weld’s introductory texts (1935–1948+). Teaching of Psychology, 18, 33–35.
Weiten, W., & Wight, R. D. (1992). Portraits of a discipline: An examination of introductory psychology textbooks in America. In A. E. Puente, J. R. Matthews, & C. L. Brewer (Eds.), Teaching psychology in America: A history (pp. 453–504). Washington, DC: American Psychological Association.
Zechmeister, J. S., & Zechmeister, E. B. (2000). Introductory textbooks and psychology’s core concepts. Teaching of Psychology, 27, 6–11.

Notes
1. We thank Jeanne Zechmeister for providing us with the core concept data from Zechmeister and Zechmeister (2000), and we thank three anonymous reviewers and Randolph Smith for valuable comments on an earlier version of this article.
2. Send correspondence to Richard A. Griggs, Department of Psychology, PO Box 112250, University of Florida, Gainesville, FL 32611; e-mail: rgriggs@ufl.edu.

Classroom Demonstrations of Auditory Perception

LaDawn Haws
Department of Mathematics and Statistics
California State University, Chico

Brian J. Oppy
California State University, Chico

Many faculty who teach a psychology class with a sensation and perception component present a variety of demonstrations of visual perception, but few about audition. Demonstrations using inexpensive materials can illustrate some of the basic concepts related to auditory perception. In this article we describe simple and inexpensive demonstrations of sound localization, wave cancellation, frequency/pitch variation, and the influence of different media on sound propagation.

Faculty who teach courses in sensation and perception and classes in introductory psychology commonly present demonstrations of visual perception phenomena (Goldstein, 1999). However, they seldom present demonstrations of auditory perceptual phenomena. In this article we present several simple, inexpensive activities involving audition that give students concrete experiences to complement the abstract concepts presented in texts.

Localization Demonstration 1

Each pair of students needs a 30-in. length of 3/8-in. outer diameter flexible plastic hose, available from any hardware store. The exact length is not important, but the midpoint should be marked clearly and accurately. The Listener holds the hose with one end at each ear, eyes closed. The Tapper taps the hose gently with a pencil, and the Listener must determine if the tap was closer to the left ear or the right ear. The Tapper should tap at different distances, ranging from 12 in. to 1 in. away from the center mark. The Tapper must be careful to not tap too vigorously, which would give the Listener a tactile directional clue. After several trials, the Listener and Tapper exchange roles.

This demonstration gives students an appreciation of the accuracy with which people can determine the location of a sound source. Most people are able to distinguish the tapping direction even when the tap is only 1 in. away from the center mark of the hose. Listeners locate sound sources largely due to the interaural time difference (Goldstein, 1999), defined as the minute differences in time for the sound waves to reach one ear before the other: The sound source is closer to the ear that received the input first. Interaural intensity differences also provide a significant cue to location: The stimulus is slightly more intense (louder) for the closer ear.

Localization Demonstration 2

This demonstration is done in two parts and requires a “clicker” (the type used by children on Halloween works well) and a 12-in. length of 3-in. diameter plastic pipe or a heavy cardboard tube. For Part 1, the Listener sits facing the class, eyes closed. The Clicker stands behind the Listener and makes a “click” in the same plane as the Listener, parallel to the front wall. The Listener must point to the direction of the click. The Clicker clicks 2 to 3 ft from the Listener, in several positions including left, right, and directly overhead, but not in front or back. The Listener will be very accurate in detecting the direction of the click.
For Part 2, the Listener holds the pipe firmly up to one ear and the Clicker repeats the demonstration. The Listener is likely to perceive an overhead click to be closer to the ear without the pipe because the sound waves must travel farther to reach the “pipe ear,” producing an interaural time difference. This demonstration illustrates two basic points. First, Listeners are quite accurate in locating a sound stimulus in three-dimensional space. Second, sound localization is very “bottom up”; even though Listeners are aware of the tube, they are quite confident of their impression of direction—a misplaced confidence when the tube is in place.

Frequency and Pitch

The Demonstrator pours water into a 4-ft length of 1-in. diameter metal pipe with the bottom end plugged. As the water fills the pipe, the vibrations generated by the moving water produce an audible roar. The pitch of the roar will increase as the pipe fills. Once the pipe is full, the Demonstrator empties the pipe and repeats the demonstration, but this time while the Demonstrator pours the water, an assistant taps the lower part of the pipe continuously. The pitch of the tapped pipe will get lower as the pipe fills. This demonstration works well even in a large auditorium—simply hold a microphone near the top opening of the pipe.

The two versions of this demonstration produce opposite effects because different media are vibrating. In both cases, a shorter span of media produces higher frequency sound waves. In the first demonstration, the vibrating column of air becomes shorter as the pipe fills, and the increasingly shorter column produces an increasingly higher frequency (and higher pitch). In the second demonstration, the tapping causes the water to vibrate. The column of water grows longer as the pipe fills, producing an increasingly lower pitched sound. Of course there are other demonstrations of the relationship between length and pitch (e.g., wind chimes and xylophones), but this one is interesting because two columns change length and pitch continuously in opposite directions.

Sound Cancellation

When two waves of the same frequency and same phase combine, the resulting wave retains the common frequency, but the amplitude of the resultant wave is the sum of the amplitudes of the combining waves, as in Figure 1. People perceive the increased amplitude of sound waves as a louder sound. Conversely, if two waves of the same frequency but half a wavelength out of phase combine, the two waves annihilate each other—the amplitude is zero, as in Figure 2. In theory, two sound waves of the same pitch but half a wavelength out of phase should combine to produce silence. In practice, it is difficult to achieve total cancellation, but this demonstration makes it possible to verify that cancellation does indeed occur.

Figure 1. The result of adding two waves of the same frequency, in phase.
Figure 2. The result of adding two waves of the same frequency, out of phase.

The equipment required for this demonstration is a stethoscope that has been dismantled and refitted with Y connectors and rubber hoses as in Figure 3; a 440-Hz (A) tuning fork is also required. The upper hose on the modified stethoscope is 1.25 ft long between the Y connectors, and the bottom hose is 2.5 ft. The lengths of the hoses are related to the frequency of the tuning fork and should be measured carefully. The Listener dons the modified stethoscope, putting the earpieces into both ears. The Tapper raps the tuning fork and holds it close to (but not touching) the open end of the hose.

Figure 3. Apparatus for sound cancellation demonstration.

When the sound waves reach the first Y connector, roughly half will enter the top hose and half the bottom hose (remember, the bottom hose is one full wavelength, and the top hose is half a wavelength). The sound waves traveling along the bottom hose will reach the second Y connector after completing one full cycle—the waves will be starting the positive arch as they enter the Y. The sound waves traveling along the top hose will reach the second Y connector after completing half a cycle—the waves will be starting the negative arch as they enter the Y. The two kinds of waves entering the Y look just like the waves in Figure 2. The Listener hears a very soft sound because many of the waves cancel as they meet at the Y.

If the Listener pinches one of the tubes, the volume of the sound will increase! Most listeners will be surprised at the counterintuitive increase in volume that results from cutting off one of the hoses. When both hoses are open, sound cancellation occurs. When the Listener pinches one of the hoses, the sound waves travel directly down the open hose with no cancellation—that is why the sound is louder. The phenomenon of cancellation has a practical application: A jackhammer operator wearing a set of earphones that produce waves that cancel the jackhammer waves can greatly reduce the damage done to his or her inner ears. A physics text (Halliday, Resnick, & Walker, 1996) gave a rigorous exposition of the physics behind sound phenomena; a discussion aimed at readers with less technical background is also available (Hewitt, 1998).
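The hose lengths work because the path difference between the two branches must equal half a wavelength at the tuning fork's frequency. The following calculation is not part of the original article; it assumes a nominal speed of sound of roughly 1,130 ft/s in room-temperature air, so the resulting lengths are approximate.

```python
# Rough check of the hose lengths used in the cancellation demonstration.
# Assumes sound travels at about 1,130 ft/s in room-temperature air; the
# exact value (and therefore the ideal lengths) shifts with temperature.
SPEED_OF_SOUND_FT_PER_S = 1130.0
FORK_FREQUENCY_HZ = 440.0  # the A-440 tuning fork

wavelength_ft = SPEED_OF_SOUND_FT_PER_S / FORK_FREQUENCY_HZ  # about 2.57 ft
top_hose_ft = wavelength_ft / 2   # half a wavelength between the Y connectors
bottom_hose_ft = wavelength_ft    # one full wavelength between the Y connectors

print(f"wavelength         = {wavelength_ft:.2f} ft")
print(f"top (short) hose   = {top_hose_ft:.2f} ft")
print(f"bottom (long) hose = {bottom_hose_ft:.2f} ft")
print(f"path difference    = {bottom_hose_ft - top_hose_ft:.2f} ft (half a wavelength)")
```

The 1.25-ft and 2.5-ft hoses described above are close to these computed values, which is why the two wave trains arrive at the second Y connector about half a cycle apart and largely cancel.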
Sound Traveling Through Different Media

The apparatus for this demonstration is a metal clothes hanger suspended from two 20-in. lengths of string (exact length is not important) as in Figure 4. The Listener wraps one string around each index finger, then sticks those fingers into his or her ears, letting the hanger dangle loosely. The Tapper strikes the hanger with a pen. The Listener will hear what sounds like church bells ringing in his or her ears, but a nearby observer hears nothing more than a dull thud.

Figure 4. Apparatus for sound waves traveling through different media demonstration.

Sound waves are affected by the different media through which they travel. In this case, sound waves traveling through air dissipate much faster than waves traveling through bones and string, so the Listener receives virtually all of the tones and overtones that are produced when the hanger is struck by the pen, whereas a nearby observer receives only the dissipated tones. Lowery (1997) described this and other activities.

Conclusions

We have used these simple activities with learners from different disciplines and at different age levels. Although we have done no formal study to evaluate the pedagogical effectiveness of these activities, students’ efforts on homework problems indicate that learning has indeed occurred—students use language correctly and explain concepts in their own words. Perhaps most important, in course evaluations, students comment on how much they enjoyed the “toys” and that they had learned so much because of the hands-on approach.
References

Goldstein, E. B. (1999). Sensation and perception (5th ed.). Pacific Grove, CA: Brooks/Cole.
Halliday, D., Resnick, R., & Walker, J. (1996). Fundamentals of physics: Extended. New York: Wiley.
Hewitt, P. G. (1998). Conceptual physics (8th ed.). Glenview, IL: Prentice Hall.
Lowery, L. (1997). The everyday science sourcebook. Palo Alto, CA: Dale Seymour.

Note

Send correspondence to LaDawn Haws, Department of Mathematics and Statistics, California State University, Chico, CA 95929–0525. E-mail: lhaws@csuchico.edu.

Schema Theory: A New Twist Using Duplo™ Models

Joe D. Nichols
School of Education
Indiana University–Purdue University Fort Wayne

I describe a classroom demonstration for teaching and discussing Piaget’s concepts of schemata, assimilation, accommodation, and equilibration. One student (“teacher”) describes an asymmetrical construction built from children’s Duplo™ blocks such that another student (“learner”) can build an identical model. The teacher and learner cannot see each other, and the learner may not speak. Following the classroom demonstration, students discuss the Piagetian concepts in relation to the teacher’s and learner’s behaviors.

Piaget’s classic theoretical model describing how people gather, organize, and adapt to new information from the environment is a standard in most current educational or developmental psychology courses and texts. Piaget’s (1937/1954) theory suggested that some activities are quite simple for adults and more complicated for children depending on their current developmental level. Piaget (1983) identified four factors that influence cognitive development as the thinking process slowly changes from birth to maturity: biological maturation, activity, social experiences, and equilibration. Piaget’s (1937/1954, 1983) concepts of schemata, assimilation, accommodation, and equilibration can be challenging ideas for undergraduate students. The following classroom demonstration, designed for undergraduates in educational or developmental psychology classes, occurs after students have been briefly introduced to Piaget’s work. The goal of this activity is to introduce students to these concepts and to encourage a dialogue among university instructors and students that explores appropriate instructional techniques.

Description

Before class, I construct an asymmetrical structure using 25 to 30 components made of children’s Duplo™ blocks. These blocks are in various shapes and colors of hard plastic, larger than LEGOs™, to facilitate every class member’s view of the demonstration. I bring the construction in a closed box and bring in a separate bag of 25 to 30 identical components unassembled. Two student volunteers sit back to back at desks in front of the class. One student (designated the teacher) receives the preassembled construction and the other student (designated the learner) receives the matching unassembled pieces. The task is for the teacher to describe the model and give instructions so that the learner can build a matching model. The learner is not allowed to speak. The teacher must use verbal instructions only to guide the construction of the model. Because the students are seated back to back, the learner cannot see the model and the teacher cannot see how well the learner is following the instructions. At its completion, the learner’s structure may resemble the model, but the two can be quite different. After this demonstration, students analyze how Piaget’s theory relates to this situation.
Observations

In the role of the teacher, few students describe the model as a whole before beginning their instructions, thus denying the learner an opportunity to conceptualize the model. The lack of an advance organizer or visual schema for the final model forces the learner into a state of disequilibrium from the onset. The learner must accommodate all incoming directions as new, rather than assimilating the teacher’s input with an existing knowledge structure. Few teachers describe the Duplo components by size, shape, or color because they assume that the learner has previous experience or schemata in place that will allow for the easy assimilation of new information. In addition, many learners assume that the model is a symmetrical structure unless the teacher tells them that the finished product will be an asymmetrical structure with unequal leg or tower sizes.

Most students start with the simplest portion of the structure, such as a tower, because it appears to be the easiest component to construct, and then describe adjoining extensions of the tower, therefore moving from the simplest form to the more complex. Others will describe the model in layers, as in tiers or floors, again demonstrating their desire to move from simple component pieces to a more complete, complex structure. Some teachers describe each of the pieces (e.g., a blue block with six bumps), and others begin by taking inventory of their components and turning the structure 360° to determine the best viewing angle before beginning their instruction.

In the description of the model, the teacher’s terminology is critical. In one instance, a student used words such as “parallel,” “perpendicular,” and “flush against” to describe the relationship of two Duplo pieces. My students observed the learner pause to reflect on the differences in meaning among these concepts. They described this as a moment of disequilibration in which the learner had to accommodate rather than efficiently assimilate information into existing schemata. As a result, the learner fell behind the teacher’s pace of instructions and became disoriented. An example of assimilation leading to a faulty conclusion occurred on one occasion when one Duplo piece had an “eyeball” imprinted onto the side of the block and the teacher described a “crescent piece” (similar to a half-moon Duplo block) in her verbal directions. The learner’s familiarity with Duplo led him to look for a crescent-shaped piece that is often found in Duplo sets rather than a crescent-shaped decal imprinted on the side of a square block.

Educational psychology students typically focus their observations on the learner rather than the teacher, but as future educators, they should also attend to a teacher’s descriptions; instructions; use of advance organizers; and the techniques to aid assimilation, accommodation, and equilibration. This demonstration also relates to Gestalt theory’s central concept that the whole is greater than the sum of its parts. If learners do not have a complete visual or advance representation of the model, then their ability to gain information may be limited by descriptions focused on the component parts. Because the learner is not allowed to ask any questions, this demonstration sets the stage for cognitive disequilibrium to occur. Learners can become frustrated that the teacher’s directions are unclear or that vocabulary is uncommon. They can become bored by the slow pace of directions or agitated if instructions are too fast.
When learners assimilate information incorrectly, the error carries through the rest of the structure, as one piece of information is crucial for each component that follows.

Effectiveness

Students enjoy the demonstration and make several appropriate connections to the teaching environment as a whole; to Piaget’s classic theories; and to concepts such as advance organizers, Gestalt theory, verbal dialogue, and pace of instruction. Students also experience a practice they should use in the future when they become teachers: converting theoretical concepts into real-world applications to promote intellectual engagement and motivation for learning. The real key is that students come to realize that instruction and learning are interactive activities involving both the teacher and learner.

Conclusions

This exercise in the discovery and application of Piaget’s cognitive theory encourages a reflective attitude in students who see and experience the disequilibrium that Piaget believed was necessary for learning to occur. Others (Burch, 1999; Grant, 1995; Hill, 2000) also recognized the importance of reflective practice as a means to understanding not only behavior but also cognitive development in the classroom. This self-reflective thought encourages a deeper self-understanding, which is an impetus to self-confidence, both of which are important traits for novice and seasoned teachers to understand and develop throughout their professional careers (Burch, 1999).

In conclusion, the classroom activity described in this article allows students to experience disequilibrium in a safe atmosphere and to explore Piaget’s concepts of schemata, assimilation, and accommodation in a practical setting. At the same time, it encourages the discussion of several additional important concepts that preservice and even current teachers should experience. As university instructors, we need to continually strive to develop classroom demonstrations and models that effectively assist our students to bridge the gap between theory and practice.

References

Burch, C. (1999). When students (who are preservice teachers) don’t want to engage. Journal of Teacher Education, 50, 165–172.
Grant, G. (1995). Interpreting text as pedagogy and pedagogy as text. Teachers and Training: Theory and Practice, 1, 87–100.
Hill, L. (2000). What does it take to change minds? Intellectual development of preservice teachers. Journal of Teacher Education, 51, 50–62.
Piaget, J. (1954). The construction of reality in the child (M. Cook, Trans.). New York: Basic. (Original work published 1937)
Piaget, J. (1983). Piaget’s theory. In P. Mussen (Ed.), Handbook of child psychology (4th ed., pp. 117–120). New York: Wiley.

Notes

1. An earlier version of this article was presented at the Teaching Educational Psychology Symposium annual meeting of the American Educational Research Association, New Orleans, LA, April 2000.
2. Send correspondence to Joe D. Nichols, School of Education, Indiana University–Purdue University Fort Wayne, Fort Wayne, IN 46805; e-mail: nicholsj@ipfw.edu.

Using Student Scholarship to Develop Student Research and Writing Skills

Mark E. Ware
Amy S. Badura
Creighton University

Stephen F. Davis
Emporia State University

We illustrate the use of psychology student publications for teaching (a) principles in experimental methodology, (b) the use and interpretation of statistical tests, and (c) the quality and format of American Psychological Association (APA) writing style.
We present several exercises for each of these topical areas using published examples of student scholarship as teaching tools. We also identify published instances of errors in design, statistical use, and APA style. We contend that using students’ scholarly publications can promote and reinforce development of research and writing skills that combine active participation and critical thinking.

Several issues confronted participants at the 1991 National Conference for Enhancing the Quality of Undergraduate Education in Psychology (McGovern, 1993). Two of the major themes for improving education emphasized increasing students’ active participation in learning (Mathie et al., 1993) and critical thinking skills (Halpern et al., 1993). Since then, several authors have discussed strategies for improving active learning and critical thinking (Halonen, 1995; Henderson, 1995; Hubbard & Ritchie, 1995; Perry, Huss, McAuliff, & Galas, 1996; Seegmiller, 1995; Wade, 1995). Yet another strategy combining active participation and critical thinking consists of in-class exercises using published student scholarship.

Several educators have reported on the value of professional literature as a teaching device. Suter and Frank (1986) used published literature to illustrate core concepts to students. Hubbard and Ritchie (1995) stimulated critical thinking through the assessment of scholarly work, and Carkenord (1994) motivated students through the reading of professional literature. At a more fundamental level, Pennington (1992) had students read an abstract or parts of a method section to understand concepts such as independent and dependent variables. Although the use of published articles as teaching tools in the classroom is not new, this article’s emphasis on the value of student publications is relatively uncommon. One of the few examples presenting this technique is a research methods text (Smith & Davis, 2001).

A review of student journals indicates a high degree of similarity between student and professional publications in terms of variety of topics, variables, and statistical analyses. However, student research reports tend to be shorter, simpler, and easier to comprehend than traditional professional articles. These characteristics allow educators to illustrate core concepts directly and efficiently. In addition to providing stimuli for engaging class exercises, published student articles also meet undergraduates’ desires for models of excellent student work to guide them in developing new skills. Therefore, student literature meets instructors’ needs to engage students in active learning and critical thinking and, at the same time, meets students’ needs to have instructors provide clear examples of outstanding undergraduate performance.

Student articles can be effective teaching aids for a variety of research and writing skills. Published student articles are readily available as adjuncts to traditional reading assignments. Educators have established several journals devoted to the publication of undergraduate student research, such as the Journal of Psychological Inquiry (JPI), Psi Chi Journal of Undergraduate Research (PCJUR), and The Journal of Psychology and the Behavioral Sciences (JPBS). In this article, we illustrate how instructors can use published student scholarship for teaching principles of experimental methodology, the use and interpretation of statistical tests, and the quality and format of American Psychological Association (APA) writing style.
Exercises

Using articles appearing in the student journals listed previously, we developed exercises for each of three types of courses: research methods; statistics; and topical, content-based psychology courses. Instructors can use some exercises, particularly those involving writing assignments, for more than one course. Students can complete each of the exercises in this article within 10 to 25 min, making them ideal for use throughout the semester.

Research Methods or Experimental Psychology

An instructor can conduct this exercise when students are learning the fundamentals of research including independent, dependent, and extraneous variables. The instructor can assign an article as required reading one class period before conducting the exercise or require students to read the article during class; students need read only the introduction and method sections. Independently or in small groups, students can identify manipulated as well as measured variables. A group discussion about the identified variables would follow. We used Sheets (1999) to illustrate identification of independent variables (extraverted or introverted behavior; sex of participant), dependent variables (several Likert-type scales), control procedures (identical stimuli across conditions), and experimental design (2 × 2 ANOVA).

Statistics

For statistics, we found Bleeker, Evans, Fisher, and Miller’s (1998) article particularly helpful. We had students (a) determine one of the primary statistical tests (3 × 2 ANOVA), (b) determine the appropriate number of degrees of freedom, and (c) compare the calculated statistic with the appropriate tabled statistic. With the information available in Bleeker et al. (1998), students can confirm the accuracy of the reported ANOVA degrees of freedom and the decision to reject the null hypothesis.

Students may also discover that reviewers and editors failed to find mistakes in reported results. Verbeck (1996) contained one error in the degrees of freedom because one student’s data were excluded. Recognizing such errors can inform students about the importance of developing their own critical thinking and copyediting skills as well as educating them about degrees of freedom.

Instructors can also use Bleeker et al. (1998) to help students understand another difficult concept: interaction. Instructors can ask students to identify, evaluate, and explain the main effects and interaction findings for the self-esteem two-factor ANOVA. Neither main effect was significant; however, the Gender × Group Participation interaction was significant. Examining Figure 1 (Bleeker et al., 1998, p. 36), which showed the mean self-esteem scores, may help students understand the meaning of an interaction, as can an examination of the discussion section that presented the statistical findings concretely.
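Instructors who prefer not to rely on a printed F table can have students verify reported degrees of freedom and critical values with a few lines of code. The sketch below is a generic illustration rather than a reanalysis of Bleeker et al. (1998); the cell sizes are hypothetical placeholders chosen only to show the degrees-of-freedom arithmetic for a 3 × 2 between-subjects design.

```python
from scipy.stats import f

# Hypothetical cell sizes for a 3 x 2 between-subjects design; replace these
# with the sample sizes actually reported in the article being checked.
levels_a, levels_b, n_per_cell = 3, 2, 10
n_total = levels_a * levels_b * n_per_cell

df_a = levels_a - 1                        # degrees of freedom for factor A
df_b = levels_b - 1                        # degrees of freedom for factor B
df_interaction = df_a * df_b               # degrees of freedom for A x B
df_error = n_total - levels_a * levels_b   # within-cells (error) degrees of freedom

alpha = 0.05
for label, df_effect in [("A", df_a), ("B", df_b), ("A x B", df_interaction)]:
    f_critical = f.ppf(1 - alpha, df_effect, df_error)
    print(f"{label}: F({df_effect}, {df_error}) critical value at alpha = .05 is {f_critical:.2f}")
```

Comparing a reported F ratio with the critical value obtained this way parallels steps (b) and (c) of the exercise and makes slips in reported degrees of freedom, such as the one in Verbeck (1996), easier to spot.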
Topical Courses with Writing Assignments

When students complain about difficulty in writing psychology manuscripts, they usually complain about writing fundamentals and about APA format. Other educators have described strategies for improving the quality of students’ literature reviews (Froese, Gantz, & Henry, 1998) and experimental reports (Ault, 1991; Dunn, 1996, 1999; Peden, 1994; Sternberg, 2000). Web sites list common violations in writing style among undergraduates (e.g., http://puffin.creighton.edu/psy/journal/freqerr.html) and offer self-tests that incorporate many of the more common errors in style and language (e.g., http://www.lemoyne.edu/OTRP/otrpresources/otrp_sciwriting.html). Dunn et al. (2001) offered a checklist of common formatting errors for manuscripts submitted to the PCJUR.

Expression of ideas. This exercise focuses on avoiding students’ common grammatical errors. Instructors can direct students to examine published articles for examples of proper use of difficult grammatical rules. For example, students often have particular difficulty grasping the difference between active and passive voice. APA style experts generally prefer active voice. Peluso (2000) appropriately used active voice throughout the introduction, clearly identifying the actors in the sentence as the researchers. Students can also examine Peluso’s method section to identify examples of acceptable use of passive voice when the focus of the sentence is “on the object or recipient of the action rather than on the actor” (APA, 2001, pp. 41–42).

Mechanics of format: Title and abstract. Two simple in-class exercises involve asking students to check for APA format by counting the number of words in an article’s title and its abstract. According to the Publication Manual (APA, 2001), title length should be 10 to 12 words, and abstract length of a review article should be from 75 to 100 words. West and Berning’s (1999) JPBS article closely approximated both criteria.

Mechanics of format: References. In this exercise, instructors can direct students to count the number of times each item in the references is cited in the article. Although the exercise has nothing to do with the frequency with which authors cite items, students can determine whether the article conforms to the APA requirement that all references cited in the text should appear in the reference list and vice versa. West and Berning (1999) cited all reference list items in the article. In another, more advanced APA-format exercise, instructors can give groups of students an article and direct them to produce the manuscript version for various sections, such as the title or the references. Many students are surprised at the differences between the published and manuscript versions.

With respect to referencing, students seem to have great difficulty mastering the proper use of “et al.” and page numbers when referencing in the text. Students can examine the West and Berning (1999) article and discover on the first page an example of a proper use of “et al.” after all the authors have been cited once. Students can also note that page numbers were used with direct quotes. On the second page, students will find a correct citation of “Zametkin et al. (1990)” denoting the first reference to that work in the article. Instructors can use this example to illustrate the proper use of et al. when a published work has six or more authors. However, the six-or-more-author rule was violated on the third page of the article, in which West and Berning (1999) incorrectly cited “(Rounsaville, Anton, Carroll, Budde, Prusoff et al., 1991)” as well as omitted the comma that belongs between the last author’s name and et al. This erroneous citation demonstrates the difficult nature of mastering complicated citation rules and instructs students to identify and avoid format pitfalls before they complete major writing assignments. Finally, this example also illustrates that editors, reviewers, and copyeditors can overlook such errors.
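For longer manuscripts, the citation-to-reference cross-check can also be automated. The sketch below is a deliberately simplified illustration, not an APA parser: the manuscript text and reference entries are hypothetical, matching is done by year only, and a real check would need to handle author names and "et al." forms more carefully.

```python
import re

# Hypothetical manuscript excerpt and reference list, used only for illustration.
manuscript = (
    "West and Berning (1999) examined self-medication. Earlier work "
    "(Zametkin et al., 1990) reported related findings."
)
reference_list = [
    "West, C., & Berning, J. (1999).",
    "Zametkin, A. J., et al. (1990).",
    "Smith, R. A. (2001).",
]

# Crude pattern for "Author (Year)" and "(Author, Year)" citation forms.
cited_years = set(re.findall(r"\((?:[^()]*?,\s*)?(\d{4})\)", manuscript))
referenced_years = {re.search(r"\((\d{4})\)", entry).group(1) for entry in reference_list}

print("Cited but missing from the reference list:", cited_years - referenced_years)
print("In the reference list but never cited:", referenced_years - cited_years)
```

Even this crude version reproduces the point of the exercise: every in-text citation should appear in the reference list, and vice versa.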
Conclusions

The exercises and specific student reports outlined in this article are only a sample of the possible exercises and published student work available to educators. Instructors can purchase individual journals for classroom use, obtain copyright permission very inexpensively for use of published articles from JPI and PCJUR, or encourage students to view past issues of JPBS online. Further information about student research publications is readily available at Web sites for the three journals discussed in this article (JPI, http://puffin.creighton.edu/psy/journal/JPIhome.html; PCJUR, http://www.mercyhurst.edu/UPD/UPDdescriptions.htm#PsiChi; JPBS, http://alpha.fdu.edu/psychweb/JPBS.htm). Additionally, a Society for the Teaching of Psychology Web site (http://www.lemoyne.edu/OTRP/otrpresources/otrp_undergrad.html) contains information about those and other publication opportunities for undergraduate students.

As a final encouragement for educators to include some of the exercises described in their research methods, statistics, or content-based courses, we summarize some of our students’ observations about these exercises. Students reported that they learned how to apply experimental concepts and statistics to research, format statistical analyses in written text, and improve their writing style. In summary, our experience suggests that students can learn concepts, principles, and writing style from student-generated research. Future research should formally and systematically investigate the pedagogical opportunities that these and similar exercises offer for developing student research and writing skills.

References

American Psychological Association. (2001). Publication manual of the American Psychological Association (5th ed.). Washington, DC: Author.
Ault, R. L. (1991). What goes where? An activity to teach the organization of journal articles. Teaching of Psychology, 18, 45–46.
Bleeker, M. M., Evans, S. C., Fisher, M. N., & Miller, K. A. (1998). The effects of extracurricular activities on self-esteem, academic achievement, and aggression in college students. Psi Chi Journal of Undergraduate Research, 3, 34–38.
Carkenord, D. M. (1994). Motivating students to read journal articles. Teaching of Psychology, 21, 162–164.
Dunn, D. S. (1996). Collaborative writing in a statistics and research methods course. Teaching of Psychology, 23, 38–40.
Dunn, D. S. (1999). The practical researcher: A student guide to conducting psychological research. New York: McGraw-Hill.
Dunn, J., Ford, K., Rewey, K. L., Juve, J. A., Weise, A., & Davis, S. F. (2001). A modified presubmission checklist. Psi Chi Journal of Undergraduate Research, 6, 142–144.
Froese, A. D., Gantz, B. S., & Henry, A. L. (1998). Teaching students to write literature reviews: A meta-analytic model. Teaching of Psychology, 25, 102–105.
Halonen, J. S. (1995). Demystifying critical thinking. Teaching of Psychology, 22, 75–81.
Halpern, D. F., Appleby, D. C., Beers, S. E., Cowan, C. L., Furedy, J. J., Halonen, J. S., Horton, C. P., Peden, B. F., & Pittenger, D. F. (1993). Targeting outcomes: Covering your assessment concerns and needs. In T. V. McGovern (Ed.), Handbook for enhancing undergraduate education in psychology (pp. 23–46). Washington, DC: American Psychological Association.
Henderson, B. B. (1995). Critical-thinking exercises for the history of psychology course. Teaching of Psychology, 22, 60–63.
Hubbard, R. W., & Ritchie, K. L. (1995). The human subjects review procedure: An exercise in critical thinking for undergraduate experimental psychology students. Teaching of Psychology, 22, 64–65.
Mathie, V. A., Beins, B., Benjamin, L. T., Jr., Ewing, M. M., Hall, C. C. I., Henderson, B., McAdam, D. W., & Smith, R. A. (1993). Promoting active learning in psychology courses. In T. V. McGovern (Ed.), Handbook for enhancing undergraduate education in psychology (pp. 183–214). Washington, DC: American Psychological Association.
McGovern, T. V. (Ed.). (1993). Handbook for enhancing undergraduate education in psychology. Washington, DC: American Psychological Association.
Peden, B. F. (1994). Do inexperienced and experienced writers differentially evaluate Ault’s (1991) “What Goes Where” technique? Teaching of Psychology, 21, 38–40.
Peluso, E. A. (2000). Skilled motor performance as a function of types of mental imagery. Journal of Psychological Inquiry, 5, 11–14.
Pennington, H. (1992). Excerpts from journal articles as teaching devices. Teaching of Psychology, 19, 175–177.
Perry, N. W., Huss, M. T., McAuliff, B. D., & Galas, J. M. (1996). An active-learning approach to teaching the undergraduate psychology and law course. Teaching of Psychology, 23, 76–81.
Seegmiller, B. R. (1995). Teaching an undergraduate course on intrafamily abuse across the life span. Teaching of Psychology, 22, 108–112.
Sheets, K. J. (1999). Effects of extraversion and introversion on job interview success. Journal of Psychological Inquiry, 4, 7–11.
Smith, R. A., & Davis, S. F. (2001). The psychologist as detective: An introduction to conducting research in psychology (2nd ed.). Upper Saddle River, NJ: Prentice Hall.
Sternberg, R. J. (Ed.). (2000). Guide to publishing in psychology journals. Boston: Cambridge University Press.
Suter, W. N., & Frank, P. (1986). Using scholarly journals in undergraduate experimental methodology courses. Teaching of Psychology, 13, 219–221.
Verbeck, A. (1996). Perceived attractiveness of men who cry. Journal of Psychological Inquiry, 1, 5–10.
Wade, C. (1995). Using writing to develop and assess critical thinking. Teaching of Psychology, 22, 24–28.
West, C., & Berning, J. (1999). Self-medication with sucrose in attention deficit hyperactivity disorder. The Journal of Psychology and the Behavioral Sciences, 13, 20–26.

Notes

1. We thank Randolph A. Smith and four anonymous reviewers for their original ideas and important contributions to this article.
2. Send correspondence to Mark E. Ware, Department of Psychology, Creighton University, Omaha, NE 68178; e-mail: meware@creighton.edu.

Coverage of Industrial/Organizational Psychology in Introductory Psychology Textbooks: An Update

Douglas C. Maynard
Karen L. Geberth
Todd A. Joseph
State University of New York at New Paltz

Carlson and Millard (1984) found that Introductory Psychology textbooks provided limited, if any, coverage of industrial/organizational (I/O) psychology. We examined current Introductory Psychology textbooks (N = 54) for the presence of a section, appendix, or chapter on I/O psychology. We also coded textbooks for the number of pages containing I/O content. Results were similar to those found previously; only about one fourth of textbooks contained an overview of the field in some form. Full-length textbooks were more likely than brief versions to contain an I/O section, appendix, or chapter. On average, less than 2% of the total number of textbook pages contained work-related concepts or examples; this percentage was similar for full-length and brief textbooks.
We argue that increased coverage of I/O psychology is warranted in future text editions.

Student exposure to industrial/organizational (I/O) psychology in Introductory Psychology courses would be beneficial for several reasons. First, because more psychology departments are offering I/O courses (Perlman & McCann, 1999), early exposure to the field would help students make more informed decisions about taking an I/O course. Second, instructors can use I/O psychology examples to demonstrate how psychologists apply core psychological concepts (e.g., motivation) to real-world problems. Third, psychology undergraduates are more likely to find employment in business and management than any other occupational area (“A Look at Recent Baccalaureates in Psychology,” 2000). Fourth, the general public lacks awareness of the field of I/O psychology (Gasser et al., 1998); student exposure to I/O in a course as popular as Introductory Psychology would help to ameliorate this problem.

Textbook coverage of I/O psychology, which is outside of the expertise of most instructors, may determine the likelihood that these instructors discuss the field during class time. Carlson and Millard (1984) conducted a content analysis of Introductory Psychology texts and found that only one fourth of these texts included a section on I/O psychology; another fourth failed to include any material directly related to the field. Several changes have taken place in the Introductory Psychology textbook market since then (Griggs, Jackson, Christopher, & Marek, 1999). For example, fewer textbooks exist, but brief versions of full-length texts are now available. Texts generally feature fewer chapters, partly because fewer topics receive two-chapter coverage. However, the number of pages per text has increased sharply. These changes may have allowed authors to devote more text space to less traditional topics such as I/O psychology (Griggs et al., 1999). Thus, a reexamination of I/O psychology coverage in Introductory Psychology textbooks seems appropriate. We examined I/O psychology content in current texts and also compared the coverage of I/O psychology in full-length versus brief versions.

Method

Jackson, Griggs, Koenig, Christopher, and Marek (2000) identified 57 Introductory Psychology textbooks published between 1997 and 2000. We did not code 4 textbooks (Bourne & Russo, 1998; Dworetzky, 1997; Fernald, 1997; Wallace & Goldstein, 1997) because the publishers of these textbooks indicated that there were no plans to publish new editions. We coded each of the remaining 53 textbooks (37 full-length and 16 brief) plus 1 new full-length textbook (Passer & Smith, 2001) for the presence of an I/O section, chapter, or appendix by carefully scanning the table of contents. Six of the full-length textbooks and 2 of the brief textbooks we coded were new editions with a 2001 copyright.

We also identified the total number of pages that contained work-related content by searching textbook indexes for entries related to I/O psychology, using a list of 16 work-related terms (see Table 1). We counted any index entry that matched the term or had the term as its root (e.g., job enrichment for the term job). We then arrived at a percentage of pages with I/O content by dividing the number of such pages by the total number of pages. When it was not clear whether an index entry actually referred to I/O content, we checked the actual page or pages indexed. Two separate researchers coded each textbook, and we resolved any discrepancies via joint examination of the textbook.
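The page-percentage measure reduces to counting the distinct pages listed under work-related index terms and dividing by the length of the text. The sketch below is illustrative only; the index entries and page total are hypothetical and do not come from any of the coded textbooks.

```python
# Hypothetical index entries for one textbook: each work-related term from
# Table 1 (or a term having it as a root) maps to the pages its index lists.
index_entries = {
    "job satisfaction": [412, 413],
    "leadership": [298, 299, 300],
    "personnel selection": [410],
    "motivation (work-related)": [301],
}
total_pages = 640  # hypothetical length of the bound text

# Count each page only once, even if several work-related terms point to it.
io_pages = {page for pages in index_entries.values() for page in pages}
percent_io = 100 * len(io_pages) / total_pages

print(f"{len(io_pages)} pages with I/O content out of {total_pages} "
      f"({percent_io:.1f}% of the text)")
```

Counting distinct pages, rather than index entries, avoids double-counting pages to which several work-related terms point, which is consistent with the page-based percentages reported in the Results.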
Results

Only 2 texts contained a full chapter on I/O psychology (3.8%), and for 1 of these texts (Huffman, Vernoy, & Vernoy, 2000), the chapter was not part of the bound text and had to be requested by the instructor as a supplement. At least 1 text (Baron, 2001) had an I/O psychology chapter in the previous edition but not in the current one. Fourteen texts (25.9%) contained some overview of the field of I/O, either as a chapter, appendix, or section; this percentage is almost identical to the 25% found by Carlson and Millard (1984). References of texts with substantial coverage of I/O psychology appear in the Appendix with the type of coverage (section, appendix, or chapter) and the number of pages in parentheses following the reference.

Table 1. Terms Used in Searching Textbook Indexes

Business, Burnout, Conflict (a), Discrimination (a), Employee/employer, Industrial, Job/job performance, Leadership, Management, Motivation (a), Organizations/organizational, Personnel, Stress (a), Testing (a), Vocation/vocational, Work/workplace
(a) Work related.

Texts varied in the percentage of pages that included at least some I/O content, from 0% to 5.7% (M = 1.6%, Mdn = 1.4%). By comparison, Carlson and Millard (1984) found an average percentage of 0.4%. Eight textbooks (14.8%) contained no work-related concepts or mentioned I/O psychology only briefly in the introductory chapter. A comparison of full-length and brief versions reveals that full-length versions were more likely to contain a chapter, appendix, or section on I/O (12 of 38, 31.6%) than brief versions (2 of 16, 12.5%). However, the median percentage of pages containing at least some I/O material was similar for full-length and brief versions (1.4% and 1.3%, respectively).

Discussion

Based on the results of this content analysis, it appears that the amount of coverage of I/O psychology in Introductory Psychology textbooks has changed little over the past two decades (cf. Carlson & Millard, 1984). Both studies reveal that only one fourth of the textbooks contained an overview of the field of I/O psychology. Many of the current textbooks that do offer extensive coverage of I/O psychology marginalize the material by placing it in an appendix or offering it separately from the text. On the positive side, only three current textbooks (5.7%) failed to mention I/O psychology at all, as compared to 25% found by Carlson and Millard (1984). In addition, we found a slightly higher percentage of textbook pages with I/O related content than in the previous study (M = 1.6 vs. M = 0.4). However, the meaning of this increase is not clear because we do not know how many search terms Carlson and Millard (1984) actually used in searching the textbook indexes (they mentioned eight as examples). Furthermore, both studies overestimated the actual percentage of I/O coverage because many index entries refer to pages in the text that are only partially devoted to I/O content (e.g., one paragraph).

Introductory Psychology instructors can expose students to I/O psychology by adopting 1 of the 14 texts listed in the Appendix that do provide substantial coverage of I/O psychology. Instructors who have adopted different texts can invite faculty trained in I/O psychology to present a guest lecture.
Additionally, instructors can use An Instructor’s Guide for Introducing Industrial-Organizational Psychology (Bachiochi et al., 1999), a resource developed by the Education and Training Committee of the Society for Industrial and Organizational Psychology (SIOP) for instructors with relatively little exposure to the field (Bachiochi & Major, 1999). The Guide consists of ready-to-present modules that are available on the Internet in PowerPoint® format, along with suggestions for exercises, discussions, supplemental readings, and videotapes. Despite these alternatives, I/O psychology will not receive consistent attention in Introductory Psychology courses until more authors include sections or chapters in their texts.

Increased coverage of I/O psychology is warranted for several reasons. As mentioned earlier, exposure to I/O psychology concepts could increase public awareness of the field and inform student course registration decisions. Additionally, the field has experienced rapid growth in the last two decades (Smither, 1998). The employment outlook for those with graduate degrees in I/O psychology is quite promising, with varied opportunities in both applied and academic settings (Burnfield & Medsker, 1999; Schultz & Schultz, 1998). Texts should reflect the fact that I/O psychology is increasingly important and that individuals with advanced training in the area are currently in demand. Finally, because most adults spend about one third of their waking hours at work (Aamodt, 1999), Introductory Psychology texts should illustrate how psychologists apply scientific knowledge to real-world problems in an effort to improve productivity and maximize the quality of the work experience. With increased coverage, students will better appreciate the diversity of psychology and its relevance to everyday life.

References

Aamodt, M. G. (1999). Applied industrial/organizational psychology (3rd ed.). Pacific Grove, CA: Brooks/Cole.
Bachiochi, P. D., Day, D., Kraiger, K., Lowenberg, G., Rentsch, J., & Stanton, J. (1999). An instructor’s guide for introducing industrial-organizational psychology. Retrieved March 30, 2001, from http://www.siop.org/Instruct/InGuide.htm
Bachiochi, P. D., & Major, D. (1999). Spreading the good word: Introducing I-O in introductory psychology. The Industrial-Organizational Psychologist, 37, 108–110.
Baron, R. A. (2001). Psychology (5th ed.). Needham Heights, MA: Allyn & Bacon.
Bourne, L. E., Jr., & Russo, N. F. (1998). Psychology: Behavior in context. New York: Norton.
Burnfield, J. L., & Medsker, G. J. (1999). Income and employment of SIOP members in 1997. The Industrial-Organizational Psychologist, 36(4), 19–30.
Carlson, S., & Millard, R. J. (1984). The treatment of industrial/organizational psychology in introductory psychology textbooks. Teaching of Psychology, 11, 243–244.
Dworetzky, J. P. (1997). Psychology (6th ed.). Belmont, CA: Wadsworth.
Fernald, L. D. (1997). Psychology. Upper Saddle River, NJ: Prentice Hall.
Gasser, M., Whitsett, D., Mosley, N., Sullivan, K., Rogers, T., & Tan, R. (1998). I-O psychology: What’s your line? The Industrial-Organizational Psychologist, 35(4), 120–126.
Griggs, R. A., Jackson, S. L., Christopher, A. N., & Marek, P. (1999). Introductory psychology textbooks: An objective analysis and update. Teaching of Psychology, 26, 182–189.
Huffman, K., Vernoy, M., & Vernoy, J. (2000). Psychology in action (5th ed.). New York: Wiley.
Jackson, S. L., Griggs, R. A., Koenig, C. S., Christopher, A. N., & Marek, P. (2000). A compendium of introductory psychology texts: 1997–2000. Retrieved March 30, 2001, from http://www.lemoyne.edu/OTRP/introtexts.html
A look at recent baccalaureates in psychology. (2000, January). APA Monitor, 31, 13.
Passer, M. W., & Smith, R. E. (2001). Psychology: Frontiers and applications. New York: McGraw-Hill.
Perlman, B., & McCann, L. I. (1999). The most frequently listed courses in the undergraduate psychology curriculum. Teaching of Psychology, 26, 177–182.
Schultz, D. P., & Schultz, S. E. (1998). Psychology and work today (7th ed.). Upper Saddle River, NJ: Prentice Hall.
Smither, R. D. (1998). The psychology of work and human performance (3rd ed.). New York: Longman.
Wallace, P. M., & Goldstein, J. H. (1997). An introduction to psychology (4th ed.). New York: McGraw-Hill.

Appendix
Introductory Psychology Texts With Significant Coverage of Industrial/Organizational (I/O) Psychology

Full-Length Texts

Coon, D. (2001). Introduction to psychology: Exploration and application (9th ed.). Belmont, CA: Wadsworth. (section, 17 pages)
Davis, S. F., & Palladino, J. J. (2000). Psychology (3rd ed.). Upper Saddle River, NJ: Prentice Hall. (chapter, 27 pages)
Gerow, J., & Bordens, K. (2000). Psychology: An introduction (6th ed.). Carrollton, TX: Alliance. (section, 16 pages)
Halonen, J. S., & Santrock, J. W. (1999). Psychology: Contexts of behavior (3rd ed.). New York: McGraw-Hill. (section, 14 pages)
Hockenbury, D. H., & Hockenbury, S. E. (2000). Psychology (2nd ed.). New York: Worth. (appendix, 17 pages)
Huffman, K., Vernoy, M., & Vernoy, J. (2000). Psychology in action (5th ed.). New York: Wiley. (chapter, 37 pages)
Lahey, B. B. (2001). Psychology: An introduction (7th ed.). New York: McGraw-Hill. (section, 20 pages)
Lefton, L. A. (2000). Psychology (7th ed.). Needham Heights, MA: Allyn & Bacon. (section, 26 pages)
Morris, C. G., & Maisto, A. A. (1999). Psychology: An introduction (10th ed.). Upper Saddle River, NJ: Prentice Hall. (appendix, 24 pages)
Sdorow, L. N. (1998). Psychology (4th ed.). New York: McGraw-Hill. (appendix, 17 pages)
Smith, B. D. (1998). Psychology: Science & understanding. New York: McGraw-Hill. (section, 15 pages)
Weiten, W. (2001). Psychology: Themes and variations (5th ed.). Belmont, CA: Wadsworth. (appendix, 21 pages)

Brief Texts

Coon, D. (2000). Essentials of psychology: Exploration and application (8th ed.). Belmont, CA: Wadsworth. (appendix, 15 pages)
Rathus, S. A. (2000). Psychology: The core. Fort Worth, TX: Harcourt College. (section, 18 pages)

Notes

1. We thank three anonymous reviewers for their helpful comments on an earlier version of this article.
2. Send correspondence to Douglas C. Maynard, Department of Psychology, SUNY New Paltz, 75 South Manheim Boulevard, Suite 6, New Paltz, NY 12477–2440; e-mail: maynardd@newpaltz.edu.
I assessed this program from the point of view of the major stakeholders.1 Course Design and Implementation Teaching Psychology in the Context of a University–Community Partnership Kathy Pezdek Claremont Graduate University and Pomona College In this article I describe and assess a service-learning project in which college students, in the context of a psychology course, served as auxiliary math teachers in a public elementary school. They met with an assigned group of students 3 hr per week for a semester. The college students studied and then applied principles from a broad spectrum of psychology including cognitive, developmental, and educational psychology. This experience in applying psychology outside the classroom both enhanced their appreciation of psychology and fulfilled their desire for community service. The elementary school students benefited as well; mean math ability increased over the course of the semester. Never doubt that a small group of committed people can change the world. Indeed, it’s the only thing that ever has. —Margaret Mead (1964, p. 49) After 25 years as a psychology professor, I wanted to try something new. Recognizing that undergraduate students need to see applications of psychology to maintain their interest in the discipline, I developed a service-learning course in which students served as auxiliary math teachers in a public elementary school. Service learning in higher education dates back to the 1920s when civic education was considered a key factor in promoting a democratic society (Carver, 1997). Theoretically, the roots of service learning in this country are grounded in Dewey’s (1938) notion of experiential learning—the idea that there should be a link between students’ education and their lives outside of school. Although this educational philosophy has not always been popular, the number of service-learning college courses has increased in the last decade (Stukas, Clary, & Snyder, 1999; Underwood, Welsh, Gauvain, & Duffy, 2000). The success of service-learning courses has been documented both in terms of college student satisfaction (Chapdelaine & Chapman, 1999; Clements, 1995; Hardy & Schaen, 2000) and the usefulness Vol. 29, No. 2, 2002 I developed a four-unit undergraduate course, “Field Work in Applied Psychology: Teaching Mathematics.” Fifteen students (maximum enrollment for a field work course) enrolled. Nine students were psychology majors; most were second-year students. Each college student served as an auxiliary math teacher for a group of 10 or fewer fourth-, fifth-, or sixth-grade students. Each college student maintained the same math group throughout the semester and met with his or her math group twice a week for 90 min per session. This math instruction supplemented the regular math instruction by the classroom teachers. To promote the autonomy of the college student teachers, they met their math groups outside the regular classrooms. Students maintained field notes for each session taught and e-mailed them to me after each session. Students organized field notes around the following four subheadings: 1. What topic did you present? 2. What specific teaching strategies did you attempt? 3. How effective were the strategies you attempted to implement? 4. Notable insights. (What did you learn about your students, the teaching of mathematics, or yourself?) The classroom component of the course focused on the psychology literature on teaching and learning mathematics. 
Readings came from cognitive psychology, developmental psychology, and educational psychology. Principal texts for the course were The Learning Gap (Stevenson & Stigler, 1992) and Knowing and Teaching Elementary Mathematics (Ma, 1999). We also spent several class sessions discussing the videotapes of sample eighth-grade math classes from the Third International Mathematics and Science Study (National Center for Education Statistics, 1996). I required students to conduct an empirical research project on some aspect of teaching mathematics. This assignment gave them experience conceptualizing and conducting research in a specific applied setting. Ideas for research projects came from assigned readings, class discussions, and is1 The Evaluation Thesaurus (Scriven, 1991) defines stakeholder as, “One who has substantial ego, credibility, power, futures, or other capital invested in the program, and thus can be held to be to some degree at risk with it” (p. 334). 157 sues that occurred to the college students in teaching their math groups. Structure and Substance of the Math Instruction Superordinate goals for the elementary school students were to increase their mathematical fluency with basic skills that generalize to real-world problem solving and to enhance their appreciation of the importance of mathematics. To achieve these goals, I first provided the college students with the school district’s upper grade curriculum goals. During initial sessions with their math groups, college students assessed the competency of their students in terms of these various goals. They then prioritized topics to include in subsequent lesson plans. They met regularly with their supervising classroom teachers for input regarding appropriate math topics to consider. Weekly meetings of the college class focused on pedagogical techniques and the psychological research from which they were derived. The two most frequently discussed pedagogical techniques had been identified as best practices from cross-cultural research on math instruction (Stigler, Lee, & Stevenson, 1987). I encouraged the college math teachers to employ a problem-based approach to teaching (e.g., Stigler & Stevenson, 1991); every lesson was to be motivated by a specific real-world problem. As part of the problem-based approach, the college students used the Exemplars program (www.exemplars.com). This program includes a large corpus of teacher-developed and teacher-tested math problems on a wide range of topics. Based on the second pedagogical technique, I encouraged the college students to focus on teaching the conceptual process by which solutions are reached rather than simply whether a correct answer is obtained. frustrating than teaching a traditional course, it was also more rewarding. I liked the changes that I observed in the college students, and it was exciting to watch the change in the elementary school students’ enthusiasm for math. Students’ comments in class as well as their field notes revealed that, over the semester, most became more respectful of the difficult job that teachers face, less externalizing of the problems in schools, more sensitive to the needs of school-age children, and more aware of the need for research on classroom learning. Four of the 15 students in the class made arrangements to continue working with their math group as volunteers the following year. 
Obtaining Buy-In2 From the Participating School

The school selected for this program is two blocks from the Claremont Colleges. I met with the principal and the five participating teachers several times prior to beginning the program. The program was optional for the teachers; all chose to participate. Although each teacher welcomed the program enthusiastically, much time was necessary to ensure the teachers' cooperation with the goals and structure of the program.

2 The term buy-in is a common concept in the program evaluation literature. It refers to the process of obtaining the cooperation of the major stakeholder.

Program Goals of the Major Stakeholders and Results

Goals of the Professor and Results

I developed this program to energize my teaching, and this goal was more than satisfied. Although setting up and running this program was more effortful, time consuming, and frustrating than teaching a traditional course, it was also more rewarding. I liked the changes that I observed in the college students, and it was exciting to watch the change in the elementary school students' enthusiasm for math. Students' comments in class as well as their field notes revealed that, over the semester, most became more respectful of the difficult job that teachers face, less inclined to externalize the problems in schools, more sensitive to the needs of school-age children, and more aware of the need for research on classroom learning. Four of the 15 students in the class made arrangements to continue working with their math group as volunteers the following year.

Goals of the College Students and Results

At the end of the course, I asked the college students to specify their top three personal reasons for enrolling in this course and to indicate, on a 5-point scale ranging from 1 (goal not satisfied) to 5 (goal satisfied), the fulfillment rating for each. The most frequently mentioned goal for taking this class was best expressed in one student's statement: "I really love working with children, so I hoped to form strong relationships with all of my students. I wanted to feel like I was having some impact on their lives." For the 7 students who expressed this goal, the mean fulfillment rating was 4.57 (SD = .49). The second most frequently mentioned goal was to explore teaching as a possible career choice. For the 6 students who expressed this goal, the mean fulfillment rating was 4.67 (SD = .47). The third most frequently mentioned goal was reflected in one student's statement: "I am interested in a change from the intellectual/abstract/classroom basis of learning predominant among college courses." For the 3 students who expressed this goal, the mean fulfillment rating was 5.00 (SD = 0).
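The raw ratings behind these goal-fulfillment statistics were not reported, so the short Python sketch below is illustrative only: the rating sets are assumptions, not data from the study. It shows one combination of integer ratings per goal that reproduces the reported group sizes, means, and standard deviations exactly when the population standard deviation is used, which suggests (though it does not prove) that the printed SDs are population rather than sample values.

```python
# Illustrative only: hypothetical rating sets consistent with the reported
# group sizes, means, and standard deviations. The actual ratings were not
# published; pstdev (population SD) reproduces the printed values.
from statistics import mean, pstdev

hypothetical_ratings = {
    "Form relationships with children (n = 7)": [5, 5, 5, 5, 4, 4, 4],
    "Explore teaching as a career (n = 6)": [5, 5, 5, 5, 4, 4],
    "Change from classroom learning (n = 3)": [5, 5, 5],
}

for goal, ratings in hypothetical_ratings.items():
    print(f"{goal}: M = {mean(ratings):.2f}, SD = {pstdev(ratings):.2f}")

# Output:
# Form relationships with children (n = 7): M = 4.57, SD = 0.49
# Explore teaching as a career (n = 6): M = 4.67, SD = 0.47
# Change from classroom learning (n = 3): M = 5.00, SD = 0.00
```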
Goals of the Principal and Teachers and Results

The principal and teachers welcomed this program because they wanted an intervention that increased their students' math ability. Through this program, every upper grade student in the school received an extra 3 hr of math instruction each week of the semester, in a group of 10 or fewer students. Fourteen randomly selected students from the program were interviewed afterward to assess their perceptions. The modal comments were that the program had increased their confidence in math and made math more enjoyable. To assess the effectiveness of the program in terms of the grade-school students' math achievement, we compared performance on the math subtests of the Stanford 9 standardized test in April of the year in which this intervention program was implemented with performance of the same students on the same subtests the previous year. Because Stanford 9 scores are presented as percentile ranks, students typically stay at the same percentile rank from year to year without an enhanced instructional intervention. For the students in our program, the mean improvements were 5% for fourth graders, 12% for fifth graders, and 13% for sixth graders.

Conclusions

For both the professor and the college students, this course was more demanding and time consuming than a typical college course. However, the rewards were numerous. It was gratifying for the college students to watch the metamorphosis of their students from math underachievers to competent and enthusiastic students of math. Although it is not possible to draw a causal link between this service-learning program and the improvement in the elementary school students' math scores over the course of the semester, the improvements were applauded by the principal and teachers. Seeing psychological principles effectively applied in this way enhanced the college students' appreciation of psychology. For me, developing this university–community partnership for teaching psychology was far more engaging than traditional classroom teaching. Similar opportunities could be developed for teaching reading, science, or other academic disciplines. Within psychology there is a rich research literature on the principles underlying teaching and learning within each of these academic domains.

References

Carver, J. (1997). Theoretical underpinnings of service-learning. Theory into Practice, 36, 143–149.
Chapdelaine, A., & Chapman, B. L. (1999). Using community-based research projects to teach research methods. Teaching of Psychology, 26, 101–105.
Clements, A. D. (1995). Experiential-learning activities in undergraduate developmental psychology. Teaching of Psychology, 22, 115–118.
Dewey, J. (1938). Experience and education. New York: Macmillan.
Hardy, M. S., & Schaen, E. B. (2000). Integrating the classroom and community service: Everyone benefits. Teaching of Psychology, 27, 47–49.
Ma, L. (1999). Knowing and teaching elementary mathematics. Mahwah, NJ: Lawrence Erlbaum Associates, Inc.
Mead, M. (1964). Continuities in cultural evolution. New Haven, CT: Yale University Press.
National Center for Education Statistics. (1996). Pursuing excellence: Initial findings from the Third International Mathematics and Science Study. Washington, DC: U.S. Department of Education.
National Research Council. (1989). Everybody counts: A report to the nation on the future of mathematics education. Washington, DC: National Academy Press.
Raupp, C. D., & Cohen, D. C. (1992). "A thousand points of light" illuminate the psychology curriculum: Volunteering as a learning experience. Teaching of Psychology, 19, 25–30.
Scriven, M. (1991). Evaluation thesaurus (4th ed.). London: Sage.
Stevenson, H. W., Lee, S.-Y., & Stigler, J. W. (1986). Mathematics achievement of Chinese, Japanese, and American children. Science, 231, 693–699.
Stevenson, H. W., & Stigler, J. W. (1992). The learning gap. New York: Simon & Schuster.
Stigler, J. W., Lee, S.-Y., & Stevenson, H. W. (1987). Mathematics classrooms in Japan, Taiwan, and the United States. Child Development, 58, 1272–1285.
Stigler, J. W., & Stevenson, H. W. (1991, Spring). How Asian teachers polish each lesson to perfection. American Educator, 12–47.
Stukas, A. A., Jr., Clary, E. G., & Snyder, M. (1999). Service-learning: Who benefits and why (Social Policy Report Vol. 13, No. 4). Ann Arbor, MI: Society for Research in Child Development.
Underwood, C., Welsh, M., Gauvain, M., & Duffy, S. (2000, November). Learning at the edges: Challenges to the sustainability of service-learning in higher education. Journal of Language and Learning Across the Disciplines, 4, 7–26.

Note

Send correspondence and requests for the complete syllabus and reading list to Kathy Pezdek, Department of Psychology, Claremont Graduate University, Claremont, CA 91711; e-mail: Kathy.Pezdek@cgu.edu.