Bias in Evaluating the Required Methods Course


Joseph F. Fletcher and Michael Painter-Main

University of Toronto

Prepared for Presentation at the American Political Science Association

Teaching and Learning Conference, Washington DC, February 17, 2012.

Bias in Evaluating the Required Methods Course

Abstract

Undergraduate Political Science programs often require students to take a research methods course. And such courses are typically among the most poorly rated. This is due, in part, to the way in which such courses are evaluated. At our university and elsewhere, students are asked whether or not they would retake a course, “disregarding…program or degree requirements.”

This formulation artificially inflates the number of negative responses and thus substantially lowers the “retake” rate widely used by students, instructors and administrators to assess a course. By adding a question to the standard course evaluation in our required methods course, we found evidence that posing the question in this way introduces considerable bias into the retake rate. Our added question asks students, in deciding whether they would retake the course, to consider its value in preparing them for further study and future work. The results show markedly fewer negative responses, resulting in a more positive overall evaluation. We locate these results in the framing and course-evaluation literatures and show through multivariate analyses that asking students to disregard requirement status actually cues them to take it into consideration.

1. Introduction

Course evaluations are a common part of contemporary university and college life. Near the end of each term students are typically asked to evaluate the course on a standardized form. To reduce bias in the responses, the instructor is asked to leave the room while the students complete the evaluation. The results are generally published prior to the subsequent term, ostensibly to aid students in the selection of their courses. While a great deal of information is collected on the evaluation forms, in practice students, professors and administrators tend to focus upon a single item, colloquially referred to as the retake rate. This is the percentage of students indicating that in light of their experience they would still take the course. This figure is prominently displayed in the summaries made available to students at the beginning of each subsequent term. Moreover, departmental chairs, such as our own, often refer to this retake rate in their annual evaluation of the teaching staff. Promotion and tenure committees do likewise.

Accordingly, professors are highly interested in this figure.

Having worked for many years as instructors in a required research methods course, naturally we too have become interested in the retake rate. This course has consistently received one of the lowest retake rates in the department. And we have heard about this in our annual reviews. Our efforts over the years to increase the retake rate for the course have met with only marginal success. When we complained of this to one of our colleagues, he mentioned that research methods courses are often among the most poorly rated across the disciplines. And indeed we consoled ourselves for several years by comparing our course’s retake rates to those of similar courses in other disciplines.


A few years back we decided to look closely at the question behind the retake rate that appears on the standardized form used across the Faculty of Arts and Science at our University. It reads:

“Considering your experience with this course, and disregarding your need for it to meet program or degree requirements, would you still have taken this course?”

It occurred to us that this item was constructed, likely with the best of intentions, in at least partial awareness that required courses are likely regarded somewhat differently by students than are elective ones. No doubt the authors of this question were seeking to allow for this fact by inserting the parenthetical phrase “disregarding your need for it to meet program or degree requirements.”

Presumably, it was thought this instruction would aid students in disregarding the status of the course as either required or elective in making their evaluations. Unfortunately, even the best of intentions can go awry and have unintended consequences.

2. Literature Review

Even a passing awareness of the framing literature suggests that different question wordings can elicit different responses. As the work of Kahneman (2011) reminds us, questions can highlight different aspects of the same situation, with considerable impact upon the answers provided. In the current situation, the phrasing designed, no doubt, to get students to discount their feelings about the required status of the course may well prime, or increase the salience or availability of, those same feelings. As Higgins (1996) tells us, "priming occurs when a stimulus subconsciously activates existing memories.” In the present instance the mere mention of a requirement may enhance the accessibility of feelings about required courses, inadvertently priming the very considerations that the question is presumably meant to avoid. Faced with similar problems, survey researchers have become particularly attentive to framing effects which may lead to priming (see for example Zaller 1992; Kinder 2003; Nelson and Kinder 1996; Chong and Druckman 2007; Druckman 2004; Johnson 2011; Gamson and Modigliani 1987).

There is, of course, also a considerable literature on course evaluations. It has focused on three overarching themes: the teacher, the student and the course. Perhaps the most studied aspect of course evaluations is student evaluations of teaching. Factors underlying teaching evaluations include the experience of the teacher, the materials used in the course by the teacher, as well as a teacher’s personality and behaviour toward the students (Marsh and Dunkin 1992; Simpson and Siguaw 2000; Clayson 1999). Student features largely concern personal identity or traits. These include gender (Tatro 1995), part-time or full-time status, enthusiasm going into a class, and the length of the student's university experience (Frey, Leonard and Beatty 1975). Finally, attention has been given to the nature of the course, involving issues of course difficulty and expected grades (Pounder 2007; Braskamp and Ory 1994; Marsh 1987) and whether the class is required or an elective, with required courses seen as harder and producing lower expected grades, leading to lower course evaluations (Wachtel 1998; Scherr and Scherr 1990; Petchers and Chow 1988; Darby 2006; Lovell and Haner 1955; Pohlmann 1975; Landis 1977; Pounder 2007).

Research focusing on required courses suggests that students bring prior attitudes toward the class to bear in their course evaluations (Barke, Tollefson and Tracy 1983). In particular, prior interest has been found to be a good predictor of overall evaluation of the class (Barth 2008; Francis 2011). A second prior consideration is the degree of initial enthusiasm a student brings to a course, with dread and resentment being common responses to having to take a required course (Coleman and Conrad 2007). Such feelings are often linked to preconceived notions of the course (Heise 1979; 2002). Perhaps the poster child for such feelings is the required methods or statistics course, particularly in the social sciences, as many students are reluctant to take quantitative methodology courses (Alvarez 1992). Lewis-Beck (2001) surmises that this is a result of students lacking an inclination for mathematics, as well as possessing little prior experience in the subject matter. Such prior negative attitudes and beliefs greatly affect how the course is evaluated (Gal and Ginsburg 1994). Hence statistics-oriented courses tend to produce poorer student ratings than non-statistics courses (Coleman and Conrad 2007).

3. Methodology

The data for this study come from University of Toronto students taking a full-year quantitative methods course in Political Science between 2009 and 2011.[1] The course is a required class for specialists, a designation applied to those whose primary focus is the study of politics.[2] In each instance students voluntarily filled out course evaluations near the end of term.

[1] All classes were taught by the same instructor, precluding any examination of instructor effects.

[2] For example, specialists are required to take half of their 20 credits in political science courses; Political Science majors must take seven.

Insofar as the questions on the standardized course evaluations were not under our control, we could not experiment with their wording. So we took advantage of an opportunity to pose additional questions at the end of the standard evaluation. The additional items appeared on a separate sheet, and students were asked to enter their responses on the standardized form in a special area designated for "additional statements or questions which may be supplied in class.”

One of the additional items we included asked: “Considering the value of this course in preparing for future study and future work, would you still have taken this course?” The options of yes and no were available, though some students chose to leave the item blank. In selecting this particular wording we did not aim to arrive at a perfectly unbiased question. Instead we sought to provide students with an alternative frame for the retake item, one that primed different considerations than the traditional question. We reasoned that a plausible alternative frame for evaluating a course might cue students to consider their possible futures in academic or working life.

Data analyses are conducted using cross-tabulation and logistic regression. Our goal is not to consider all of the potential factors that may affect course evaluations; others have done this. Rather, our purpose is to determine why the two versions of the question produce such different results. For this we look at factors such as whether the course is a required one for the student, initial enthusiasm, perceived course difficulty, grade expectations and the value of the course as a learning experience. Our analysis enables us to consider how these factors differentially predict responses to the two forms of the retake question.

4. Findings

Table 1 presents the results for the traditional retake question as well as for the revised version we introduced in recent classes. As the left-hand column shows, the traditional question produces a roughly 40-60 split, with most students indicating that they were unwilling to retake the course.

In the right-hand column, however, we see a complete reversal of this distribution: with the revised question, nearly 60% of respondents now favor retaking the course. The chi-square analysis indicates that there is an extremely small chance (p = .004) that the two questions are measuring the same thing.

Evidently, how one frames the retake question makes a difference. And this difference is not merely statistical: depending upon which question is used, the majority flips from negative to positive. Framing the retake question in the traditional way produces a majority who say they are unwilling to retake the course. Alternatively, framing the retake question in a way that highlights future considerations produces a majority saying they are willing to retake the course. The practical implications of this are considerable insofar as students, professors and administrators are likely to draw quite different conclusions about the course depending on which question wording is used. Of course, the traditional question has been the standard, and we have been evaluated accordingly.

Table 1. Willingness to Retake Required Methods Course by Question Type

Retake Course?            Yes       No       N
Traditional Question      42.8%     57.2     (152)
Revised Question          59.4%     40.6     (155)

Source: University of Toronto Student Evaluations for POL242 Research Methods 2009-2011.
Chi-square = 8.46 with 1 df; p = .004.
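For readers who wish to verify the test, the chi-square statistic can be recomputed from the table's marginals. The sketch below is illustrative rather than our original analysis; the cell counts are reconstructed from the percentages and Ns reported in Table 1.

```python
# Recomputing the Table 1 chi-square test from reconstructed cell counts.
# 42.8% of 152 ~ 65 "yes" (traditional); 59.4% of 155 ~ 92 "yes" (revised).
from scipy.stats import chi2_contingency

table = [[65, 87],   # traditional question: yes, no
         [92, 63]]   # revised question: yes, no

chi2, p, dof, expected = chi2_contingency(table, correction=False)
print(f"chi2 = {chi2:.2f}, df = {dof}, p = {p:.4f}")  # approx. chi2 = 8.5, p = .004
```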

Since both the traditional and the revised questions were asked of all respondents, we are able to cross-tabulate the answers given to the two questions. The results are presented in Table 2. The resulting pattern is striking. As can be seen in the first row, every student who indicated that he or she would retake the course using the traditional question also indicated a willingness to retake using the revised question. In other words, there were no false positives created by the traditional frame. The second row of Table 2, however, suggests a considerable number of false negatives. Over one quarter of those who said they would not retake the course using the traditional question answered that they would retake the very same course using the new question. The chi-square test indicates that the association between the two sets of responses is highly significant (p = .000). The McNemar test, specifically designed to determine whether the same respondents differ in their answers on two measures, produces an exact significance of .000.

Table 2. Response to Revised Retake Question by Traditional Question Response (Yes=1; No=0)

                          Revised Retake Question Response
Traditional Response      Yes       No       N
Yes                       100%      0        (63)
No                        28.4%     71.6     (81)

Source: University of Toronto Student Evaluations for POL242 Research Methods 2009-2011.
Chi-square = 75.6 with 1 df; p = .000; McNemar exact significance = .000.
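The paired comparison can likewise be verified from the table. In the sketch below, which is an illustration rather than our original script, the counts are reconstructed from Table 2: all 63 traditional "yes" respondents also said "yes" to the revised question, while 23 of the 81 traditional "no" respondents (28.4%) switched to "yes".

```python
# Exact McNemar test on the paired responses reconstructed from Table 2.
from statsmodels.stats.contingency_tables import mcnemar

paired = [[63, 0],    # rows: traditional (yes, no)
          [23, 58]]   # columns: revised (yes, no)

result = mcnemar(paired, exact=True)  # exact binomial test on discordant cells
print(result)  # p-value near zero: 23 switches in one direction, 0 in the other
```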


Clearly there is a difference between the responses obtained using the two versions of the retake question. But why is there such a difference? In order to investigate we look at whether different factors predict responses to the two versions of the question. For this we require a multivariate approach. And since the response options are in each instance dichotomies (1=yes; 0=no), we use logistic regression.
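For concreteness, the estimation strategy can be sketched as follows. This is an illustrative Python/statsmodels version rather than our actual analysis script; the file name "evaluations.csv" and the column names are hypothetical stand-ins for the coded evaluation items listed in the Appendix.

```python
# Sketch of the paired logistic regressions reported in Table 3.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("evaluations.csv")  # hypothetical file of coded responses

predictors = ["required_status", "enthusiasm", "difficulty",
              "expected_grade", "learning_experience"]
X = sm.add_constant(df[predictors])  # add the intercept term

# One equation per version of the retake question (1 = yes, 0 = no).
for outcome in ["retake_traditional", "retake_revised"]:
    model = sm.Logit(df[outcome], X, missing="drop").fit(disp=False)
    print(outcome)
    print(model.summary())  # coefficients, standard errors, log-likelihoods
```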

Table 3. Predictors of Traditional & Revised Retake Questions - Logistic Regression

                                         Traditional Question    Revised Question
Pre-Course Factors
  Required Status                         -.90 (.45)*             -.58 (.47)
  Enthusiasm at Time of Registration       .70 (.35)*              .49 (.41)
In-Course Factors
  Level of Difficulty                     -.41 (.25)              -.47 (.26)
  Expected Grade                           .39 (.29)               .76 (.31)**
  Learning Experience                     1.05 (.24)***           1.23 (.26)***
Constant                                 -2.73 (2.76)            -4.59 (2.96)
Summary Measures
  -2 LL (initial; model)                 (152.7; 107.5)          (150.9; 96.6)
  R²L; Cox & Snell; Nagelkerke           (.30; .33; .45)         (.36; .38; .52)
  % of Cases Correctly Classified        75.2                    78.6
  N                                      (113)                   (112)

Entries are unstandardized logistic regression coefficients with standard errors in parentheses.
Source: University of Toronto Student Evaluations for POL242 Research Methods 2009-2011.
*p<.05; **p<.01; ***p<.001

We pursue this notion of differential predictors using paired analyses.[3] The results are presented in Table 3. Logistic regression allows us to estimate the effectiveness of a number of independent variables in predicting the (logged) probability of a dichotomous dependent variable. In this case the dependent variable is the probability of answering “yes” rather than “no” to the retake question. Separate equations using the same predictors are presented for both the traditional and revised versions of the retake question.

[3] The multivariate models exclude approximately 27% of the respondents due to missing data on at least one of the independent variables. However, Green (1991) suggests a minimum sample size of 50. To increase the sample size, responses for 17 cases on the learning experience question were imputed based on a close examination of the answer sheets. Their inclusion does not substantially affect the analysis.

Looking first at the summary measures presented at the bottom of the table, we see that the overall fit of both models is very good. The initial -2 log likelihood (-2LL), analogous to the total sum of squares in ordinary regression, represents the error variation of the model with only the intercept included. The model -2LL, like the error sum of squares in linear analyses, is the variation remaining in the model after the predictors are included. The difference between these two, divided by the initial -2LL, can be interpreted as the proportional reduction in the absolute value of the -2LL. Menard (2010: 54) considers this to be the best measure of model fit. Like R², it provides a quantitative indication of the extent to which the combination of independent variables in the model allows us to predict responses on the dependent variable. It is therefore sometimes referred to as a pseudo-R². Adjusted versions of this coefficient by Cox and Snell and Nagelkerke are also presented. On all of these measures the independent variables in the equation markedly assist in explaining variation in the dependent variables. Similarly, the percent of cases correctly classified shows the extent to which respondents are correctly classified into the discrete yes and no categories of the dependent variable. For both equations the percentages show a marked improvement over the roughly 58% possible using the modal category for each dependent variable, as seen in the frequency distributions of Table 1. So both qualitative prediction and quantitative prediction are robust. The R² and case classification measures both suggest the fit of the model for the revised question is marginally better than that for the traditional question, but the difference is modest at best.
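These fit measures can be recovered directly from the reported -2LL values. As a check, the following sketch applies the standard formulas to the traditional-question model, with the inputs taken from Table 3; it is a verification aid, not our analysis script.

```python
# Recovering R²L, Cox & Snell, and Nagelkerke from the Table 3 -2LL values
# for the traditional-question model (initial -2LL = 152.7, model -2LL = 107.5,
# n = 113).
import math

d0, dm, n = 152.7, 107.5, 113           # initial -2LL, model -2LL, sample size

r_l = (d0 - dm) / d0                    # Menard's R²L: proportional reduction in -2LL
cox_snell = 1 - math.exp(-(d0 - dm) / n)
nagelkerke = cox_snell / (1 - math.exp(-d0 / n))  # Cox & Snell rescaled to max of 1

print(round(r_l, 2), round(cox_snell, 2), round(nagelkerke, 2))  # .30 .33 .45
```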

The upper portion of Table 3 presents the unstandardized logistic regression coefficients (and their standard errors) used in the model to predict the dependent variable.[4] These coefficients closely parallel the unstandardized coefficients presented in an ordinary least squares regression; however, logistic regression coefficients are expressed as estimates of a variable’s influence on the log odds of the dependent variable (or logit). Reading and writing about a variable’s influence on the log odds of retaking a statistics course is undoubtedly awkward, but it accurately expresses the logged units of the dependent variable. In this particular instance, comparison of the variables across models is somewhat facilitated by their being derived from the same data set and hence having roughly comparable standard errors. Nevertheless, because each of the independent variables may be measured in different units, comparing the absolute size of their respective coefficients within a particular equation can be misleading; they must be viewed within the context of their respective standard errors. Accordingly, it is particularly useful to attend to the statistical significance of the independent variables.
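One way to ease interpretation, though we do not report it in Table 3, is to exponentiate a log-odds coefficient into an odds ratio. The one-line illustration below uses the required-status estimate from the traditional equation.

```python
# Converting a log-odds coefficient into an odds ratio.
import math

b_required_traditional = -0.90          # Table 3, traditional question
print(math.exp(b_required_traditional)) # approx. 0.41: odds of "yes" cut by ~59%
```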

Looking across the first line of the table, the negative coefficients for required status indicate that, controlling for the other variables in the model, the retake rate is lower among those for whom the course is required. The sign on the coefficient is the same in both equations, but the magnitudes differ markedly. Referring to the left-hand column of Table 3, in the equation predicting answers on the traditional retake question, we see that the required status of the course results in a .90 unit decrease in the log odds of students’ willingness to retake the course.[5] Moving to the second column, we see that required status produces a decrease of only .58 units using the revised question. Viewed within the context of their respective standard errors, the effect of the course being required is substantially greater using the traditional measure than the revised measure. And insofar as the standard errors are roughly equivalent, one might observe that the effect of the course’s required status is reduced by about a third using the revised wording. Moreover, the effect is statistically significant in the left-hand column but not in the right. In other words, the required status of the course is a significant predictor of the retake rate using the traditional question but not the revised question.

[4] Standardized logistic regression coefficients are rarely reported, as they are not commonly available in contemporary software packages (Menard, 2010: 89).

[5] As this predictor has multiple categories, the coefficient indicates the effect of a one-unit change. Recoding the independent variable into a required versus non-required dichotomy produces very similar results.

Similarly, the question asking students about their initial enthusiasm for taking the course significantly predicts the log odds (or logit) for the traditional retake question but not for the revised one. Looking at the second row of Table 3, we see that an increase in initial enthusiasm is associated with an increase of .70 units in the logit for the traditional question wording but an increase of only .49 units with the revised wording. Once again, the effect is cut by about one third using the new wording. And viewed within the context of their respective standard errors, the effect of students’ initial enthusiasm is statistically significant using the traditional question, whereas the effect is not beyond what one would expect by chance using the revised question.

Taken together, the first two rows of coefficients in Table 3 suggest that pre-course factors, such as the required status of the course and students’ enthusiasm going into the class, are engaged by the traditional wording used to assess the course retake rate. In contrast, such pre-course considerations are not significantly salient when using the revised question.[6]

[6] A preliminary effort at path analysis suggests that the effect of required status on the traditional retake question may be mediated by initial levels of enthusiasm.

Turning to the second section of Table 3, three factors are included. These are students’ ratings of the level of course difficulty, the final grade they expect, and their rating of the value of the course as a learning experience.[7] The course difficulty ratings are not significantly related to the log odds of retaking the course using either the traditional or revised question. However, the sign in both instances is as expected: the more difficult the course is thought to be, the less likely the student is to indicate a willingness to retake it. The conventional (and conveniently calculated) Wald statistic suggests that each is marginally significant (.10 and .08), but the more rigorous approach of calculating the likelihood ratio with and without the variable, as recommended by Menard (2010: 99), places their significance at .42 and .41 respectively. In either case, level of difficulty is not differentially engaged by one or the other form of the question.[8]

[7] In selecting items for inclusion in the models we sought to balance concerns over bias and efficiency. In particular, with our relatively small sample size, it is important not to include items which contribute little to the explanatory power of the model, as gauged by the correct classification of cases or measures of pseudo-R². Inclusion of such variables inflates the standard errors of the predictors, reducing the efficiency of the model. A number of such inefficient factors could have been included here, including questions about the workload, the quality of the readings, and ratings of the tutorials and laboratory sessions that are part of the course. All are insignificant as predictors; none contribute to the explanatory power of the model. Level of difficulty stands in for them in this model. Adding them or eliminating it does not substantially affect the model or the conclusions drawn here.

[8] This makes sense in that neither the traditional nor the revised question evokes considerations of matters such as degree of difficulty, workload or time commitment entailed by the course. It would be possible, of course, to design a question that does cue such considerations by asking respondents to consider how much they had to do for the class, how many Saturday afternoons and evenings they had to devote to the course, etc.
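The likelihood-ratio comparison recommended by Menard can be sketched as follows. As with the earlier sketch, the data file and variable names are hypothetical; this illustrates the general technique of refitting the model with and without a predictor rather than reproducing our exact script.

```python
# Likelihood-ratio test for one predictor: fit the model with and without it
# and compare twice the difference in log-likelihoods to chi-square with 1 df.
import pandas as pd
import statsmodels.api as sm
from scipy.stats import chi2

df = pd.read_csv("evaluations.csv")  # hypothetical coded responses
full_vars = ["required_status", "enthusiasm", "difficulty",
             "expected_grade", "learning_experience"]
reduced_vars = [v for v in full_vars if v != "difficulty"]

y = df["retake_traditional"]
full = sm.Logit(y, sm.add_constant(df[full_vars]), missing="drop").fit(disp=False)
reduced = sm.Logit(y, sm.add_constant(df[reduced_vars]), missing="drop").fit(disp=False)

lr_stat = 2 * (full.llf - reduced.llf)  # equals the difference in -2LL
p_value = chi2.sf(lr_stat, df=1)
print(f"LR = {lr_stat:.2f}, p = {p_value:.2f}")
```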

The results for expected grades tell a different story. The students’ anticipated final grade is not significantly related to the (log odds of the) answers given on the traditional retake question. Although the sign is in the expected direction, the coefficient is only slightly larger than its standard error. Moving to the results for the revised question on the right-hand side of the table, the coefficient is not only positive but statistically significant. This indicates that the revised question engages students’ assessments of how well they are likely to do in the class in determining whether they would retake it. This makes sense, of course, because the revised question asks about their future academic and work life, for which one's grades could make a difference. And the size of the coefficient is nearly twice as large as it is for the traditional question. So expected grades positively distinguish the new question from the old. This difference suggests that in responding to the revised question, students draw to a greater extent upon their experience in the course than they do in responding to the old question wording.

The final item in Table 3 shows the results for the item used by students to assess the value of the course as a learning experience. On both sides of the table the coefficient is positive and significant. The more that students see the course as a positive learning experience, the more likely they are to say that they would retake it, irrespective of which question we use. Even so, the coefficients are of different sizes despite having very similar standard errors; the one on the right is approximately 20% larger. So while both forms of the question engage student considerations of the course as a learning experience, the revised question asking about their future does so to a greater extent.

Considered together, the latter three sets of coefficients presented in Table 3 indicate that in-course factors are engaged to a greater extent by the revised course retake question than by the traditional one. Considerations such as how well one is doing in the course and the quality of the course as a learning experience are elicited more by the revised question asking about one's future.

Looking at the results as a whole, we see that the traditional item frames the retake question in a way that primes pre-course considerations and discounts in-course considerations. The revised question does just the opposite: it primes in-course considerations and not pre-course ones. In this respect we regard the revised question as superior to the traditional one.

It is not our intention to suggest that the revised question wording is ideal. It clearly frames the retake question in terms of the student's future academic and work life. We think this is an improvement upon the traditional wording insofar as it focuses students’ considerations to a greater extent on in-course factors when they are asked whether they would retake the course. In contrast, the traditional question wording focuses students’ attention upon pre-course considerations such as the required status of the course and their degree of enthusiasm upon registering for it.

These results signal a significant degree of bias in the traditional question. It substantially overestimates the number of students who say that they would not retake the course, because the question reminds them of the required status of the course and primes their original level of enthusiasm regarding the class. In contrast, the revised question does not significantly engage these considerations. Instead, students reply to the revised question in terms of their anticipated course grade and the value of the class as a learning experience. These are factors that are likely to be important in future academic and work endeavors, and they are evidently cued by the wording of the revised question.


5. Discussion

In the opening paragraph of his book Don't Think of an Elephant, George Lakoff (2004: 3) tells us that the first thing he does in his Cognitive Science 101 course at Berkeley is ask his students not to think of an elephant. The point of the exercise is that his students think of an elephant despite his instructing them not to.

One of the tenets of cognitive linguistics is that our brains do not automatically process negatives (Kaup et al. 2006); questions including them must initially be subconsciously processed in the positive in order to fix their meaning. Such is apparently the case with the traditional retake question. In order to parse the meaning of this question asking respondents to disregard the required status of the course, students have no choice but to make the association and think of the very concept being linguistically negated. In using the traditional question to assess the retake rate, we are essentially asking our students not to think of an elephant. The elephant in the room for most students is that the course is required and one they did not anticipate enjoying. And this is what they are inadvertently asked to think about when faced with the traditional retake question. Essentially, it primes thinking about the required status of the course and thereby frames the answers given by the students in just those terms (Kahneman 2011).

The wording of the traditional question was, no doubt, introduced with the idea of eliminating, or at least reducing, bias against required courses. Unfortunately, good intentions sometimes go awry. Ironically, the traditional question introduces bias into the course evaluation by drawing attention to the required status of the course and cuing the initial level of enthusiasm with which students entered the class. Asking students to disregard their need for the course to meet program requirements produces precisely the opposite of what was intended.

Again, we do not presume that our reframing of the retake question in terms of future academic and workplace considerations represents an unbiased question. Arguably it is a better question than the traditional one, but it nevertheless frames the question in a particular way. Perhaps a simpler alternative should be considered, such as, "Would you recommend that others take this course?"


Works Cited

Alvarez, R.M. 1992. “Methods Madness: Graduate Training and the Political Methodology Conferences.” The Political Methodologist, 5(1): 2–3.

Barke, D. R., Tollefson, N. & Tracy, D. B. 1983. “Relationship between course entry attitudes and end-of-course ratings.” Journal of Educational Psychology, 75(1): 75-85.

Barth, M.M. 2008. “Deciphering student evaluations of teaching: A factor analysis approach.” Journal of Education for Business, 84(1): 40-46.

Braskamp, L., and J. Ory. 1994. Assessing Faculty Work: Enhancing Individual and Institutional Performance. San Francisco: Jossey-Bass.

Chong, D., & Druckman, J. N. 2007. “Framing theory.” Annual Review of Political Science, 10: 103–126.

Clayson, D.E. 1999. “Students’ evaluation of teaching effectiveness: Some implications of stability.” Journal of Marketing Education, 21(1): 68-75.

Coleman, Charles and Cynthia Conrad. 2007. “Understanding the Negative Graduate Student Perceptions of Required Statistics and Research Methods Courses: Implications for Programs and Faculty.” Journal of College Teaching & Learning, 4(3): 11-20.

Darby, J. 2006. “The effects of elective or required status of courses on student evaluations.” Journal of Vocational Education & Training, 58(1): 19-29.

Druckman, J. N. 2004. “Political preference formation: Competition, deliberation, and the (ir)relevance of framing effects.” American Political Science Review, 98: 671–686.

Francis, Clare. 2011. “Student Course Evaluations: Association with Pre-Course Attitudes and Comparison of Business Courses in Social Science and Quantitative Topics.” North American Journal of Psychology, 13(1): 141-154.

Frey, P., D. Leonard, and W. Beatty. 1975. “Student ratings of instruction: Validation research.” American Educational Research Journal, 12(4): 435-447.

Gal, Iddo and Lynda Ginsburg. 1994. “The Role of Beliefs and Attitudes in Learning Statistics: Towards an Assessment Framework.” Journal of Statistics Education, 2(2). http://www.amstat.org/PUBLICATIONS/JSE/v2n2/gal.html.

Gamson, W. A., & Modigliani, A. 1987. “The changing culture of affirmative action.” Research in Political Sociology, 3: 137–177.

Green, S. B. 1991. “How many subjects does it take to do a regression analysis?” Multivariate Behavioral Research, 26: 499-510.

Heise, D. R. 1979. Understanding Events: Affect and the Construction of Social Action. New York: Cambridge University Press.

Heise, D. R. 2002. “Understanding social interaction with affect control theory.” In J. Berger and M. Zelditch (Eds.), New Directions in Sociological Theory. Boulder, CO: Rowman and Littlefield.

Higgins, E. T. 1996. “Knowledge activation: Accessibility, applicability, and salience.” In E. T. Higgins & A. Kruglanski (Eds.), Social Psychology: Handbook of Basic Principles (pp. 133-168). New York: Guilford Press.

Johnson, Jeffery. 2011. “What Do We Measure? Methodological Versus Institutional Validity in Student Surveys.” Presented at the Association for Institutional Research Annual Forum, Toronto, ON. http://ssrn.com/abstract=1890722.

Kahneman, Daniel. 2011. Thinking, Fast and Slow. Toronto: Doubleday Canada.

Kaup, B., Lüdtke, J., & Zwaan, R.A. 2006. “Processing Negative Sentences with Contradictory Predicates: Is a Door That Is Not Open Mentally Closed?” Journal of Pragmatics, 38: 1033-1050.

Kinder, D. R. 2003. “Communication and politics in the age of information.” In D. O. Sears, L. Huddy, & R. Jervis (Eds.), Oxford Handbook of Political Psychology (pp. 357–393). New York: Oxford University Press.

Lakoff, George. 2004. Don't Think of an Elephant: Know Your Values and Frame the Debate. White River Junction: Chelsea Green.

Landis, Larry M. 1977. “Required/Elective Student Differences in Course Evaluations.” Teaching Political Science, 4(4): 405.

Lewis-Beck, M. 2001. “Teaching Undergraduates Methods: Overcoming ‘Stat’ Anxiety.” The Political Methodologist, 10(1): 7–9.

Lovell, G. D. & Haner, C. F. 1955. “Forced choice applied to college faculty rating.” Educational and Psychological Measurement, 15: 291–304.

Marsh, H. 1987. “Students’ evaluation of university teaching: Research findings, methodological issues and directions for future research.” International Journal of Educational Research, 11(3): 253-388.

Marsh, H. and M. Dunkin. 1992. “Students’ evaluations of university teaching: A multidimensional perspective.” In J.C. Smart (Ed.), Higher Education: Handbook of Theory and Research (Vol. 8, pp. 143–233). New York: Agathon Press.

Menard, Scott. 2010. Logistic Regression: From Introductory to Advanced Concepts and Applications. Thousand Oaks: Sage.

Nelson, T. E., & Kinder, D. R. 1996. “Issue frames and group-centrism in American public opinion.” Journal of Politics, 58: 1055–1078.

Petchers, Marcia and Julian Chow. 1988. “Sources of Variation in Students’ Evaluations of Instruction in a Graduate Social Work Program.” Journal of Social Work Education, 24(1): 35-42.

Pohlmann, J. T. 1975. “A Multivariate Analysis of Selected Class Characteristics and Student Ratings of Instructions.” Multivariate Behavioural Research, 10(1): 81–91.

Pounder, J. 2007. “Is Student Evaluation of Teaching Worthwhile? An Analytical Framework for Answering the Question.” Quality Assurance in Education, 15(2): 178–191.

Scherr, F., & Scherr, S. 1990. “Bias in student evaluations of teacher effectiveness.” Journal of Education for Business, 65(8): 356–358.

Simpson, P.M., and J.A. Siguaw. 2000. “Student evaluations of teaching: An exploratory study of the faculty response.” Journal of Marketing Education, 22(3): 199–213.

Tatro, C. 1995. “Gender effects on student evaluations of faculty.” Journal of Research and Development in Education, 28(3): 169-173.

Wachtel, H. 1998. “Student evaluation of college teaching effectiveness: A brief review.” Assessment and Evaluation in Higher Education, 23: 191–212.

Zaller, J. R. 1992. The Nature and Origins of Mass Opinion. New York: Cambridge University Press.


Appendix: Question Wording and Variable Coding

Traditional Re-Take Question:
“Considering your experience with this course, and disregarding your need for it to meet program or degree requirements, would you still have taken this course?”
Coding: 1 (Yes); 0 (No)

Revised Re-Take Question:
“Considering the value of this course in preparing for future study and future work, would you still have taken this course?”
Coding: 1 (Yes); 0 (No)

Required Status:
“Status of the course for you:”
Coding: 4 (Program Requirement); 3 (Selected from Required List in a Program); 2 (Breadth Requirement); 1 (Optional)

Enthusiasm:
“Your level of enthusiasm to take this course at the time of initial registration:”
Coding: 1 (Low); 2 (Medium); 3 (High)

Level of Difficulty:
“Compared to other courses at the same level, the level of difficulty of the material is…”
Coding: 1 (Very Low); 2 (Low); 3 (Below Average); 4 (Average); 5 (Above Average); 6 (High); 7 (Very High)

Expected Grade:
“Your expected grade in this course:”
Coding: 1 (<50); 2 (50-59); 3 (60-69); 4 (70-79); 5 (≥80)

Learning Experience:
“The value of the overall learning experience is…”
Coding: 1 (Very Low); 2 (Low); 3 (Below Average); 4 (Average); 5 (Above Average); 6 (High); 7 (Very High)
