AACP 2009 Poster

STUDENT COURSE EVALUATIONS WITH MULTIPLE INSTRUCTORS: DOES RESPONDENT FATIGUE AFFECT INSTRUCTOR EVALUATIONS?
Sara E. Renzi, Daniel A. Brazeau, Gayle A. Brazeau, Jennifer M. Hess, and Jason N. Adsit
School of Pharmacy and Pharmaceutical Sciences, University at Buffalo, Amherst NY 14260
Abstract
Objective: As multi-instructor courses become more commonplace, instructor surveys for individual courses become lengthier. Studies suggest respondent fatigue can become a problem in these evaluations, with assessments changing as a function of survey length. This study investigated the relationship between individual instructor scores and instructor order using on-line evaluations over a period of four years for four pharmaceutics courses with 23 different instructors.
Methods: Instructor evaluations from four courses offered from 2005 to 2008 with three or more faculty members were examined. Each on-line survey had five introductory course questions followed by ten identical instructor-specific questions per instructor. Thus, any given on-line survey contained between 30 and 70 instructor-specific questions. The percent response and mean score for each question were tested as a function of instructor order to assess whether students became fatigued while completing long evaluations. Fatigue was measured by decreased percent response, changes in mean score, and changes in the variance of the mean scores.
Results: As expected, there were significant differences among faculty members in mean scores, indicating differences in teaching ability. However, evaluation order does not appear to play a role, as there were no statistically significant differences in the percent response, mean, or variance of the scores.
Implications: The data suggest that for instructor evaluations of moderate length, individual faculty scores are not affected by their order in on-line evaluations. We observed no respondent fatigue.
Hypothesis
• Instructor evaluations may be adversely affected by their order on evaluation forms, perhaps because of survey fatigue with long on-line surveys. Specifically, overall faculty evaluation scores may decrease as a function of faculty order during on-line surveys.
• Survey fatigue, as measured by the percent of students responding and the response variance, will decrease with the length of the on-line survey.
Methods
• Four courses (see below) were used between 2005 and 2008. Each course had three or more faculty lecturing in the course. Each evaluation had five course-based questions and ten instructor questions for each instructor.
• Evaluations had at least 30 instructor questions, with some evaluations having as many as 70 instructor-based questions, for a total of 40-80 questions per course.
• Statistical differences among instructor scores and percent response were calculated using ANOVA, with p < 0.05 considered statistically significant. Regression analysis was used to assess the relationship between instructor scores and order on the evaluation forms.
Results – Regression Analysis
Background
• On-line faculty and course evaluations have become standard for every course and instructor at most institutions, driven by the need to provide feedback to faculty members and to support assessment in our programs.
• The on-line evaluation process can be lengthy for respondents: a course may have multiple instructors, each evaluated with the same set of questions in addition to a course survey; there may be many separate course and instructor evaluations; or there may be many surveys for a given course and its instructors throughout a semester.
• Survey fatigue, defined as a decrease in respondent participation due to multiple surveys or very long surveys, has been suggested to occur in students and other groups (1-3) and can affect the quality of survey findings.
Results – Representative Plots for PHC 312 and PHM/PHC 311
Results - Instructor Scores and Response Rates
Analysis of Variance: Difference among Instructors – Mean Instructor Scores

                df         F       p
  Instructors   219, 520   8.45    < .050

Analysis of Variance: Difference in Percent Response – Mean Instructor Response Rate

                df         F       p
  Instructors   219, 520   0.000   1.00
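The analysis described in the Methods — a one-way ANOVA across instructors followed by a regression of score on instructor order — can be sketched as below. This is an illustrative reconstruction, not the authors' code; the data are synthetic, and the instructor count and rating scale are assumptions for the example.

```python
# Illustrative sketch of the analysis shape (synthetic data, not the study's):
# a one-way ANOVA across instructors, then a regression of each instructor's
# mean score on their order (position) in the on-line evaluation form.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Synthetic ratings: 5 instructors x 40 respondents on a 1-5 scale.
# Instructor "quality" differs; position in the survey has no built-in effect.
true_means = [4.2, 3.6, 4.0, 3.1, 4.4]
ratings = [np.clip(rng.normal(m, 0.6, size=40), 1, 5) for m in true_means]

# One-way ANOVA: do mean scores differ among instructors?
f_stat, p_anova = stats.f_oneway(*ratings)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_anova:.4f}")

# Regression: is an instructor's mean score related to their order (1..5)
# on the evaluation form?
order = np.arange(1, 6)
means = np.array([r.mean() for r in ratings])
fit = stats.linregress(order, means)
print(f"Regression: slope = {fit.slope:.3f}, p = {fit.pvalue:.3f}")
```

Under the study's finding of no fatigue effect, the ANOVA would detect instructor differences while the order regression would show no significant slope.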
Discussion and Conclusions
• In this initial review, findings show that mean scores differ among instructors, but the order of instructors on these on-line surveys does not affect the scores.
• There does not seem to be any indication of survey fatigue, given that the number of student responses does not decrease with survey length and the variance does not change with longer on-line surveys.
• The results of the current study are confounded in that there was no way to randomize the instructor order within a given on-line survey. A limitation of many current on-line course and instructor evaluation programs is that they do not permit randomization of instructor order during the survey process.
• To date, there are no data suggesting survey fatigue with multiple course and instructor evaluations at the end of, or throughout, each semester.
 This warrants further investigation given the importance of course and
instructor evaluations for faculty portfolios and in our assessment programs.
References
1. Sedlmeier, P. (2006). The role of scales in student ratings. Learning and Instruction, 16, 401-415. Retrieved February 6, 2009, from EBSCOhost database.
2. Porter, S.R., Whitcomb, M.E., and Weitzer, W.H. (2004). Multiple surveys of students and survey fatigue. New Directions for Institutional Research, 121, 63-73.
3. Sinickas, A. Finding a cure for survey fatigue. Available at http://www.sinicom.com/Sub%20Pages/pubs/articles/article93.pdf. Accessed July 15, 2009.