Faculty Senate Minutes
February 5, 2001

Present: K. Bartanen, W. Breitenbach, T. Cooney, J. Elliott, W. Haltom (chair), M. Jackson, S. Kukreja, J. McGruder, G. Tomlin, K. Ward, R. Worland
Visitors: Randy Nelson, Keith Maxwell

Haltom called the meeting to order at 4:09 p.m. Corrections were made to the draft version of the minutes for the meeting of January 22, after which the minutes were approved as corrected.

Announcements

Haltom announced that Suzanne Barnett, Joel Elliott, Sunil Kukreja, Mike Sugimoto, and Rand Worland have been appointed by the Senate to replace the five Senators who are on leave in Spring 2001.

Approval of nominees for honorary degrees

Cooney distributed a list of three individuals who have been nominated to receive honorary degrees at Commencement in May 2001. He summarized the process and criteria that led to their selection. He reminded the Senate that the names are to remain confidential until the nominees have agreed to accept the honor.

ACTION: Ward M/S/P to approve the slate of nominees for honorary degrees.

Discussion of the report of the Ad Hoc Committee on Student Evaluation of Teaching

After distributing copies of the report of the Ad Hoc Committee and describing the report’s journey through the Faculty Senate and the Professional Standards Committee, Haltom introduced Randy Nelson, the Director of Institutional Research, who had been asked by the Senate to analyze the report. Nelson stated that he had had experience at other institutions working with course evaluation data. He said that the recommendations made by the Ad Hoc Committee seemed to be solid and reasonable, consistent with recommended practice, though he noted that these recommendations did not necessarily flow out of the data generated by the Ad Hoc Committee’s survey of the faculty.
Reading the report and studying the survey and data led Nelson to conclude that the most important issue was not the quality of the survey instrument or the validity of the data derived from it but rather the faculty’s perception of the fairness or unfairness of the evaluation process. In his judgment, the survey itself was fine as an instrument. What struck him in analyzing the data was the huge standard deviation, revealing a wide variation of response. In particular, the humanities faculty were consistently more negative in their response to the evaluation process.

Turning to the recommendations made by the Ad Hoc Committee, Nelson remarked that adopting Recommendation 8 (hiring an external consultant “to conduct a comprehensive validity study of the student evaluation of teaching process”) would be one way but not the only way to improve the faculty’s perception of the fairness of evaluations. Recommendation 6 (the instructor evaluation form should ask students to indicate their “grade expectations, motivation and prior interest, and workload”) is, in Nelson’s judgment, a reasonable thing to do; he suggested that the university develop a database to study the effect of these factors on students’ assessment of their instructors. Cooney noted that the development of such a database would require the faculty to agree collectively to permit the Institutional Research staff to study the forms, which are currently confidential.

Recommendation 2 calls for different instructor evaluation forms for feedback and appraisal. Nelson said that at Texas A&M he had developed a system that allowed instructors to select additional questions from a menu of optional questions; students’ answers to these optional questions were not forwarded to the university’s evaluation committee. Another approach would be to have individual instructors use their own informal surveys during the semester to get feedback from students.
According to Nelson, research indicates that students’ responses on evaluation forms are not affected by their knowledge of whether the forms will be used for feedback or appraisal. Nevertheless, he noted, it is a problem if the faculty feel that they don’t dare seek feedback by asking “hard questions” in the instructor evaluation forms; once again, the real problem seems to be the perception of unfairness in the evaluation process, not the deficiencies of the evaluating instrument.

Cooney commented that nothing currently precludes faculty from supplementing the university’s evaluation form with an individual form designed for feedback. In non-mandatory evaluations, an instructor is free to substitute an entirely different instructor evaluation form. With the approval of the Professional Standards Committee, a department may create an official substitute for the instructor evaluation form. Ward suggested that the faculty need to be reminded that these options are available. Nelson said that some institutions, like the University of Washington, have created different forms for different types of classes because they recognize that one form will not fit all courses.

Jackson noted that Nelson’s assessment of the Ad Hoc Committee’s report was based on his analysis of the recommendations, the survey administered to faculty, and the histograms and summaries of responses to the survey questions. He asked Nelson if there would be any benefit to his looking at the full data set; Nelson thought not. Jackson asked if any of the Ad Hoc Committee’s recommendations were supported by the data. Nelson replied that most of the recommendations are supported by the data and all of the recommendations are grounded in good practice, but that some of the recommendations (especially the call for an external consultant and the call for separate forms for feedback and appraisal) do not necessarily follow from the results of the survey.
Cooney asked about Recommendation 7 (“Serious consideration should be given to establishing norms for each criterion of effective teaching we measure. The norms should be determined for each of the course types, i.e., small, large, team-taught, etc.”). He wanted to know if there is any research about establishing norms for various class types, and he wondered whether numerical norms are at odds with an openness to different teaching approaches. Nelson replied that at Puget Sound class sizes are not that variable; he believed it might be more useful to look at other kinds of differences, such as those between science classes and non-science classes. Cooney noted that we would need a large sample in order to set meaningful numerical norms. Nelson agreed, and stressed the importance of using multiple sources and types of information when evaluating teaching performance. Once again, he observed that the real problem revealed by the Ad Hoc Committee’s report was the faculty’s perception of the unfairness of the evaluation process. The university needs to do a better job of explaining how the information is used. Right now, the people most familiar with the evaluation process (the senior faculty) are most comfortable with it.

Ward asked Nelson about the negative responses to survey questions by humanities faculty. Nelson speculated that there was an unhappy group that gave consistently negative answers, either because something had happened that had soured them on the evaluation system or because some humanities faculty dislike quantitative methods of evaluation.

Elliott (a scientist) raised a question about the value of numerical data; the numerical rankings that appear on instructor evaluation forms do not always match the verbal comments on the same forms. Cooney said that members of the Faculty Advancement Committee often conclude that the numbers on instructor evaluation forms do not tell the most important things about a file.
Haltom remarked that at Puget Sound the numbers have little comparative value because we have no collection of data for the entire faculty. Nelson said that it is possible to purchase off-the-shelf norms from other institutions. Cooney recalled the howls of protest that erupted twenty years ago when Puget Sound used such a database developed by the University of Kansas; the experiment was quickly abandoned.

Without advocating the idea of establishing norms, Breitenbach said that numerical data could be collected in a confidential fashion if we conducted instructor evaluations electronically. An electronic system of evaluations would also solve other problems: the use of class time for evaluations, the weariness and terseness of students who fill out many evaluation forms, absenteeism on the day evaluations are administered, illegible forms, and secure storage of completed forms. Nelson and Kukreja pointed out difficulties of getting voluntary participation in electronic evaluations; Cooney pointed out legal obstacles to compelling participation.

Haltom thanked Nelson for his work and his presentation.

Senators concluded the meeting with wide-ranging reflections about the evaluation system. Cooney and McGruder advocated a question on the instructor evaluation form asking students how hard they worked in a course. Breitenbach wondered about the relationship between grades and students’ evaluations of instructors; he also wondered if grades went up in semesters when faculty faced mandatory evaluations. Cooney said the Senate should consider three things: what changes should be made on the instructor evaluation form, what changes should be made that would require amending the Faculty Code, and what information Randy Nelson could provide to the faculty about current research on the subject of evaluating teaching. McGruder suggested a telephone survey of students to find out if they knew that instructor evaluation forms were used for both feedback and appraisal.
Jackson wondered why the faculty had dropped the question about expected grade when creating the current form. Tomlin did not want to resurrect the old question about expected grade; he preferred a question about how hard students had worked in a course. Maxwell (chair of the Ad Hoc Committee) said that research has shown that relative grade, not absolute grade, is a significant factor in students’ responses; he suggested a question asking if students thought a course would raise or lower their g.p.a.

Elliott asked how the evaluation process handled “outliers”—extreme statements on instructor evaluation forms that did not match the general sense conveyed by the class as a whole. Cooney, McGruder, and Haltom, speaking as veterans of the Faculty Advancement Committee, reassured Elliott that the FAC read such comments with skepticism. Bartanen commented that Elliott’s question revealed a need for conversations with junior faculty about the evaluation process, so that they could get a sense of how instructor evaluation forms are used and what the university-wide norms are for teaching.

Haltom observed that senior faculty as well as junior faculty would benefit from hearing what FAC veterans have to say about the evaluation process. Cooney said that it would be desirable to conduct such conversations every couple of years. Maxwell agreed, noting that the main finding of the Ad Hoc Committee was that faculty distrusted the evaluation process and wanted reassurance that evaluation data were fairly interpreted and used. McGruder declared that it should not be the case that the only faculty who trust the evaluation system are the current and former members of the FAC.

At 5:20 p.m. Breitenbach M/S/P to adjourn.

Respectfully submitted,
William Breitenbach