Simmons response to Rucki

Memo to SRI Task Force, February 13, 2014
To: SRI Task Force
From: Jim Simmons, Co-Chair
Re: Thoughts about SRI Numbers and Use of Student Comments in Faculty Reviews
I want to express my thanks to Sheila Rucki for the work that she did in preparing her February 3, 2014
SRI Task Force Proposal which was discussed at the meeting of February 4, 2014. It is detailed and
thoughtful.
I spoke briefly at the end of our meeting about several concerns I have about Sheila’s proposal, and
wanted to write to all of you in advance of our next meeting to spell out those concerns and, in fact, my
opposition to the proposal.
As you know, Sheila’s proposal is that written comments should not be provided in review materials for
summative purposes, either at all or until a review process can be implemented that would provide a
method to guard against extreme outliers. She advocates the use of student comments for formative
purposes by the faculty member, chairs, and deans.
My current position is as follows: for the reasons below, I believe it is helpful and useful to include
student comments for all reviewers at all levels of review for summative purposes, subject to
restrictions that I think this Task Force can meaningfully create. Specifically,
1. Having served on department, school, and Faculty Senate RTP Committees, I have found the
comments to be data, when added to all the other pieces of information that our former dossier and our
current portfolio systems provide. An early consensus of our Task Force, which I do not believe has
changed, is that the two SRI questions are good as they stand. Another early consensus that I think still
exists is that student comments should be gathered; our issue is how to use them. But I think that
student comments provide information that is different in kind from the SRI numerical scores. I think
remarks by members of our Task Force in our various meetings have confirmed that they have had the
same experience that I have had in finding them useful for obtaining a more complete picture of a
faculty member.
2. I think there would be, for lack of a better term, a “due process” problem if only the “supervisory
chain” reviewers (chairs and the dean, and possibly the Provost, although Sheila did not include her in
her proposal) have access to student comments, even if only for formative purposes, while department,
school, and Faculty Senate Committee members do not have access to the comments. In short, some
will have access to more information about a candidate than others will, yet all reviewers within the
system are tasked with providing evaluations on faculty in the areas of teaching (including advising),
scholarly activities, and service. I think there could be a possible legal problem in the event of a negative
outcome for a faculty member’s submission in a system that provides some information to some
reviewers but not others.
3. As a practical matter, I think we can all acknowledge the folly of the commonly heard instruction in
courtrooms by judges: “The jury will disregard that remark.” In a similar vein, how can we expect
chairs, deans, and perhaps the Provost to use student comments that they have read for formative
purposes only, and then put their thoughts about those comments aside when doing a summative task,
such as writing a letter to recommend or not recommend retention, tenure, promotion, or post-tenure
review?
4. I think many of the concerns expressed in the Faculty Senate Letter of October 7, 2013 are valid, and I
think a valuable contribution of this Task Force would be to recommend a meaningful set of restrictions
about the availability and use of student comments that address many of the issues raised in that letter.
For instance, I think that we can recommend some or all of the following:
• Comments can be uploaded to Digital Measures in a secure fashion yet to be determined.
• Comments can be electronically removed from everyone’s view of a portfolio in Digital Measures at a
pre-determined time, such as the conclusion of the particular review cycle for which the portfolio was
submitted. For instance, once a person is retained, say, after a third-year review, all comments can be
permanently deleted or blocked from Digital Measures. The same could apply to anyone who has
achieved promotion or tenure, or undergone a post-tenure review.
• Perhaps the Task Force could recommend that student comments from just one class be uploaded
each semester, not all comments for all classes. The literature seems to suggest that the larger the
class, the more representative the student comments would be. Therefore, our recommendation might
be to upload and make available comments from the faculty member’s largest class each semester (or
some other agreed-upon standard).
• Statements can be incorporated in the Faculty Handbook that attempt to spell out how reviewers at
all levels can sensibly review the comments. The Task Force, at least for now, seems interested in
developing a specific statement cautioning about the possibility of personal-bias items in the
comments (race, ethnicity, national origin, non-English first language issues, sexual orientation,
gender, and gender identity). A broader set can be developed and included in the Handbook. The
Appendix to this memo contains some direct quotes and some paraphrased principles and cautions
that other schools have posted on their websites for both faculty themselves and for reviewers to keep
in mind when looking at both SRI numbers and student comments. My belief is that our Task Force can
develop a useful set of guidelines for reviewers in our system to use, incorporating some of the
principles listed in the Appendix, plus some we develop.
Thank you for taking the time to read this, and if you have written comments, please “reply to all.” I
trust we will continue our lively and interesting discussions on many of these issues at our upcoming
meetings.
Appendix: Some Statements on Other Schools’ Websites about SRI Numbers and Student Comments
• Look for patterns over time by comparing multiple courses across multiple semesters to form
generalizations about teaching effectiveness. (University of Connecticut)
• Remember that SRIs or student comments are not random and therefore may not be representative
of the entire class. (University of Connecticut)
• Do not over-interpret small differences in median ratings. (University of Connecticut)
• Use multiple sources of data when evaluating faculty. (University of Connecticut)
• Inform all reviewers of the possibility of gender bias and that bias may influence evaluations and
rewards if that potential bias is unexamined or unchallenged. (Indiana University, Bloomington)
• The university should require departments to review regularly their tenure, promotion, and
salary-setting guidelines for unintended gender bias and similarly examine university evaluation
procedures for the same. (Indiana University, Bloomington)
• The university should increase accountability for eliminating gender bias by requiring
decision-makers to communicate and defend their decisions, such as by compiling and publishing an
annual review of gender equity benchmarks (salary, tenure rates, time in rank, proportion of chairs
and other administrators who are female), monitoring individual departments for increasing gender
equity, and regularly asking chairs to demonstrate whether women’s and minorities’ salaries are in line
with their accomplishments. (Indiana University, Bloomington)
• Review committees should be as gender-integrated as possible. (Indiana University, Bloomington)
• When evaluating a faculty member, reviewers should imagine a scenario in which the person being
evaluated was of another gender, and ask whether the evaluation would be the same. (Indiana
University, Bloomington)
• The university should provide training grounded in systematic, well-designed research that
documents the existence of bias processes. (University of Pennsylvania)
• Hold reviewers accountable for decisions, because individuals who know they will be required to
justify their decisions (particularly to an impartial higher authority) tend to engage in more complex
thought processes when making evaluations. This helps people avoid making the kind of snap
judgments that can lead to applying stereotypes when making decisions. (University of Pennsylvania)
• Take the context and characteristics of your course into account. Research shows that student
evaluations often are more positive in courses that are smaller rather than larger, and elective rather
than required. Also, evaluations are usually more positive in courses in which students tend to do well.
(Vanderbilt University)
• Accepting the positive and interpreting the negative with caution can help you maintain balance in
your view of the overall tone of the comments. Because student evaluations are anonymous, positive
comments are usually genuine, and you should not minimize their importance. Extremely negative
comments, on the other hand, can reflect pressures students feel and their dissatisfaction with a broad
range of educational issues, over and above your teaching, so don't overemphasize them. (Princeton
University)
• When you review the open-ended questions at the end of the evaluation report, keep in mind that the
most useful comments are those that are specific and relevant. Sometimes the most glowing comments
(“I loved the professor” or “the professor was awesome”) make you feel good, but do not contain any
information that you can use to identify your strengths as an instructor. By the same token, you will
sometimes receive negative comments that are vague or seem personal (“The homework assignments
were stupid” or “This class was a waste of time”). The anonymity of the online evaluations may
encourage some students to write harsh comments or to not take the process seriously. (Washington
University, St. Louis)
• Rather than dwell on comments that are either highly positive or highly negative, look for the
comments that help you identify specific aspects of the course that worked well and specific areas you
could improve upon. In addition, try to find the rationale behind the students' responses. For example,
if they did not like the papers, why not? Did they find the topics too difficult to tackle? Or did they need
additional guidance on what was expected on each paper? Or, if they liked your teaching, did they
appreciate your engaging lecture style, the time you took to give them thoughtful feedback on their
work, or the in-class activities you designed? (Washington University, St. Louis)
• Read, organize, and compare the comments from each question separately. That is, if you have more
than one open-ended question on your evaluation, begin by reading the responses to just the first
question on all the evaluations. Then go back and read the responses to the second question, then the
third question, etc. This approach permits you to focus on one topic at a time. (Syracuse University)
• Research indicates that the students who are very satisfied or very dissatisfied generally provide
written comments. (Syracuse University)
• It is helpful to determine the proportion of negative to positive comments for interpretative
purposes. This will assist you in determining if the comments are representative of the entire class or a
small minority of students. (Syracuse University)
• Comments that reflect positively on your teaching effectiveness can usually be considered genuine.
Since the course evaluation is anonymous, students do not usually write positive comments unless
they mean them. (Syracuse University)
• Since students’ anonymity is protected on student ratings, students may write negative comments
that range from sarcastic to vicious. Obviously, not all of these comments are constructive. Pressures
unrelated to you or your course may also underlie some of these comments. Keeping this in mind may
help to limit overreaction to certain comments. (Syracuse University)
• Negative comments need to be interpreted with caution. Normal human behavior often causes one
to take the negative comments to heart, regardless of how small the number. (Syracuse University)
• Individual strategies for analysing student feedback (all from the University of Limerick, Ireland):
o Control your defence mechanisms. Ask yourself: What kinds of reactions am I having to this feedback
and what is it likely to make me do in future? Make explicit the implicit emotions to which the
feedback is giving rise.
o Analyse the source of your students’ reactions in a way that sheds light on any issues and problems
that have been identified. Ask yourself: What are the reasons behind both the positive and negative
feedback provided by the students? Whether or not you can answer these questions easily, try to
pursue information via other methodologies (e.g. focus groups; one-to-one interviews, facilitated by
objective information gatherers).
o Remember to focus just as assiduously on the reasons behind positive as well as negative feedback,
keeping in mind that it can be just as professionally damaging not to know why students think you
have done well, as it is not to know why they think you have done badly.
o Work hard not to under-react or over-react to information that you receive via SET feedback. Ask
yourself: What are the changes that would enhance student learning, versus the ones that would have
neutral or negative impact on learning? Try to differentiate between the implications of different
changes implied by the feedback.
o Divide the issues raised by students into actionable and non-actionable categories. Ask yourself:
What aspects of this feedback can I do something about? What aspects of this feedback require a wider
institutional, administrative or resource-based reaction? Integrate these categories into your teaching
enhancement strategy. Simply put, it’s important that you don’t justify anything identified by your
students that is unjustifiable about your current teaching approaches, but equally that you don’t allow
yourself to become the scapegoat for issues that clearly need to be tackled at an institutional level.
o Communicate with students before and after their provision of feedback. Ask yourself: How can I use
the SET system to improve communication and to create constructive dialogue with my students? Do
not appear to ignore students’ participation in the SET system. Register with them that you are aware
of their impending participation in the feedback system and encourage them to take part as honestly
and constructively as possible. And when the results come in, devote a short session of one of your
lectures to presenting the summary data and explaining to your students what you will and will not be
doing as a result of the feedback they have provided. Student satisfaction levels can be significantly
increased via this kind of non-defensive, honest and reasonable communication. Ensure that they
know that no negative or recriminatory outcomes will be associated with their participation.
o Do not make the simplistic assumption that all positive responses are related to good teaching and
all negative responses are related to bad teaching. Ask yourself: What parts of this feedback most
robustly indicate where my teaching strengths and weaknesses lie? As outlined earlier in this chapter,
much of the literature on SETs cautions against the risk of giving rise to negative learning outcomes in
the pursuit of positive ratings. Some negative student reactions to your teaching may be related to a
vital part of their learning journey. This negative feedback can provide the basis for an enhanced
dialogue to help secure higher levels of student motivation and commitment. Likewise, be careful
about assuming that positive ratings are always related to good teaching. As outlined earlier, the
literature shows that there are moderators of student satisfaction that relate to other factors such as
disciplinary background, class size, student demographics and timing of feedback.
o Remember that small changes can have big effects. Ask yourself: What initial small changes can I
make based on the feedback that I have received that might have immediate and positive effects on
my students’ learning experiences in this learning setting? While not all changes implied by the
feedback will be easy or short term, it’s a good idea to identify some ‘low lying fruit’. Most participants
in a SET system can identify one or two small changes that are relatively easy to effect and that can
indicate to students that you have heard their voices and are registering their feedback through
immediate action. This can create positive momentum for more fundamental or strategic changes to
your teaching styles and approaches.
o Develop a teaching enhancement strategy that takes into account the SET feedback. Ask yourself:
What are my long-term teaching goals and how can this feedback help me to achieve them? Within a
short time of receiving the feedback, allocate a dedicated period of time in your schedule to develop a
longer-term teaching enhancement strategy. This strategy might include plans to receive more
feedback later in the semester or year, specific professional development interventions that you’d like
to avail of, more communication with other key members of your teaching network (heads of
department, IT specialists, researchers in your field, librarians, student advisers, study skills experts
and so on), and enhanced student assessment strategies.