Peer Assessment of Professionalism: Class of 2012 Report
Purpose
The purpose of introducing peer assessment of professionalism is to provide students with a
formative learning experience whereby they (1) are required to think deeply about specific
elements of professionalism, and about professionalism in general, and (2) receive
information about their level of professionalism as judged by their peers.
Methods
Second-year medical students completed assessments of professionalism on approximately 20
other students from their clinical group, as well as on themselves. The scale consisted of 13
items, each rated from 1 (do not agree with statement) to 10 (strongly agree with statement).
Students were instructed to mark questions they could not answer with a UA, and were
provided with space to write additional comments about the peer being evaluated. All peer
evaluations were completed anonymously. To maintain confidentiality and to
raise student confidence that the College will not have access to their assessments, the
Quantitative Research Unit (QRU) of the Department of Sociology collected data, performed
analyses for individual participants, and created and distributed individual reports. Educational
Support and Development was provided with de-identified data for the analyses used in this
report.
Results
Reliability
The peer assessment completed by students was found to have high internal consistency
(Cronbach’s alpha = .95). Item-total statistics revealed that alpha would not change greatly if
any item were removed, indicating that all items were answered in a similar manner. Inter-item
correlations were very high between Item 2 and Item 3 (r = .89) and between Item 2 and Item 11
(r = .90). Correlations of .90 or greater indicate multicollinearity, meaning that two items
essentially measure the same thing. Thus, Items 3 and 11 may be removed to help reduce
redundancy.
The self-assessment ratings were also found to be internally consistent (Cronbach’s alpha = .85).
Item-total statistics revealed that the internal consistency would not change greatly if any items
were removed. No items had correlations greater than .80, indicating that multicollinearity was
not present for self-evaluations.
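The reliability statistics reported above can be reproduced with a few lines of code. The sketch below computes Cronbach's alpha and the inter-item correlation matrix for a small, entirely hypothetical raters-by-items matrix; the actual assessment data are held by the QRU and are not reproduced here.

```python
import numpy as np

def cronbach_alpha(ratings):
    """Cronbach's alpha for a raters-x-items matrix of scores."""
    ratings = np.asarray(ratings, dtype=float)
    k = ratings.shape[1]                          # number of items
    item_vars = ratings.var(axis=0, ddof=1)       # per-item variance
    total_var = ratings.sum(axis=1).var(ddof=1)   # variance of total score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical 1-10 ratings from 6 raters on 4 items (illustration only)
scores = np.array([
    [9, 9, 10, 9],
    [8, 7,  8, 8],
    [10, 9, 10, 10],
    [7, 7,  7, 8],
    [9, 8,  9, 9],
    [10, 10, 10, 9],
])
alpha = cronbach_alpha(scores)

# Inter-item correlation matrix; off-diagonal values of .90 or greater
# would flag the kind of redundancy noted for Items 3 and 11 above.
inter_item = np.corrcoef(scores, rowvar=False)
```

Dropping a redundant item and re-running `cronbach_alpha` on the remaining columns is the same check as the item-total statistics described above: if alpha barely moves, the item adds little unique information.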
Peer Assessment
Overall, students (N = 85) received high ratings of professionalism from their peers (M = 9.34,
SD = .48). Scores ranged from 7.59 to 9.88. The distribution of scores was negatively skewed,
with a longer tail toward lower scores, as illustrated in the following histogram.
Self Assessment
Sixty-one students completed self-assessments of professionalism. Participants assessed their
professional skills highly (M = 9.32, SD = .60), with total scores ranging from 8 to 10.
Distribution of scores is presented in the following histogram.
Peer and self assessments were not significantly correlated (r = -.07, p = .61). However,
aggregate means for peer and self-assessments were very similar (9.34 and 9.32, respectively).
Thus, while self-assessments may have poor validity, aggregate self-assessment data may be
useful in assessing a group of students’ overall professionalism.
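The contrast between a near-zero individual-level correlation and closely matching group means can be illustrated with a short sketch. The paired ratings below are hypothetical, not the actual study data:

```python
import numpy as np

# Hypothetical paired means for 8 students (illustration only):
# each student's average rating received from peers vs. their self-rating.
peer_means = np.array([9.1, 9.5, 8.8, 9.7, 9.3, 9.0, 9.6, 9.2])
self_means = np.array([9.4, 9.0, 9.6, 9.2, 9.5, 9.3, 8.9, 9.7])

# Pearson r between the paired ratings (np.corrcoef returns a 2x2 matrix)
r = np.corrcoef(peer_means, self_means)[0, 1]

# An r near zero alongside similar group means is the pattern reported:
# individual self-ratings track peer ratings poorly, yet the aggregate
# (class-level) means agree closely.
group_gap = abs(peer_means.mean() - self_means.mean())
```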
Student Evaluation
Students were asked to provide feedback on the evaluation process and items. Some students
commented that this process would result in useful feedback, as illustrated in the following
comments:
“Will give us all better ideas on how to improve ourselves both professionally and
outside the workplace.”
“I felt this was an excellent exercise. If it is possible, it might be beneficial to have
evaluations due in January so that students have a chance to reflect and provide more
in-depth feedback for each other. I hope all the information is as objective as possible.”
Several students noted that they found it difficult to complete the evaluation as they did not have
adequate contact with their peers. This is reflected in the following comments:
“I was unable to evaluate many of the individuals in this assessment since I have never
been in a professional setting with them.”
“It would have been nice to have/evaluate certain individuals from pro-skills (Med I)
groups. I would have been in a better position to evaluate these individuals compared to
others whom I have not had the pleasure of working with.”
“I did not have many people that I had close interaction with included in those I was to
evaluate. This made the process quite tedious and frustrating because I did not have
much meaningful information to share.”
“I found it incredibly difficult to assess clinical experience objectively. I struggle to
separate personal relationship experiences from work/clinical experience especially
when clinical experience was so limited. (I may have had one patient encounter with each
individual I assessed, but often it was a group encounter).”
Some participants expressed concern over the anonymous evaluations. These students indicated
that they would prefer face-to-face discussions, as illustrated in the following comments:
“I would rather have final sessions with our individual groups at the end of the semester
to discuss some of the “professionalism” issues instead of an anonymous forum leaving
people resenting the disputed comments from peers. I recognize some kind of outlet is
necessary to address these issues; however, I think peer-to-peer confrontation is more
productive and a better learning tool than a backhanded survey.”
“I don’t feel the method of anonymously grading our peers is an appropriate method of
doing this exercise. In the future, some will be working in a self-regulated profession
where we will have to speak directly to our co-workers to make suggestions/give praise.
Knowing who makes these grades, and what situations they are thinking about when grading,
is important and lacking in this process.”
“I am rather uncomfortable with the idea of peer assessment. I understand the importance
of being evaluated, but this does not seem like a good tool. Anonymous criticism, that may
in fact be hurtful, does not benefit us and can create a very tension-filled environment
between students. If one truly wants feedback, holding a face-to-face meeting would
be much better because the critiques can be discussed, rather than given in a
manner where there is no way to trace them back to the person who said it. I recognize how
anonymity can be desirable so we can be truthful in our statements, but I feel the harm of
this is much greater than the benefit. At this stage we should be mature and professional
enough to confront individuals that we feel are displaying inappropriate and
unprofessional behaviour. In the future, this is the way we will be dealing with issues that
may arise between colleagues; therefore we should be practicing and
modelling this way of evaluating rather than using anonymous peer assessments. I also
believe people are hesitant to write their thoughts about people in fear of possible
consequences. Even though it is anonymous, our class is already aware of how harmful
negative evaluations can be and are apprehensive of how this may change our class
dynamics.”
Some participants requested greater clarification, as illustrated in the following comments:
“I don’t know, for some of the questions, if we were supposed to be rating based on how
they are with patients or with peers. Although there shouldn’t be a difference, there is,
and I think there should be a separation in the questions: “peers” vs. “patients.”
“I believe it would have been better if the scale had been clarified a bit more. In
addition, items such as “put suggestions to good use” are a bit too open-ended. While I
see the value in this project, I may have put more thought into comments had this been
done at a less busy time in the semester.”
“I have a suggestion for the scoring system. Perhaps having a more descriptive list of
scores (e.g. 10 = exceeds expectations, 9 = meets expectations, etc.). Otherwise I find no
difference between 10/9 or 8/7, etc.”
Future Directions
Correlational analyses comparing peer assessment and MMI score will be conducted by
researchers from the QRU. Interviews will be conducted with approximately 10 participants to
better understand how students perceived this process. During the interview, participants will be
asked how helpful they found the formative report, whether they made any changes based on the
report, whether they felt they were able to accurately perform peer and self assessments, and any
suggestions for change. Finally, this process will be repeated with first year medical students in
early 2011.