A Program Director’s Guide to the
Medical Student Performance Evaluation
(Former Dean’s Letter) With a Database
James B. Naidich, MDa, Gregory M. Grimaldi, MDa, Pamela Lombardi, MDa,
Lawrence P. Davis, MDa, Jason J. Naidich, MDa
Purpose: The value of the Medical Student Performance Evaluation (MSPE) for a program director is in the
information it contains comparing how a student performed in medical school relative to his or her classmates.
The Association of American Medical Colleges has recommended that a student’s class ranking be included in
the summary paragraph of the MSPE and that this information be repeated in a supplementary appendix.
Methods: The authors reviewed the MSPEs from 1,479 applications for residency training positions. The
aim was to determine to what extent and in what manner individual schools reveal how their students perform
relative to their peers. The authors then set out to create a database containing this information.
Results: Working from a list of 141 US members of the Association of American Medical Colleges, the authors
gathered complete information for 107 schools (76%) and partial information for the remaining 34 schools
(24%). Only 12 schools (9%) included complete comparative information in the summary section in
accordance with the guidelines of the Association of American Medical Colleges. Other schools were in partial
compliance or did not comply at all. The database the authors constructed will inform users if comparative
information is available, guide users to its location in the MSPE, and explain the meaning of the language
different schools use to rank or classify their students.
Conclusions: The authors recognize that this database is incomplete and that the individual institutions
will alter their ranking system from time to time. But this database is offered in an open format so that it can
be continuously updated by users.
Key Words: Dean’s letter, MSPE, medical student performance, resident applications, student evaluations
J Am Coll Radiol 2014;11:611-615. Copyright © 2014 American College of Radiology
INTRODUCTION
The Medical Student Performance Evaluation (MSPE),
formerly the dean’s letter, has the potential to provide
valuable information, not available elsewhere, concerning how a medical student has performed relative to his
or her classmates [1]. This information is of particular
importance to program directors trying to decide which
applicants to invite for interviews. But despite the
importance of this information, as the Association of
American Medical Colleges (AAMC) wrote in its 2002
aNorth Shore-LIJ Health System, Hofstra North Shore-LIJ School of Medicine at North Shore University, Department of Radiology, Manhasset, New York.
Corresponding author and reprints: James B. Naidich, MD, North Shore-LIJ Health System, Hofstra North Shore-LIJ School of Medicine at North Shore University, Department of Radiology, 300 Community Drive, Manhasset, NY 11030; e-mail: jnaidich@yahoo.com.
Guide to the Preparation of the Medical Student Performance Evaluation [2], “A common recurrent complaint
of those who interpret deans’ letters of evaluation is that
too often it is impossible to estimate how a candidate
performed in comparison to his or her peers.” This is a
problem with a long history.
In 1989, Wagoner and Suriano [3] wrote a short
paper titled “A New Approach to Standardizing the
Dean’s Letter.” Their paper pointed out that the inconsistencies among different medical schools reduced
the value of the dean’s letter and that residency program
directors would benefit from a concise and comprehensive form that was consistent from school to school.
That same year, the AAMC first published guidelines
recommending that the dean’s letter contain information
that would allow readers to understand each medical
student’s performance compared with his or her peers
[4]. In 1993, Hunt et al [4] published a paper titled
“Characteristics of Dean’s Letters in 1981 and 1992.”
They reported that 45% of medical schools were not
compliant with the 1989 AAMC guidelines. The authors
warned that failure to provide comparative information
would diminish the value of the dean’s letter, with the
result that program directors would place too much
emphasis on “simple-minded numerical scores from
licensure examinations.” In a follow-up paper written in
2001, Hunt et al [5] reported some improvement, but
35% of schools were still noncompliant.
In 2002, the AAMC [2] again addressed the issue.
First, they renamed the dean’s letter the MSPE. Then the
AAMC specifically recommended that the MSPE
contain a summary section and that the summary section
include a student’s comparative performance in medical
school relative to his or her peers. The AAMC further
recommended that the summary section define the
school-specific categories used in differentiating among
the levels of student performance. The AAMC also recommended that the MSPE contain a supplementary
appendix D presenting a graphic representation of the student’s overall performance relative to his or her
classmates, with numerically defined boundaries for the individual medical school’s specific categories.
In 2007, in our first paper [6], we documented deans’
continued indifference to compliance with the AAMC
recommendations. We echoed the concern of Hunt et al
[4,5] that the lack of evaluation accuracy of the MSPE
resulted in de facto ceding of this task by the deans of
American medical schools to the United States Medical
Licensing Examination. The problem with this, we
thought, is that licensing examination scores, however objective, are thin statistics that offer no insight
into a student’s interpersonal or communication skills,
medical professionalism, or other domains beyond
medical knowledge. This kind of information can be
included in a well-written dean’s letter and is information not found elsewhere in the application.
Shea et al [1], in a paper published in 2008, shortly
after ours, reached many similar conclusions. Importantly, they found that the summary paragraph provided
comparative information in only 17% of cases, contrary
to the 2002 AAMC guidelines.
Most recently, in December 2012, Green et al [7]
wrote an op-ed piece in Virtual Mentor: American
Medical Association Journal of Ethics titled “Standardizing
and Improving the Content of the Dean’s Letter.” Note
how similar this title is to that used by Wagoner and
Suriano [3] 23 years earlier. Not only the title but the
issues discussed have remained the same.
Clearly little has changed in the intervening years.
Too often it remains difficult to judge a student’s performance from reading the MSPE. To try a new
approach to remedy the situation, we set out to develop
a database. Our purpose was to create a tool to enable
program directors to quickly extract the useful information from the MSPE.
METHODS
Institutional review board approval was obtained for
this project.
We reviewed the Electronic Residency Application
Service submissions of students from the US AAMC-accredited medical schools [8]. The submissions were
for students applying for graduate medical education
positions in diagnostic radiology at 3 institutions: North
Shore University Hospital (Manhasset, New York),
Northwestern University (Chicago, Illinois), and Long
Island Jewish Medical Center (New Hyde Park, New
York) for the 2012-2013 academic year. There were
239 applicants to the North Shore program, 544 to
Northwestern, and 224 to Long Island Jewish. We then
reviewed an additional 472 applications to the Northwestern program for the 2013-2014 academic year.
The MSPE accompanying each of these 1,479 applications formed the basis for our investigation.
We set out to create a comprehensive database. Our
intent was that a program director reading an application from an unfamiliar school might access our database and quickly learn what comparative information
the individual medical school offered and where in the
MSPE, whether in the summary paragraph, an appendix, or another location, this information was located.
Furthermore, when comparative information was not
included in the MSPE, the database would so inform the
user, to prevent fruitless searches.
The first column of the database is an alphabetized list
of the 141 US AAMC member schools [8], including
their full names and their city and state addresses, for
easy identification.
We next examined the summary paragraphs of the
MSPEs to determine what comparative data they
included. A rare summary paragraph would contain the
exact class ranking of a student. Somewhat more often,
the summary paragraph would contain a numerically
defined category, perhaps the student’s performance
quartile. Most often, the student’s performance was
categorized using a descriptor. The most frequently used
descriptor was excellent. Often, but not always, the
descriptor was used in a defined hierarchy. The most
commonly used hierarchy, in descending order, was
some variant of outstanding, excellent, very good, and
good. The schools would often assign numeric boundaries for each of the descriptor categories. Although the
descriptor was frequently included in the summary
paragraph, its definition was not. Sometimes this information was located in Appendix D in accordance
with the AAMC guidelines. Sometimes the information
was included in another location in the MSPE. Sometimes the schools used undefined descriptors. Sometimes
there were no comparative data whatsoever. Sometimes
the schools would inform readers that it was their
intent to offer no comparative data, and other times they
would not. Regardless, we searched the MSPEs for
whatever school-specific comparative data they might
contain, and this information was included in a cell
next to the school’s name. In this manner we created
our database.
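To make that structure concrete, the following sketch encodes the most common hierarchy with hypothetical percentile boundaries; the actual boundaries vary from school to school and, as noted above, are often defined only in an appendix.

```python
# The most common descriptor hierarchy, encoded with hypothetical numeric
# boundaries (lower and upper class percentiles). Actual boundaries differ
# by school and are often defined only in Appendix D.
hierarchy = {
    "outstanding": (90, 100),
    "excellent": (60, 89),
    "very good": (30, 59),
    "good": (0, 29),
}

def percentile_band(descriptor):
    """Return the (low, high) percentile band for a descriptor, if defined."""
    return hierarchy.get(descriptor.lower())

print(percentile_band("Excellent"))  # -> (60, 89) under these hypothetical boundaries
```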
The design of our database is thus two columns wide.
The first column contains the alphabetized medical
school names and other identifying information. The
second column contains the corresponding data we
collected concerning the content of the MSPE (Table 1,
online only).
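As an illustration of this two-column layout, the sketch below builds a miniature version of the table as a CSV file; both rows are hypothetical condensations of the kind of cell text described above, not entries copied from Table 1.

```python
# A minimal sketch of the two-column database layout: school name plus a
# free-text cell summarizing the comparative information in its MSPE.
# Both rows are hypothetical, not excerpts from Table 1.
import csv

rows = [
    {
        "school": "Example University School of Medicine, Anytown, ST",
        "mspe_notes": "Hierarchy outstanding/excellent/very good/good; "
                      "numeric boundaries defined in Appendix D.",
    },
    {
        "school": "Sample College of Medicine, Othertown, ST",
        "mspe_notes": "States that it does not rank students; no comparative data in the MSPE.",
    },
]

with open("mspe_database.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["school", "mspe_notes"])
    writer.writeheader()
    writer.writerows(rows)
```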
Once the database was constructed, it provided the means to investigate a number of interesting questions (a minimal tally sketch follows this list):
1. We identified the number of schools from which we
had no data.
2. We identified the schools from which we had
incomplete data. These were schools for which we
thought that more examples (larger sample sizes)
might be helpful in better understanding the descriptors used in the summary paragraph.
3. We counted the number of schools for which we had
complete data. These were schools for which we were
confident we had a complete understanding of the
MSPE content.
4. We counted how many medical schools would inform
readers in the summary paragraph of their students’
comparative performance in order to compare this
number to the 17% statistic reported by Shea et al [1].
5. We looked at use of the descriptor excellent. We
counted how many schools used excellent in their
summary paragraphs. We looked for schools that
used excellent in hierarchies with defined numeric
boundaries. For schools meeting this criterion, we
then graphed the range of excellent for 2013 (Fig. 1)
and compared it with a similar graph reproduced
from our 2007 data (Fig. 2).
6. Finally, we looked at the descriptor outstanding. We
counted how often schools would use outstanding as a
category descriptor for something less than their best
performers.
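Once each school’s entry carries a coded category, counts of this kind reduce to simple tallies. The sketch below is a minimal illustration under that assumption; the category labels and the three entries are hypothetical.

```python
# A minimal tally sketch. Each school is assumed to carry a coded category;
# the labels and entries here are hypothetical placeholders.
from collections import Counter

school_categories = {
    "Example University": "complete data; comparative info in summary paragraph",
    "Sample College": "complete data; no ranking offered",
    "Another School": "incomplete data",
}

tally = Counter(school_categories.values())
for category, count in tally.items():
    share = 100 * count / len(school_categories)
    print(f"{category}: {count} schools ({share:.0f}%)")
```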
Fig 1. 2013 Medical Student Performance Evaluation range of excellence data, 41 medical schools.
Fig 2. 2007 Medical Student Performance Evaluation range of excellence data, 35 medical schools. Three schools defined the upper but not the lower boundary (blurred) for their descriptors of excellent.
Note that we knew from the outset that there would be duplicate submissions for academic year 2012-2013 because the same student might easily apply to 2 or all 3 programs. We made no effort to correct for this because we thought that repetition would improve accuracy. Subsequently, we did cross-check our lists to determine the number of unique applicants.
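That cross-check amounts to collecting one stable identifier per application. A minimal sketch follows, assuming a hypothetical AAMC ID field on each record; the field names and records are illustrative only.

```python
# A minimal deduplication sketch. Field names and records are hypothetical;
# any stable applicant identifier (e.g., an AAMC ID) would serve.
applications = [
    {"aamc_id": "10000001", "program": "North Shore"},
    {"aamc_id": "10000001", "program": "Northwestern"},       # same applicant, second program
    {"aamc_id": "10000002", "program": "Long Island Jewish"},
]

unique_applicants = {app["aamc_id"] for app in applications}
duplicates = len(applications) - len(unique_applicants)
print(f"{len(applications)} applications from {len(unique_applicants)} unique applicants "
      f"({duplicates} duplicate submissions)")
```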
RESULTS
The chief outcome of this project is the database
(Table 1). As planned, the database is only two columns
wide. It is simple to use. With the database open, a
program director or coordinator using the keyboard
search function can rapidly locate a medical school of
interest. The adjacent cell will detail all the comparative
information we were able to find and where these
comparative data are located in that school’s MSPE.
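The keyboard search the database supports is equivalent to a case-insensitive substring match over the first column. The sketch below shows that operation, assuming the hypothetical two-column CSV file from the earlier Methods sketch.

```python
# Look up a school by case-insensitive substring and print the adjacent cell.
# Assumes the hypothetical mspe_database.csv illustrated in the Methods sketch.
import csv

def find_school(csv_path, query):
    """Return every row whose school name contains the query string."""
    with open(csv_path, newline="") as f:
        return [row for row in csv.DictReader(f)
                if query.lower() in row["school"].lower()]

for match in find_school("mspe_database.csv", "example university"):
    print(match["school"])
    print("   ", match["mspe_notes"])
```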
There were 141 medical schools listed as US members
of the AAMC [8]. We had data from all 141 schools. We
had incomplete data from 34 medical schools (24%).
These were schools that did not define their descriptors.
The amount of information we had from these schools
varied. At one end of the spectrum were schools from
which we had enough examples available to know some
but not all of the descriptors used in the summary
paragraph. For example, if one school described a student as very good and another student from the same
school as outstanding, most probably the second student
had performed better than the first. For other schools,
we had access to enough student applications to form
reasonable estimates of the school-specific category
hierarchies, but because the schools did not formally
define their hierarchies, we were not completely confident of our estimates. We think that with more examples of the MSPEs from these 34 schools, our database
could better describe how these schools stratify their
students’ levels of performance. Regardless of our
opinion, all we did was label the information cell in the
second column of the database “Incomplete Data.” We
then included in the cell whatever information we had
available concerning each school’s ranking system to
allow users to form their own opinions.
There were 107 medical schools (76%) for which our
data were complete. Of these 107, 13 medical schools
were explicit about not ranking their students’ performance. These schools offered no comparative information in their MSPEs. The database will readily identify
these schools. There were 88 medical schools that fully
described their ranking systems and used well-defined
category descriptors with numeric margins. Finally,
there were 8 new medical schools that have not yet
published MSPEs.
Because Shea et al [1] found that 17% of medical schools provided comparative data in the summary
paragraph, we looked at our own numbers. We found in our survey that only 10 schools (7%) routinely
included comparative information in the summary paragraph. Adding the 2 other schools that placed this
information next to, but not in, the summary paragraph brings the total to 12 schools (9%) in compliance
with this AAMC recommendation. Three additional schools offered this information in the summary section,
but only for their better students.
Concerning the choice of summary paragraph descriptors, 77 medical schools (55%) used excellent as a
descriptor in the summary paragraph, of which 41
(29%) used excellent in a defined hierarchy with defined
numeric boundaries. Figure 1 graphically represents the
range of excellent used by 41 medical schools in 2013.
Figure 2 graphs the range of excellent for 35 medical
schools in 2007. Qualitatively, they are similar.
Twelve schools used outstanding as their second-best category; the top category was exceptional in 3 schools,
distinguished in 3, superior in 2, and most outstanding, top student, superb, and recommend with
distinction in 1 each.
Of the 1,479 applications we surveyed, 329 were
duplicated or triplicated because the same student applied
to more than one program. There were 1,150 unique
applications.
DISCUSSION
There is an inherent conflict of interest between medical
school deans trying to make their graduating medical
students look as attractive as possible to place them in
the best graduate medical education programs available
and residency program directors trying to recruit the
best performing medical school graduates for their
postgraduate programs. This is probably the reason deans are reluctant to reveal their students’ comparative
performance in the summary paragraph, or sometimes to reveal it at all. Shea et al’s [1] finding of 17%
compliance was based on a survey of students graduating in
2005. Eight years later, we find that compliance has
dropped to 9%. Deans may become even more hesitant
to provide easily accessible comparative data in the
future. The growth of new medical schools is exceeding
the expansion of residency training programs [9]. This
will make it increasingly difficult for medical schools to
place their graduates in the specialty training programs
of their choice.
Medical schools have adopted a number of approaches
to circumvent the AAMC recommendation that the
MSPE provide readily accessible data to program directors. One such strategy, used more frequently by the
more prestigious schools, is to not rank their students.
We found 13 schools (9%) adopting this strategy. Examples include Stanford University and Yale University.
Our database can rapidly identify these schools to save
readers the effort of searching for nonexistent data. The
database cannot, of course, offer insight into applicants’
performance at such medical schools.
The drawback of not ranking students is that
although it may veil a poorer student’s performance, at
the same time, it does not reveal who the better performing students are. To get around this difficulty, some
schools will include students’ performance only when it
is favorable. Examples include the University at Albany
and Northeast Ohio Medical University.
Albany uses 3 descriptors in the summary paragraph:
recommend highly for the top third of their students,
recommend without reservation for the middle third, and
recommend as a positive presence for the bottom third.
When a student is in the top third of the class, the dean
will indicate this in the summary paragraph. However,
the dean will not inform the reader in the summary
paragraph that recommend without reservation designates
the middle third or recommend as a positive presence the
bottom third of the class.
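Albany’s scheme is one the database can translate directly. The mapping below encodes the three descriptors and the class thirds described above; the dictionary itself is our illustration, not material published by the school.

```python
# The University at Albany summary-paragraph descriptors mapped to class
# thirds, as described in the text. The encoding is ours, for illustration.
albany_descriptors = {
    "recommend highly": "top third of the class",
    "recommend without reservation": "middle third of the class",
    "recommend as a positive presence": "bottom third of the class",
}

def interpret_albany(descriptor):
    """Translate an Albany summary-paragraph descriptor into a class tercile."""
    return albany_descriptors.get(descriptor.lower(), "descriptor not recognized")

print(interpret_albany("Recommend without reservation"))  # -> middle third of the class
```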
For most students from Northeast Ohio Medical
University, comparative information was not included
in the summary paragraph, an explanatory appendix, or
elsewhere in the MSPE. However, an exception was
made for one student. His standing in the top quartile of
the class was proclaimed in boldface type on the first
page of the MSPE.
A variant of this strategy is to omit from the summary
paragraph the student’s ranking when it is at the bottom
of the class. Examples are Stony Brook University and
Indiana University. The database identifies this tactic.
A puzzling strategy used by some schools is to deny
ranking their students even when they do use defined
categories with numeric margins. Two examples are the
University of Chicago and Georgetown University. The
database provides the descriptor categories used by these
schools and their numeric boundaries.
Many schools will use attractive descriptors in the
summary paragraph and either not define the descriptors
or, more commonly, define their meanings in some
location remote from the summary paragraph. The most
frequently used descriptor is excellent, a word with a
pleasant connotation and ordinary English-language
definitions including “exceptionally good” and “of superior merit” [6]. However, MSPE writers assign very
different meanings to this word. Figures 1 and 2 show
that no school used excellent to describe its very best
students and also show how frequently excellent
was used to describe students in the bottom half of their
classes. Whereas the University of Chicago used
excellent to describe students ranking between the 5th
and 43rd percentiles, The Ohio State University used
the same descriptor for students in the 71st to 90th
percentiles, and Vanderbilt University used outstanding
to describe the top quarter of its class and excellent
to describe everyone else. Although an MSPE reader
will usually be correct in assuming that outstanding is a
category reserved for a school’s best performing students,
we found 13 examples (9%) for which this was not true.
A program director encountering a descriptor that is
undefined in the summary paragraph can reference the
database with the expectation that it usually will make
clear the school’s idiosyncratic use of the word.
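The spread Figures 1 and 2 depict can be visualized from boundaries like the two quoted above. The sketch below draws one "range of excellent" bar per school; only the first two percentile bands come from the text, the third is hypothetical, and the charting code is our illustration, not the method behind the published figures.

```python
# A sketch of a "range of excellent" chart in the spirit of Figures 1 and 2.
# The first two percentile bands are quoted in the text; the third is
# hypothetical. This is an illustration, not the code behind the figures.
import matplotlib.pyplot as plt

excellent_ranges = {
    "University of Chicago": (5, 43),       # 5th-43rd percentile (quoted above)
    "The Ohio State University": (71, 90),  # 71st-90th percentile (quoted above)
    "Hypothetical School": (25, 74),
}

schools = list(excellent_ranges)
lows = [excellent_ranges[s][0] for s in schools]
spans = [excellent_ranges[s][1] - excellent_ranges[s][0] for s in schools]

plt.barh(schools, spans, left=lows)
plt.xlabel("Class percentile band labeled 'excellent'")
plt.xlim(0, 100)
plt.tight_layout()
plt.show()
```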
One limitation of our database is that it is not complete. We have only partial data concerning 34 medical
schools. However, even for these 34 schools, usually
there is enough information for the database to
be helpful. On the other hand, for the remaining
107 schools, although our data are complete, there are
13 schools for which all the database will reveal is that
these schools elect not to offer comparative information
concerning their students’ performance. And there
are 8 new schools that have not yet begun to write letters. Despite these limitations, we think this is a good
start.
Another concern is avoidable errors. There is just too
much information included in this database for it not to
contain some errors. To some extent, our methodology
worked in our favor to reduce errors. We reviewed 1,479
MSPEs from 3 different programs. This ensured that
there would be multiple different student applications
from the same medical school. Furthermore, we were
unconcerned that for academic year 2012-2013, the
same student might easily apply to 2 or all 3 programs.
We welcomed the repetition. We thought the more
applications we reviewed, the fewer would be our errors.
In all, we had 1,150 unique applications.
A third limitation is that what had been true for
2012-2013 may no longer be true for the future. Deans
may change their policies from year to year. New schools
will begin submitting Electronic Residency Application
Service applications. This database might best be looked
upon as a work in progress requiring continuous
updating. However, if it proves as useful a tool as we
hope, as better data become available, users can fill in
the gaps and correct the mistakes.
If the AAMC, medical school deans, and program
directors were ever able to reach a consensus concerning
the use of well-defined category descriptors with numeric
boundaries, this would enhance the value of the MSPE.
Ideally, a student’s class ranking should be based
not only on grades but also on nonclinical categories
of achievement, including professionalism, research, and
community service. The AAMC’s mission [10] does
include “supporting the entire spectrum of education,
research and patient care activities.” Activities that
contribute to this goal, we think, deserve more credit in
evaluating a student’s performance.
However, for the interim, this database is our solution
to medical school deans’ reluctance to provide easily
understandable comparative information concerning the
performance of their graduating students. The database
should speed users’ ability to locate relevant information
within an MSPE and to understand an applicant’s performance relative to his or her peers.
TAKE-HOME POINTS
- The summary paragraph of an MSPE should describe a student’s performance in medical school relative to that of his or her peers.
- Most often, this comparative information is not included in the summary paragraph.
- Too often, this information is not included anywhere in the MSPE.
- The attached database describes what comparative information different medical schools make available.
SUPPLEMENTARY DATA
Supplementary data can be found online at: http://dx.
doi.org/10.1016/j.jacr.2013.11.012.
REFERENCES
1. Shea JA, O’Grady E, Morrison G, Wagner BR, Morris JB. Medical
Student Performance Evaluation in 2005: an improvement over the
former dean’s letter. Acad Med 2008;83:284-91.
2. Association of American Medical Colleges. A guide to the preparation of the medical student performance evaluation. Report of the Dean’s Letter Advisory Committee. Washington, DC: Association of American Medical Colleges; 2002.
3. Wagoner NE, Suriano JR. A new approach to standardizing the dean’s
letter. Acad Med 1989;64:688-9.
4. Hunt DD, MacLarean CF, Scott CS, et al. Characteristics of dean’s
letters in 1981 and 1992. Acad Med 1993;68:905-11.
5. Hunt DD, MacLarean CF, Scott CS, et al. A follow-up study of the
characteristics of dean’s letters. Acad Med 2001;76:727-33.
6. Naidich JB, Lee JY, Hansen EC, Smith LG. The meaning of excellence.
Acad Radiol 2007;14:1121-6.
7. Green MM, Sanguino SM, Thomas JX. Standardizing and improving
the content of the dean’s letter. Virtual Mentor 2012;14:1021-6.
8. Association of American Medical Colleges. Member directory search result—medical school. Available at: https://members.aamc.org/eweb/DynamicPage.aspx?site=AAMC&webcode=AAMCOrgSearchResult&orgtype=Medical%20School. Accessed December 9, 2013.
9. Iglehart JK. The residency mismatch. N Engl J Med 2013;369:297-9.
10. Association of American Medical Colleges. About the AAMC. Available
at: https://www.aamc.org/about/. Accessed December 9, 2013.