
MedEdPORTAL Publications
Educational Summary Report Template: Assessment
The Educational Summary Report (ESR) is a required part of all submissions to MedEdPORTAL Publications. The ESR
provides a summary overview of the entire submission and should serve as a guide for understanding the purpose
and scope of the resource. Authors should be aware that a MedEdPORTAL publication consists of the ESR and all
component resources contained within the submission.
This template is designed to help you develop your ESR in preparation for a successful submission of an assessment
instrument to MedEdPORTAL. The information below is specific to assessment submissions only; for other resource
types, follow the standard ESR template found in our Submission Instructions. Note that all information contained
in an accepted and published ESR will be viewable by the general public. Special clearance materials (i.e., those that should remain restricted from students) must therefore be placed in the appendices rather than in the ESR itself.
Directions: Provide complete and succinct responses for each section. Ensure that your responses are publication-ready by paying attention to spelling, grammar, and word choice. Use the same font and formatting found in this document (10-point Calibri font, bolded headings, one-inch margins).
I. Title (limit to 18 words)
Use a descriptive title that provides prospective users with a good understanding of your resource materials, the
intended learner population, and the resource’s implementation.
II. Abstract (limit to 250 words)
Provide a structured abstract of the submission, including introduction, background/rationale for the work,
methods used, and brief summary of the results, including two to three sentences explaining how the resource
advances the field. Do not use references in your abstract.
III. Educational Objectives
All objectives should be numbered and SMART (Specific, Measurable, Action-oriented, Relevant/Realistic,
Timely/Time-bound). Remember to use appropriate levels of Bloom’s taxonomy. Objectives should be
learner/learning-focused and reflect how the assessment measures targeted knowledge, skills/performance,
and/or attitudes/values. The objectives should also indicate how the assessment will be used to advance student
learning or program evaluation.
IV. Introduction
Describe the construct being measured and the conceptual basis for that construct and its measurement. Connect
the development of this assessment with prior work (i.e., work developed by yourself and by others), including
theories, conceptual frameworks, and/or models. Include the target population and other contexts in which the assessment can be used. Provide details about the type of assessment and how it was administered. Specifically,
explain how this assessment fills a gap in health professions education and its relationship to other competencies,
milestones, or entrustable professional activities. Include observations, experiences, and/or evidence-based best
practices from outcome studies. Be sure to cite relevant references, including related or similar resources in
MedEdPORTAL Publications.
V. Methods
Describe the development of the assessment, including how the construct was defined (e.g., Was a literature review on the construct conducted? Were construct experts consulted?), how and why items were selected, how the items are
supported by the literature, what items were borrowed from other assessments, and how this assessment differs
from previous assessments.
Explain how the assessment was implemented, including resources needed to utilize the assessment (e.g., space,
equipment, or technology requirements), training required for evaluators and examinees, and timing of delivery
(e.g., at the end of a unit or during a stage of training). Describe how feedback is provided to those being assessed
and whether remediation/re-teaching is intended based on the assessment results.
Describe in detail each of the files in your submission, including all instruments, assessment tools, templates,
protocols, videos, or faculty development resources you have provided. Ensure file names match those in your
submission form. Note any other resources, preparation guidance, and necessary materials (e.g., audiovisual
equipment, simulators, room setup, notepads, flip charts, etc.) and other key steps or instructions that would
facilitate success for users of your resource (e.g., promotional materials).
Include and explain any supplementary materials such as scoring rubrics or guides, evaluator training manuals, test
case(s), or other aids for administering the assessment. These types of aids are expected as components of an
assessment submission unless a rationale is provided as to why they do not apply to your assessment.
Be clear and specific enough so that a user could implement the material(s)/activities with the same or similar
quality and outcomes without having to seek out additional explanation. Include information regarding the validity of the assessment in its intended context. Validity evidence is categorized into five domains (Table 1); detail the validity evidence for your assessment in at least one of these domains. Describe the strengths and weaknesses of the method of developing and administering this assessment and how they affect its validity.
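Where rater judgment is involved, response-process and internal-structure evidence often include an index of inter-rater agreement. The sketch below is a minimal, purely illustrative way to compute Cohen's kappa for two raters scoring the same examinees; the rater names and ratings are hypothetical and not part of this template.

```python
# Minimal, purely illustrative sketch: Cohen's kappa for agreement between
# two raters scoring the same examinees on a pass/fail item. The rater
# names and ratings below are hypothetical, not part of this template.

from collections import Counter

def cohens_kappa(ratings_a, ratings_b):
    """Chance-corrected agreement between two raters over the same examinees."""
    n = len(ratings_a)
    observed = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    freq_a, freq_b = Counter(ratings_a), Counter(ratings_b)
    categories = set(freq_a) | set(freq_b)
    expected = sum((freq_a[c] / n) * (freq_b[c] / n) for c in categories)
    return (observed - expected) / (1 - expected)

# Hypothetical example: eight examinees scored by two raters
rater_1 = ["pass", "pass", "fail", "pass", "fail", "pass", "pass", "fail"]
rater_2 = ["pass", "fail", "fail", "pass", "fail", "pass", "pass", "pass"]
print(f"Cohen's kappa: {cohens_kappa(rater_1, rater_2):.2f}")
```

Values near 1 indicate agreement well beyond chance; values near 0 indicate agreement no better than chance. For multiple raters or continuous scores, an intraclass correlation or generalizability analysis may be more appropriate.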
VI. Results
Describe the results of implementation, including the types of learners who experienced your materials, the
variety of uses for the assessment, and the evidence of success. Qualitative and/or quantitative data should be
provided. This may include learner satisfaction data, learner pre- and posttest data, an analysis of a post hoc focus
group, etc. Estimate the reliability of the results using one or more standard indices of reliability.
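As one illustration of such an index, the sketch below computes Cronbach's alpha for internal-consistency reliability from an examinee-by-item score matrix. It is a minimal example with invented data; the function name and scores are assumptions for the illustration, not part of this template.

```python
# Minimal, purely illustrative sketch: Cronbach's alpha as an index of
# internal-consistency reliability. The examinee-by-item score matrix below
# is invented and does not come from this template.

import numpy as np

def cronbachs_alpha(score_matrix):
    """alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)."""
    scores = np.asarray(score_matrix, dtype=float)
    k = scores.shape[1]
    item_variances = scores.var(axis=0, ddof=1)
    total_variance = scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical example: six examinees, four items each scored 0-3
item_scores = [
    [3, 2, 3, 2],
    [1, 1, 2, 1],
    [2, 2, 2, 3],
    [0, 1, 1, 0],
    [3, 3, 2, 3],
    [2, 1, 2, 2],
]
print(f"Cronbach's alpha: {cronbachs_alpha(item_scores):.2f}")
```

Report whichever index matches how the assessment is scored and administered (e.g., a test-retest correlation for repeated administrations, or an inter-rater index for observer-scored instruments).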
VII. Discussion
Reflect on the entire process of design, development, evaluation, and dissemination. Reflect on the extent to
which the results successfully addressed the problem or opportunity described in the introduction. Convince the
reader that this resource will likely be useful to others. In particular, describe how many learners have been
assessed with this measure and in how many different contexts (e.g., problem-based learning curricula, single site
vs. dispersed models, etc.). Describe the assessment’s use at other institutions and/or published data using the
same instrument (provide relevant citations).
Evaluate the assessment in terms of cost, utility, contextual factors, impact on learners being assessed, etc.
Describe the challenges encountered, insights gained, limitations, future opportunities, and planned revisions.
VIII. References
List all references in AMA style, as set out in the AMA Manual of Style (10th edition, section 1, chapter 3). The
reference list should be arranged in the order that references are first cited in the ESR and appendices.
In the text, cite references via superscript Arabic numerals (1, 2, 3, etc.) based on the corresponding number on the
list. For instance, the 13th reference on the list should be cited as 13 in text. When superscript numerals appear
next to punctuation marks, they should be placed after periods and commas but before colons and semicolons.
A helpful link on the basics of AMA style for references is http://www.biomedicaleditor.com/ama-style.html.
IX. Appendices
In a published resource, all files other than the ESR will be linked to the ESR as appendices. Provide a list of these
files/appendices, labeling each by letter, beginning with Appendix A. Use the file names as appendix titles.
Provide additional files as subsequent appendices, which may include items such as faculty development materials,
summaries of clinical information needed for your resource, assessment tools, implementation data, video files,
PowerPoint presentations, and any other files needed to implement your resource.
Table 1. Examples of Validity Evidence*

Area of Validity: Content
Questions to Ask: Does the content (e.g., items) of your assessment match the construct that the assessment is intended to measure?
Examples:
• Do items on an end-of-clerkship examination reflect the learning objectives of that clerkship?
• How did you familiarize yourself with the construct? Did you conduct a thorough literature review on the construct? Did you consult with experts?

Area of Validity: Response Process
Questions to Ask: How have sources of error in administering the instrument been minimized? What sources of error affect how individuals complete the assessment? What are the statistical characteristics of the assessment items?
Examples:
• Are students/faculty familiar with the assessment method used? If not, have they been trained in the use of the assessment?
• Is there consistency in how scores are interpreted/fed back to examinees?
• Do individuals interpret assessment items the same way (can be evaluated by cognitive interviews, "think aloud")?
• Are raters consistent in how they assess examinees (inter-rater reliability)?

Area of Validity: Internal Structure
Questions to Ask: How many dimensions are measured by the assessment? How reproducible are scores?
Examples:
• Is the assessment uni- or multi-dimensional (factor analysis)?
• How well do items hang together (inter-item reliability)? How well do items discriminate between high and low performers?
• Are scores reproducible if the test is repeated over time (test-retest reliability)?
• How consistent are evaluators in assessing learners (inter-rater reliability)?
• How much error in the assessment scores is attributable to the examinees themselves versus the raters or the cases (generalizability theory)?

Area of Validity: Relationship to Other Variables
Questions to Ask: Has the assessment been studied in terms of its relationship to other similar or dissimilar variables or behaviors?
Examples:
• How well do scores on your assessment correlate with scores on a criterion, or “gold standard”, assessment?
• Do scores on your assessment correlate with those of other assessments that measure the same trait or construct?
• Do scores on your assessment predict future outcomes or learner behaviors?

Area of Validity: Consequence
Questions to Ask: What is the assessment’s impact on examinees, teachers, patients, and society?
Examples:
• Is the assessment formative or summative?
• If summative, what are the consequences of “failing”? What are the pass/fail rates for your assessment, and how do they compare to pass/fail rates on similar assessments? How was the decision made to set the cut scores/grades, and what was the rationale behind that decision? What are the consequences of false-positive (passing someone who should have failed) and false-negative (failing someone who should have passed) results?
• If someone performs poorly, is the construct (e.g., skill, knowledge) remediable, i.e., can it be learned?
• If formative, what learning resources accompany the assessment?
• Regardless of the type of assessment, are the resources required to implement it "fair" for the system/faculty?
• Does use of the assessment result in improved learning, patient benefit, or other meaningful outcomes?
*For more information on sources of validity evidence, please see the references below:
1. Downing SM. Validity: on meaningful interpretation of assessment data. Med Educ. 2003;37:830-837.
2. American Educational Research Association, American Psychological Association, National Council on Measurement in Education. Standards for Educational and Psychological Testing. Washington, DC: American Educational Research Association.
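As a purely illustrative companion to the internal-structure examples in Table 1 (how well items discriminate and hang together), the sketch below computes classical item difficulty and upper-lower discrimination indices for a dichotomously scored item set. The score matrix, function name, and 27% grouping convention are assumptions for the example, not part of this template.

```python
# Purely illustrative sketch of classical item analysis (difficulty and
# upper-lower discrimination indices), one form of internal-structure
# evidence noted in Table 1. The 0/1 score matrix below is hypothetical.

import numpy as np

def item_analysis(score_matrix, group_fraction=0.27):
    """Per-item difficulty (proportion correct) and discrimination
    (high-group minus low-group proportion correct) for 0/1 scores."""
    scores = np.asarray(score_matrix, dtype=float)
    difficulty = scores.mean(axis=0)
    order = np.argsort(scores.sum(axis=1))  # examinees ranked by total score
    n_group = max(1, int(round(group_fraction * scores.shape[0])))
    low, high = scores[order[:n_group]], scores[order[-n_group:]]
    discrimination = high.mean(axis=0) - low.mean(axis=0)
    return difficulty, discrimination

# Hypothetical example: ten examinees, five dichotomously scored items
rng = np.random.default_rng(0)
responses = rng.integers(0, 2, size=(10, 5))
difficulty, discrimination = item_analysis(responses)
print("Item difficulty:     ", np.round(difficulty, 2))
print("Item discrimination: ", np.round(discrimination, 2))
```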