St. Petersburg College
Assessment Rubric for Critical Thinking (ARC)
Third Scoring Session Workshop Summary
Fall 2011
Purpose
The purpose of this report is to describe the process of administering and scoring the
Assessment Rubric for Critical Thinking (ARC). This was the third complete scoring
workshop since the initial pilot scoring workshop, which was conducted three years ago.
In addition to describing the administration and scoring process, this report presents
the scoring results and a discussion of the reliability of the ARC instrument.
Background
The ARC is a global rubric template developed for the College to provide a snapshot
view of how student learning is being affected by the critical thinking QEP initiative.
It is designed to assess a variety of student projects from a critical thinking
perspective. For example, students in a composition class may be asked to complete a
paper on a specific topic. The ARC rubric template is designed to evaluate the
student’s use of critical thinking skills in the development of the paper as opposed to
specifically evaluating the quality of the student’s writing skills. The ARC rubric template
is designed to be flexible enough to address a number of student project modalities
including written and oral communications. A copy of the ARC as modified by the
Ethics Department is located in Appendix A.
Validating the ARC
The development of any quality rubric is a long and arduous process. The original
version of the ARC was developed by the College’s first team of faculty champions in
conjunction with QEP staff and resources from their various disciplines. Refinement of
this initial instrument included a thorough review by faculty and assessment experts
from around the college as well as a preliminary test of the instrument on a sample of
discipline-specific course projects. Faculty champions determined the quality and
usability of the rubric through the rating of student artifacts and recommended initial
modifications.
ARC Assignment Profile
The ARC Assignment Profile is designed to ensure consistency and accuracy in the
evaluation of the ARC at the institutional level as well as provide guidelines for the use
of the assessment at the course level. The ARC is essentially a “tool” to evaluate
critical thinking, but for a tool to be effective it must be used in the correct situation
or “job.” For instance, it would be inefficient to use a machete to conduct heart
surgery. The purpose of the ARC Assignment Profile is to outline the most appropriate
course assignment, as follows:
1. Participating faculty should have one assignment during the course that can be
evaluated using the ARC scoring rubric. The course assignment could be a
graded homework assignment or a major assessment for the course.
2. The course assignment for the ARC should include all of the elements of the
rubric and should be aligned with the task outlined for each element.
Assignments that only evaluate some of the elements or are not aligned with
the specific ARC tasks will be considered incomplete and not used in the
institutional analysis.
3. Faculty may add additional discipline-specific rubric elements (such as grammar
and punctuation in a composition class), but must maintain the ARC elements as
listed.
4. Students should be provided a copy of the assignment rubric (ARC and any
additional discipline-specific elements). The specific elements and tasks
include:
a) Identification [Communication]: Define the problem in your own
words.
b) Research [Synthesis]: Suggest ways to improve/strengthen your final
solution (may use information not contained within the scenario).
c) Analysis [Analysis]: Compare and contrast the available solutions
within the scenario.
d) Decision Making [Problem Solving]: Select one of the available
solutions and defend it as your final solution.
e) Evaluation [Evaluation]: Identify the weaknesses of your final
solution.
f) Reflection [Reflection]: Reflect on your own thought process after
completing the assignment.
a. “What did you learn from this process?”
b. “What would you do differently next time to improve?”
5. The evaluation scenario (selected or created) should be stated in such a manner
as to allow the student to address each of the tasks.
6. Completed student assignments should include a copy of the scenario, the
assignment provided to the student (with the rubric), the student’s work and
the final graded rubric.
ARC Administration
The ARC was administered in six course sections of PHI 1600, Studies in Applied Ethics.
The course was selected by Faculty Champions based on the chosen interventions to
improve critical thinking in the discipline and the course content. A random sample
of six representative sections was selected from the 90 sections of the course. Table 1
provides the course section information for the six sections as well as the number of
completed assignments and number of scored assignments.
Table 1
Distribution of Assessments by Section of PHI 1600, Studies in Applied Ethics

Section   Completed ARC Assessments   Scored ARC Assignments
893                  27                         27
1355                  8                          8
1381                 25                         24
1538                 12                         12
2521                 11                         11
4553                 23                         23
Total               106                        105
SPC Faculty Champions and interested members of their academic roundtables (ART)
were invited to participate in the ARC scoring session in November as part of their
required monthly critical thinking events. The invitation to participate was also
extended to other interested SPC faculty members.
Workshop Description
The ARC Scoring Session was held on November 18, 2011, at the District Office of St.
Petersburg College in Room 102. A total of 106 ARC assessments were administered in
the six randomly selected course sections of PHI 1600, Studies in Applied Ethics. One
assignment was used as a sample paper for training, and all of the remaining 105
(100%) ARC assessments were scored at the workshop.
The attendees included: Ashley Hendrickson (lead facilitator), Maggie Tymms
(facilitator), Janice Thiel (facilitator/scorer), George Greenlee, Lynn Grinnell, Carol
Rasor, Susan Demers, Dave Monroe, JoAnne Hopkins, Barbara Grano, Brandy Stark,
Cyndy Grey.
The workshop was scheduled from 8:00 AM to 5:00 PM but began at 8:30 due to the
late arrival of several participants. Janice Thiel, Director of QEP, welcomed
everyone and thanked them for participating. Each attendee was provided a copy of
the ARC, the ethics scenario for fall 2011, a sample paper with the ARC scoring
template, and the validity and reliability form. A copy of the ARC with modifications
made by the Ethics Department is located in Appendix A. The ARC scoring template is
located in Appendix B.
The lead facilitator then reviewed the meeting agenda; a copy is located in Appendix
C. The workshop began with a PowerPoint presentation providing an overview of SPC’s
Critical Thinking Definition and Assessment. This was followed by a review of the ARC
and a discussion of the sample group, covering the history and development of the
ARC and the purpose of creating the assessment as a tool for improving student
success.
Following the presentation, participants were trained in the scoring process using a
sample assessment from one of the selected PHI 1600 sections, #1381. The group was
given time to review the ethics scenario and the first section of the sample paper.
Each participant scored the first section and the group discussed the rationale for their
scores, along with questions and considerations. After the group completed scoring
the entire sample paper, the ARC scoring session began. The following process was
used to score the ARC assessments:
1. The participants were provided a copy of the discipline-specific scenario prior
to the start of the workshop. A copy of the scenario is provided in Appendix D.
2. Each scorer reviewed the responses provided by the student for the first
competency on his/her first assessment, and scored it based on the scoring
rubric. This process was completed for each of the six competencies on the
assessment.
3. Scorers who encountered a response which did not clearly follow the rubric
discussed the response with the group for clarification.
4. The scorer then passed the scored assessment to the person on their right, who
scored it a second time.
5. In the event that two scores for a competency differed by more than one point,
the assessment was provided to a third scorer and a third score was recorded
for that competency (a minimal sketch of this rule appears after this list).
6. Once all scoring was completed for an individual student paper, it was placed in
the middle of the table for collection by the facilitator.
Periodic breaks were offered to participants. Once the scoring of all assessments was
complete, a review and discussion session ensued. Participants were provided
evaluation sheets at the end of the day to rate the validity and reliability of the ARC.
A copy of the ARC Validity and Reliability Form is located in Appendix E. The day came
to a close at approximately 3:30 p.m.
Workshop Evaluation
Five participants completed an evaluation of the workshop, all of whom were full-time
faculty. Three of the five responding faculty (60%) selected Better understanding
of rubrics for grading or Interaction with faculty in other disciplines as their major
purpose for participating. Two faculty members (40%) reported one of their major
purposes was to gain a Better understanding of assessment of critical thinking, and
one faculty member (20%) selected the purpose of gaining a Better understanding of
critical thinking. Other purposes for participating were also offered by faculty, which
included “Assisting faculty/department in achieving a goal” (n=1) and “QEP obligation”
(n=1). The response Classroom strategies for promoting critical thinking was not
chosen as a major purpose for participating in the workshop.
One hundred percent (n=5) agreed or strongly agreed that:
1) The facilitator was knowledgeable and well-organized.
2) After attending this workshop, I have a better understanding of the ARC
assessment process.
3) After attending this workshop, I feel I can build a new scenario or improve
upon an existing scenario that is closely aligned with the ARC rubric.
A majority of the faculty also agreed or strongly agreed that:
4) The information will be useful in my classroom or work (80%, n=4)
5) Participation in the ARC Scoring Workshop will impact my teaching (60%, n=3)
When asked for an open-ended response to the question, “Overall, what was the most
important or valuable thing you learned?” the comments received included: the
perspectives of other faculty members in relation to the ARC, a better
understanding of using rubrics to assess critical thinking, and the difference in
judgment when ‘grading’ an assignment versus ‘assessing’ with specific critical
thinking criteria.
Question 9 asked, “What would you like to know more about (relating to the ARC)?”
Two responses were received: one faculty member could not think of any additional
information they would like to receive, and the other would have liked additional
information regarding the relationship of the ARC to critical thinking.
Four of the five faculty (80%) agreed or strongly agreed that they would recommend
others attend the ARC Scoring Workshop. One faculty member (20%) disagreed, noting
that they did not understand why they spent the day scoring papers, as “This is
something that faculty already does.”
Comments for improvement to the workshop were logistical in nature and included
feedback such as shortening the scheduled time of the workshop and limiting the
number of papers scored per faculty member to no more than 10.
Participants were asked to identify which aspects of SPC’s definition of critical
thinking they found to be the most difficult to assess. Three faculty members reported
that Evaluation was difficult to assess (60%); two faculty found Problem Solving (40%)
to be difficult. Analysis, Synthesis, and Reflection were also mentioned as difficult
aspects to assess (each mentioned one time, 20%). Only one faculty member identified
more than one area as being difficult to assess.
The workshop survey can be found in Appendix F.
Results
To ensure consistency of scoring, multiple raters independently scored each student
assignment. If the initial two raters did not sufficiently agree (where their scores
differed by more than one point), then a third rater was asked to assess that item on
the student’s work. Table 2 displays the frequency of times that a third rater was
required to score an item by competency, for each of the sections, and overall.
Table 2
Frequency of Assessments Requiring a Third Rater

                                       Frequency by Section
Competency                            893   1355   1381   1538   2521   4553
Identification [Communication]          1      0      3      0      1      1
Research [Synthesis]                    0      1      4      0      1      2
Analysis [Analysis]                     1      0      3      0      1      1
Decision Making [Problem Solving]       1      0      3      0      1      1
Evaluation [Evaluation]                 2      1      3      0      1      1
Reflection [Reflection]                 3      1      3      0      1      1
# of Individual Assessments
  Requiring a Third Rater (n=15)        6      2      4      0      1      2
Fifteen individual papers (14.3%) required a third rater on one or more competencies.
Of those, seven papers required a third rater on only one of the six competencies,
three papers required it for two competencies, and five papers required a third rater
on all six competencies. The Reflection competency required a third scorer most
frequently, with nine papers needing one. It is worth noting that only one section,
#1538 (12 papers, 11.4% of those assessed), did not have a single paper that required
a third rater. On the other hand, section #1381 had four papers that required a third
rater, three of which required it on all six competencies. By analyzing these
rater-consistency results, Ethics faculty can identify weaknesses on both the student
and faculty sides and identify opportunities to improve various aspects of the
assignment.
Student scores for each competency were created by averaging the rubric scores from
two or three raters, depending on the level of agreement between the first two raters.
The results of the 105 scored assessments show an overall mean score of 13.5 out of a
highest possible 24, with a standard deviation of 4.7. The highest overall score for
any student assessment was 24, as shown in Table 3.
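As an illustration of this averaging step, the following Python sketch uses
hypothetical data (the report does not describe how the ratings were stored) to show
how two-rater and three-rater averages produce the half-point and third-point scores
that appear later in Table 13.

```python
# Hypothetical layout: ratings keyed by (paper ID, competency), each holding
# the two or three rubric scores (0-4) recorded for that competency.
from statistics import mean

ratings = {
    (1, "Reflection"): [2, 3],     # two raters within one point of each other
    (2, "Reflection"): [1, 3, 3],  # third rater added after a two-point gap
}

student_scores = {key: round(mean(vals), 2) for key, vals in ratings.items()}
# {(1, 'Reflection'): 2.5, (2, 'Reflection'): 2.33}
# Two-rater averages land on x.0 or x.5; three-rater averages produce the
# x.33 and x.67 values discussed with Table 13.
```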
Table 3
Overall Scores

Competency                          Total   Mean   Standard Deviation   Minimum   Maximum
Identification [Communication]       105     2.4          1.1             0.0       4.0
Research [Synthesis]                 101     2.6          0.9             0.5       4.0
Analysis [Analysis]                  101     2.5          0.9             0.0       4.0
Decision Making [Problem Solving]    100     2.4          0.9             1.0       4.0
Evaluation [Evaluation]              100     2.1          0.9             0.0       4.0
Reflection [Reflection]               96     2.1          1.1             0.0       4.0
Total                                105    13.5          4.7             3.5      24.0
In overall competency scores, Synthesis had the highest mean score (2.6) followed by
Analysis (2.5) and Communication and Problem Solving (2.4). The two lowest mean
scores were in Evaluation and Reflection (2.1). Although further analysis is needed to
determine any significance, it is worth noting that mean scores increased in all six
competencies between fall 2010 and fall 2011.
Overall Totals by Section
Section 1381 (15.4) and Section 1355 (15.1) had the highest overall scores for the ARC
as shown in Table 4. This was followed by Section 1538 (13.2) and section 2521 (13.1).
The highest overall score on the ARC was from a student in Section 1381 (24.0).
Table 4
Overall Totals by Section

Section   Total   Mean   Standard Deviation   Minimum   Maximum
893        27     12.2          4.8              4.5      21.5
1355        8     15.1          3.7             11.0      22.5
1381       24     15.4          4.8              5.5      24.0
1538       12     13.2          3.0              8.0      19.5
2521       11     13.1          5.9              3.5      20.5
4553       23     13.0          4.4              5.5      20.0
Results by Section
Results for each of the Ethics sections can be found in Tables 5 through 10 in Appendix G.
Validity and Reliability
Validity and reliability can be evaluated in numerous ways. For the purposes of this
report, we evaluated the ARC’s validity and reliability as it relates to this single
administration. The validity established from the development of the instrument by
content experts is discussed earlier in the document.
At the end of the ARC scoring workshop, participants were provided evaluation sheets
to express their thoughts on the validity and reliability of the ARC. A total of six raters
completed an evaluation. Table 11 contains the results of these evaluations. A copy of
the ARC Validity and Reliability Form is located in Appendix E.
Table 11
ARC Validity and Reliability Results

Validity                                                                          N    Yes (%)

Consequences (Focus: The effects of the assessment)
1. Is the assessment likely to produce results that will be used to improve
   instructional programs or otherwise improve student learning?                 6     100%

Content Coverage (Focus: Comprehensiveness of assessment content)
2. Does the assessment comprehensively cover the content and processes assessed? 6     100%
3. Is the content covered in sufficient breadth and depth?                       6     100%
4. Does the assessment represent important (not trivial) components of the
   content?                                                                      6     100%
5. Together, will the assessments provide sufficient evidence about the content? 6     100%

Content Quality (Focus: Consistency with current content conceptualization)
6. Is the assessment consistent with the best available conceptualization of
   the knowledge or skill assessed?                                              5     100%
7. Does the assessment represent current, rather than outdated, perspectives?    6     100%

Transfer and Generalizability (Focus: Whether assessment is representative of a larger domain)
8. Can the assessment results be generalized to the broader domain (knowledge,
   skill, or learning outcome) they are intended to represent?                   4     100%

Cognitive Complexity (Focus: Whether level of knowledge assessed is appropriate)
9. Do the assessment tasks or questions represent the cognitive complexity of
   the knowledge or skill that it is intended to assess? (For example, if an
   outcome includes higher-order or critical thinking skills, such as problem
   solving or synthesis, does the assessment measure them?)                      6     100%
10. Does the assessment actually require students to use higher-level knowledge
    or skills, or can students simply respond from memory without having to
    think?                                                                       6      83%

Meaningfulness (Focus: The relevance of the assessment in the minds of students)
11. Are assessment items or tasks meaningful to students?                        6     100%
12. Is the assessment relevant to problems students will encounter again in
    school, work, or daily living?                                               6     100%
13. Does the assessment provide students with worthwhile or meaningful
    experiences?                                                                 6     100%

Fairness (Focus: Fairness to members of all groups)
14. Is the assessment biased against students who are members of various
    racial, ethnic, and gender groups or students with disabilities? Does it
    contain stereotypes of any groups? [Note: inversely stated item]             6       0%
15. Do students of similar ability, regardless of group membership, score the
    same?                                                                        2     100%

Cost and Efficiency (Focus: The practicality or feasibility of an assessment)
16. Is the assessment a reasonable burden on teachers, instructional time, and
    finances?                                                                    5      80%
17. Is resulting information worth the required costs in money, time, and
    effort?                                                                      6      83%

Reliability                                                                      N    Yes (%)
1. Are the score categories well defined?                                        4      75%
2. Are the differences between the score categories clear?                       4      25%
3. Would two independent raters arrive at the same score for a given student
   response based on the scoring rubric?                                         4     100%

Linn, R. L., Baker, E. L., & Dunbar, S. B. (1991). Complex, performance-based assessment: Expectations and validation criteria. Educational Researcher, 20(8), 15-23.
Of the six raters who completed the form, most responded to every question. A
majority of the seventeen validity items received perfect (100%) agreement from the
participants. An exception was item #15, which addressed fairness to members of all
groups. In discussion, participants indicated that they felt unprepared to answer the
question without supporting data, so most chose not to respond. This item, along with
item #14 (which was inversely stated, so the intended response was a “no”), will be
reviewed and revised prior to the next scoring session to eliminate any confusion.
Four of the six raters responded to the three reliability items; the other two raters
left the items blank. One item had a perfect score (100%); one had a majority of
respondents agree (75%); and the remaining item, on whether the differences between
the score categories are clear, had only 25% of participants respond “yes.” Some
follow-up may be warranted to address why most participants felt the score categories
were not clearly differentiated.
At the bottom of the form, participants also had the opportunity to provide feedback
on the scenario. Here are a few of the comments received:

• It's very good, however, the differences between 3 and 4 are somewhat subjective
  and the differences between 2 and 3 might be a little too much.
• I found it difficult in rating same papers because the response appeared to be
  between 3 points and 2 points.
• Item #15 is beyond my purview, lacking that data.
• Item #15, based on my estimate of the workshop, my answer would be "yes," though I
  understand that we do not have hard demographics for this workshop.
The qualitative information provided by the participants at the end of the form will be
helpful for faculty in evaluating the quality of their scenario and their scenario’s
alignment to the ARC.
Using the student scores, correlations were calculated between the six rubric
competencies to establish the strength of the relationships. These correlations are
provided in Table 12.
Table 12
Item Correlations between Competencies

                                    Identification   Research     Analysis    Decision Making    Evaluation    Reflection
                                    [Communication]  [Synthesis]  [Analysis]  [Problem Solving]  [Evaluation]  [Reflection]
Identification [Communication]           1.0
Research [Synthesis]                      .5             1.0
Analysis [Analysis]                       .6              .6         1.0
Decision Making [Problem Solving]         .4              .6          .6            1.0
Evaluation [Evaluation]                   .4              .4          .6             .6             1.0
Reflection [Reflection]                   .5              .4          .4             .5              .5            1.0

Note: Correlations of .4 represent the range .370-.445; correlations of .5 represent the range .450-.543; correlations of .6 represent the range .556-.619.
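For readers who wish to reproduce this kind of correlation matrix, a minimal Python
sketch follows. The column names and sample values are hypothetical, and the report
does not state which software produced Table 12.

```python
# Pairwise Pearson correlations between competency scores (illustrative data).
import pandas as pd

scores = pd.DataFrame({
    "Identification": [2.5, 3.0, 1.5, 2.0, 3.5],
    "Research":       [2.0, 3.5, 1.0, 2.5, 3.0],
    "Analysis":       [3.0, 3.0, 1.5, 2.0, 4.0],
})
# DataFrame.corr() computes correlations pairwise and ignores missing values,
# which matters here because not every paper had every competency scored.
print(scores.corr().round(2))
```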
The correlations range from a low of 0.37 between Identification and Evaluation to a
high of 0.62 between Problem Solving and Evaluation. The percentage of average
student scores by competency is provided in Table 13. Non-integer scores (e.g., x.33,
x.50, x.67) were not aligned with descriptors on the rubric and are the result of
averaging scores between raters. The mode, or most commonly occurring score, was 2.0.
Table 13
Percentage of Average Student Scores by Competency

Average   Identification   Research     Analysis    Decision Making    Evaluation    Reflection
Student   [Communication]  [Synthesis]  [Analysis]  [Problem Solving]  [Evaluation]  [Reflection]
Score           %               %            %              %               %             %
0.0            1.0             0.0          1.0            0.0             1.0           4.2
0.5            5.7             2.0          1.0            0.0             2.0           6.3
1.0            9.5             4.0          5.0            8.0            19.0          18.8
1.3            0.0             0.0          0.0            1.0             1.0           0.0
1.5           11.4             7.9          9.9           14.0            12.0           8.3
1.7            0.0             1.0          0.0            0.0             1.0           0.0
2.0           16.2            23.8         27.7           27.0            26.0          12.5
2.3            0.0             1.0          0.0            0.0             0.0           0.0
2.5           12.4            17.8         13.9           14.0            12.0          20.8
2.7            0.0             0.0          2.0            0.0             0.0           2.1
3.0           18.1            16.8         21.8           14.0            15.0           9.4
3.3            1.9             1.0          0.0            0.0             1.0           0.0
3.5           10.5            13.9          8.9           14.0             5.0          12.5
3.7            0.0             0.0          0.0            0.0             0.0           1.0
4.0           13.3            10.9          8.9            8.0             5.0           4.2
Total          105             101          101            100             100            96
(N=105)
It was difficult to calculate inter-rater reliability due to the structure of the
data: with 15 raters and only two to three ratings per student, a traditional analysis
leaves a considerable amount of missing data in the rater-by-student matrix.
Therefore, reliability was computed in terms of internal consistency, and the 105
assessments had a standardized Cronbach’s alpha of 0.87. This is sufficient
considering that there are only six items in the rubric, and it exceeds the suggested
value of 0.70 provided by Nunnally and Bernstein (1994).
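For reference, standardized alpha can be computed from the average inter-item
correlation, as in the sketch below. The sketch is illustrative; the report does not
identify the software used to produce the reported value.

```python
# Standardized Cronbach's alpha from the mean off-diagonal item correlation.
import numpy as np

def standardized_alpha(items: np.ndarray) -> float:
    """items: rows are students, columns are the six rubric competencies."""
    k = items.shape[1]
    corr = np.corrcoef(items, rowvar=False)
    r_bar = (corr.sum() - k) / (k * (k - 1))  # mean off-diagonal correlation
    return (k * r_bar) / (1 + (k - 1) * r_bar)

# With six items and an average inter-item correlation near .5 (Table 12),
# alpha lands in the high .8s, consistent with the reported 0.87.
```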
Conclusion
The ARC results were analyzed from both a quantitative and a qualitative perspective
to establish the quality, reliability, and validity of the assessment instrument.
Based on these validation results, additional refinements and modifications may be
made to the instrument to ensure the quality of the standardized assessment. Rubric
results will be reevaluated after each administration, as assessment development and
validation is intended to be an ongoing, dynamic process.
The second major requirement in meeting the accreditation standards of the Southern
Association of Colleges and Schools (SACS) is a quality enhancement plan (QEP). The
QEP addresses a significant issue related to student learning, is faculty-driven, and
has broad-based involvement. Critical thinking has been the QEP focus at SPC. The ARC
will assist the institution as one of multiple measures assessing SPC’s ability to
carry out the QEP.
When surface-level comparisons were made against the performance data from fall
2010, students in fall 2011 performed better overall on all six competencies. The
highest mean score in fall 2011 was 2.59 on the Synthesis competency, compared to
2.26 in fall 2010, which was also the highest mean score that year. Fall 2011 students
had a mean score of 2.43 on the lower-order competency Communication, where the
mean score for the same competency was 2.26 in fall 2010. As in the prior year,
Evaluation and Reflection ranked lowest among the competency means. Although
students appear to have continued to experience difficulty in these areas from fall
2010 to fall 2011, the means in both of these higher-order competencies also increased
in fall 2011. Additional analysis is needed to determine the significance, if any, of
the differences in scores between the two years.
While there were some differences between the scores by section in fall 2011, as
indicated in the results (range of means between 12.2 and 15.4), it is not easy to
determine if these differences were related to student differences such as the length
of tenure at the college. Various suggestions have been made to improve the process.
These include capturing student demographic data, as well as standardizing the
scenario used for the institutional assessment.
Despite these limitations, the ARC process has been beneficial to the college. The
faculty who participated in the training and had the opportunity to utilize the scoring
rubric will have transferable skills they can use in the future with their students. The
administrators and faculty who conducted the training are able to continue to provide
professional development to faculty. The continued use of quantifiable instruments to
gauge the implementation of the critical thinking initiative is another example of
SPC’s Institutional Effectiveness model for continuous improvement.
Appendix A: ARC Rubric with Modifications by Ethics Department
APPLYING THE CRITICAL THINKING MODEL
Rubric for the Critical Thinking Application Paper
Applied Ethics/St. Petersburg College
Fall Session 2011 0445
NOTE: Chapter 4 “Critical Thinking” in your textbook has a detailed explanation of the critical
thinking model and how to apply it.
1. Identification (10 points possible)
Identify the central ethical issue present in the case. What is the main ethical problem Augustine must resolve? Furthermore, you must identify as many OTHER ethical issues, questions, or problems as you can find in the scenario. Distinguish the central issue from the others you identified. Use details and examples to communicate and explain your response. Be sure to focus on and apply the critical thinking model to the central issue throughout the rest of this paper.

Rubric: The following is a rubric. A rubric tells you how you will be graded for this assignment. For example, in this section you can earn 8-10 points for identifying ethical ideas or issues with numerous supporting details and examples which are organized logically and coherently.

4 points: Identifies and distinguishes the central and secondary ethical ideas or issues with numerous supporting details and examples which are organized and communicated logically and coherently
3 points: Identifies and distinguishes the central and secondary ethical ideas or issues with some supporting details and examples communicated in an organized manner
2 points: Identifies and distinguishes the central ethical issue and some secondary issues with few details or examples; communicated in a somewhat organized manner
1 point: Identifies and distinguishes the central ethical issue poorly; communicated with almost no details, with little organization, and lacking recognition of secondary issues
0 points: Does not identify the central ethical issue and/or includes no other ethical issues

2. Research (10 points possible)
Gather information relevant to the central ethical issue. Use a minimum of three outside sources to gain a better understanding of the issue and potential solutions/options. Explain the relevance of information found. (Your instructor will provide specific details regarding appropriate sources and citation format. You can also check with a college librarian for help with research and citations.)

4 points: Insightfully relates concepts and ideas from multiple sources; uses new information to better define issue and identify options; recognizes missing information; correctly identifies potential effects of new information
3 points: Accurately relates concepts and ideas from multiple sources; uses new information to better define issue and identify options; correctly identifies potential effects of new information
2 points: Inaccurately or incompletely relates concepts and ideas from multiple sources; shallow determination of effect of new information; or limited sources
1 point: Poorly integrates information from more than one source; incorrectly predicts the effect of new information
0 points: Does not identify new information to support final solution

3. Analysis (15 points)
Compare and contrast available solutions/options relevant to the central ethical issue. Using logical (inductive or deductive) moral reasoning, clearly explain the ethical implications the potential solutions/options may have on the stakeholders.

4 points: Uses specific inductive or deductive moral reasoning to make inferences regarding premises; addresses implications and consequences for the stakeholders; identifies facts and morally relevant information correctly
3 points: Uses logical reasoning to make inferences regarding options; addresses implications and consequences for the stakeholders; identifies facts and morally relevant information correctly
2 points: Uses superficial reasoning to make inferences regarding options; major stakeholder(s) missing; shows some confusion regarding facts, opinions, and morally relevant evidence, data, or information
1 point: Makes unexplained, unsupported, or unreasonable inferences regarding options; irrelevant stakeholders identified; makes multiple errors in distinguishing fact from fiction or in selecting morally relevant evidence
0 points: Does not analyze multiple solutions/options; major stakeholders not identified

4. Application [Not part of the ARC process] (30 points)
Apply two ethical theories to reach a resolution of the central ethical issue. What would the central principles of each theory imply is the morally right or best course of action or option?
A. Apply one (1) consequential theory (Act or Rule Utilitarianism). 15 pts.
B. Apply one (1) non-consequential theory (Deontology (Kant), Contractarianism, Natural Rights, Natural Law, or Virtue Ethics). 15 pts.
For each (A and B) you are to resolve the central ethical issue using the central principles of the theory. A brief summary of the theory should be included, and you are encouraged to use the “Steps in Applying” the theories presented in Chapters 5 and 6.

4 points: Central principles of the theories are logically and systematically explained and applied to the central ethical issue to reach a resolution of the main problem
3 points: Central principles of the theories are explained and applied, but may not be logically consistent or applied to specifics of case
2 points: Applications of the central principles of the theories and the summary may be shallow, cursory, or too general
1 point: Applications of the central principles of the theories and the theory summaries are either missing or are not connected to the central issue and options in the case
0 points: Does not apply central principles to reach a resolution of main issue; summary of theory missing

5. Decision-Making (10 points)
Choose the wisest, most ethical option and justify your decision. This is NOT an opinion. Using your research and analysis of the options and stakeholders and your applications of the ethical theories, laws, and rules, select and defend the morally right (or most ethical) resolution to the central ethical issue. Using facts and relevant evidence from your research and analysis, thoroughly explain why this is the best solution.

4 points: Thoroughly identifies and addresses key aspects of the issue and insightfully uses facts and relevant evidence from analysis to support and defend potentially valid solution
3 points: Identifies and addresses key aspects of the issue and uses facts and relevant evidence from analysis to develop potentially valid conclusion or solution
2 points: Identifies and addresses some aspects of the issue; develops possible conclusion or solution using some inappropriate opinions and irrelevant information from analysis
1 point: Identifies and addresses only one aspect of the issue but develops untestable hypothesis; or develops invalid conclusions or solutions based on opinion or irrelevant information
0 points: Does not select and defend a solution

6. Evaluation (10 points)
Identify and provide a minimum of three counterarguments against the option that you selected as being morally right (or ethically best). What are the possible arguments against the resolution/option you chose? How would you defend against those arguments?

4 points: Insightfully interprets data or information; identifies obvious as well as hidden assumptions; establishes credibility of sources on points other than authority alone; avoids fallacies in reasoning; distinguishes appropriate arguments from extraneous elements; provides sufficient logical support
3 points: Accurately interprets data or information; identifies obvious assumptions; establishes credibility of sources on points other than authority alone; avoids fallacies in reasoning; distinguishes appropriate arguments from extraneous elements; provides sufficient logical support
2 points: Makes some errors in data or information interpretation; makes arguments using weak evidence; provides superficial support for conclusions or solutions
1 point: Interprets data or information incorrectly; supports conclusions or solutions without evidence or logic; uses data, information, or evidence skewed by invalid assumptions; uses poor sources of information; uses fallacious arguments
0 points: Does not evaluate data, information, or evidence related to best option

7. Reflection (5 points)
Reflect on your own thought process. What did you learn from this process? What could you do differently next time to improve the problem-solving process?

4 points: Identifies strengths and weaknesses in own thinking: recognizes personal assumptions, values, and perspectives, compares to others’, and evaluates them in the context of alternate points of view
3 points: Identifies strengths and weaknesses in own thinking: recognizes personal assumptions, values, and perspectives, compares to others’, with some comparison of alternate points of view
2 points: Identifies some personal assumptions, values, and perspectives; recognizes some assumptions, values, and perspectives of others; shallow comparisons of alternate points of view
1 point: Identifies some personal assumptions, values, and perspectives; does not consider alternate points of view
0 points: Does not reflect on own thinking

8. Writing/Composition [Not part of the ARC process] (10 points)
Remember that this is a Gordon Rule writing assignment; your paper must be at least 2,000 words long. Ten points of your grade will be based on the writing skills you demonstrate in the paper. So organize your thoughts carefully, explain them clearly, and proofread carefully for errors in grammar and spelling.

10-8 points: Writing is clear, coherent, and well-organized; very few grammar or spelling errors; format meets college standards; all sources are cited
7-4 points: Overall writing is acceptable, but clear weaknesses in organization, clarity, grammar, or spelling; format is acceptable; some sources are cited
3-0 points: Writing is unacceptable; poor organization, meanings are not clear, and/or numerous errors in grammar or spelling; format is poor; no sources cited

NOTE: Make sure to review the “Instructor Guidelines for Success” and the information on Gordon Rule assignments that your instructor has also provided.
Appendix B: SPC’s Assessment Rubric for Critical Thinking (ARC) Scoring Template

Paper ID:                     Date Scored:
Name of Scorer 1:
Name of Scorer 2:
Name of Scorer 3:

Each performance element below is scored on the rubric scale (4 / 3 / 2 / 1 / 0 / NA); the template provides one score column for each of the three scorers.

I. Identification [Communication]: Define problem in your own words.
II. Research [Synthesis]: Suggest ways to improve/strengthen your final solution.
III. Analysis [Analysis]: Compare & contrast the available solutions.
IV. Decision-Making [Problem Solving]: Select & defend your final solution.
V. Evaluation [Evaluation]: Identify weaknesses in your final solution.
VI. Reflection [Reflection]: Reflect on your own thought process.
   • “What did you learn from this process?”
   • “What would you do differently next time to improve?”

General Comments:
Appendix C: ARC Scoring Workshop Agenda
ARC Scoring Workshop Agenda
Date: Friday, November 18, 2011
Time: 8:00 – 5:00
Location: District Office – consular conference room
Agenda:
8:00 – 8:30     Meet & Greet and breakfast munchies
8:30 – 9:15     Introductions and ARC Presentation
9:15 – 10:15    Training Session
10:15 – 10:30   Break
10:30 – 12:00   Scoring Session
12:00 – 1:00    Lunch (Provided)
1:00 – 3:00     Scoring Session
3:00 – 3:15     Break
3:15 – 4:45     Scoring Session
4:45 – 5:00     Wrap Up Discussion & ARC Validity and Reliability Form
Appendix D: ARC Scenario
St. Petersburg College Applied Ethics Program
Critical Thinking & Application Paper
Fall Session 2011 0445
Instructions: Please read the following case study; then follow the instructions on the document
entitled “Applying the Critical Thinking Model”. Instead of writing one, long traditional essay based
on the questions, reflect on each question and then answer each question individually, giving a
detailed and thorough answer for each question. Please use the document entitled “CTAP Help
Sheet and Template” as a guide in preparing your assignment.
Success in a Bottle
Augustine McDaimen has not yet recovered from his high school senioritis. During his first years of
high school he made all A’s and B’s, and after taking some honors classes ended up with a pretty
high GPA. Senior year he found it more difficult to concentrate, spending a lot more time with his
friends and losing his motivation. When graduation finally arrived, Augustine felt relieved to have
made it through, and liberated to be moving on to college.
Augustine chose to go to Monrose College with a group of friends, and together they rented a
house off campus. Augustine always wanted to get into engineering, and he knew he was smart
enough but he was off to a slow start. Habits he had started in high school were not helping. He
stayed up most of the night, occasionally drinking and using marijuana. Now that he was away
from home, it was easier to fall into bad habits. He was even engaging in occasional binge drinking.
Things seemed to be getting out of hand. Augustine was not focusing on his school work. Before
he knew it, midterms were around the corner.
A week later he had an important history assignment due that he had put off. He was
having trouble focusing and was afraid he would not get it done. His friend Ralph
offered him an Adderall 30 mg. Augustine hesitated at first, but then decided that it
might be his best option if he wanted to get his assignment done. He felt good, and
was able to focus and get the assignment
done on time. For the next couple of days Augustine didn’t feel quite like himself, and his friends
mentioned that he was acting differently. He didn’t think much of it and kept partying and slacking
as usual.
Several weeks later it was time for midterms. Augustine had not kept on top of the material and
was definitely not ready. He also had a group project and a 2000 word paper due in the space of
two weeks. If he wanted to keep his scholarship, he had to keep his GPA over 3.0.
Ralph was down the hall. He had an Adderall 30 mg prescription for his ADHD, but
rarely used all his pills and would stockpile them around midterms and finals.
Central ethical issue to resolve: What should Augustine do?
Appendix E
ARC Validity and Reliability Form

Rater (scorer) Name: _________________________________ Date: _________________

Validity (Yes / No)

Consequences
Focus: The effects of the assessment
1. Is the assessment likely to produce results that will be used to improve instructional programs or otherwise improve student learning?

Content Coverage
Focus: Comprehensiveness of assessment content
2. Does the assessment comprehensively cover the content and processes assessed?
3. Is the content covered in sufficient breadth and depth?
4. Does the assessment represent important (not trivial) components of the content?
5. Together, will the assessments provide sufficient evidence about the content?

Content Quality
Focus: Consistency with current content conceptualization
6. Is the assessment consistent with the best available conceptualization of the knowledge or skill assessed?
7. Does the assessment represent current, rather than outdated, perspectives?

Transfer and Generalizability
Focus: Whether assessment is representative of a larger domain
8. Can the assessment results be generalized to the broader domain (knowledge, skill, or learning outcome) they are intended to represent?

Cognitive Complexity
Focus: Whether level of knowledge assessed is appropriate
9. Do the assessment tasks or questions represent the cognitive complexity of the knowledge or skill that it is intended to assess? (For example, if an outcome includes higher-order or critical thinking skills, such as problem solving or synthesis, does the assessment measure them?)
10. Does the assessment actually require students to use higher-level knowledge or skills, or can students simply respond from memory without having to think?

Meaningfulness
Focus: The relevance of the assessment in the minds of students
11. Are assessment items or tasks meaningful to students?
12. Is the assessment relevant to problems students will encounter again in school, work, or daily living?
13. Does the assessment provide students with worthwhile or meaningful experiences?

Fairness
Focus: Fairness to members of all groups
14. Is the assessment biased against students who are members of various racial, ethnic, and gender groups or students with disabilities? Does it contain stereotypes of any groups?
15. Do students of similar ability, regardless of group membership, score the same?

Cost and Efficiency
Focus: The practicality or feasibility of an assessment
16. Is the assessment a reasonable burden on teachers, instructional time, and finances?
17. Is resulting information worth the required costs in money, time, and effort?

Reliability (Yes / No)
1. Are the score categories well defined?
2. Are the differences between the score categories clear?
3. Would two independent raters arrive at the same score for a given student response based on the scoring rubric?

Linn, R. L., Baker, E. L., & Dunbar, S. B. (1991). Complex, performance-based assessment: Expectations and validation criteria. Educational Researcher, 20(8), 15-23.

Please provide feedback on the ARC Scenario Assignment:
Appendix F: ARC Workshop Evaluation
ARC Scoring Workshop 2011 Evaluation
This survey is anonymous, so feel free to be candid. Positive feedback, as well as
constructive criticism, is appreciated.
1. What is your position? (Check all that apply)
Full Time Faculty
Part Time Instructor
Staff/Administrator
2. What was your major purpose for participating? (Check all that apply)
Better understanding of critical thinking
Better understanding of assessment of critical thinking
Interaction with faculty in other disciplines
Better understanding of rubrics for grading
Classroom strategies for promoting critical thinking
Other
3. The facilitator was knowledgeable and well-organized.
Strongly Agree
Agree
Disagree
Strongly Disagree
n/a
4. The information will be useful in my classroom or work.
Strongly Agree
Agree
Disagree
Strongly Disagree
n/a
5. Participation in the ARC Scoring Workshop will impact my teaching.
Strongly Agree
Agree
Disagree
Strongly Disagree
n/a
6. After attending this workshop, I have a better understanding of the ARC
assessment process.
Strongly Agree
Agree
Disagree
Strongly Disagree
n/a
7. After attending this workshop, I feel I can build a new scenario or improve upon
an existing scenario that is closely aligned with the ARC rubric.
Strongly Agree
Agree
Disagree
Strongly Disagree
n/a
8. Overall, what was the most important or valuable thing you learned?
9. What would you like to know more about (relating to the ARC)?
10. I would recommend others attend the ARC Scoring Workshop.
Strongly Agree
Agree
Disagree
Strongly Disagree
n/a
11. What are your recommendations for improving future ARC Scoring Workshops?
12. What aspect(s) of SPC's definition of critical thinking do you find the most
difficult to assess? (Check all that apply)
Communication
Problem Solving
Evaluation
Analysis
Synthesis
Reflection
If you have any questions or comments about this survey, please contact Janice Thiel at
Thiel.Janice@SPCollege.edu or call (727) 341-3110
Appendix G: Assessment Results by Ethics Section
Table 5
Results for Section 893

Competency                          Total   Mean   Standard Deviation   Minimum   Maximum
Identification [Communication]        27     2.3          1.2             0.0       4.0
Research [Synthesis]                  24     2.5          0.8             1.0       3.5
Analysis [Analysis]                   25     2.3          0.7             1.5       4.0
Decision Making [Problem Solving]     24     2.4          0.9             1.0       4.0
Evaluation [Evaluation]               24     2.0          1.0             0.0       4.0
Reflection [Reflection]               23     1.8          1.0             0.0       3.5
Total                                 27    12.2          4.8             4.5      21.5

Table 6
Results for Section 1355

Competency                          Total   Mean   Standard Deviation   Minimum   Maximum
Identification [Communication]         8     2.8          0.8             1.5       4.0
Research [Synthesis]                   8     2.9          1.0             1.5       4.0
Analysis [Analysis]                    8     2.9          0.7             1.5       4.0
Decision Making [Problem Solving]      8     2.2          0.7             1.5       3.5
Evaluation [Evaluation]                8     2.0          0.6             1.3       3.5
Reflection [Reflection]                8     2.4          0.9             1.0       3.5
Total                                  8    15.1          3.7            11.0      22.5

Table 7
Results for Section 1381

Competency                          Total   Mean   Standard Deviation   Minimum   Maximum
Identification [Communication]        24     3.0          0.8             1.0       4.0
Research [Synthesis]                  24     2.7          0.9             0.5       4.0
Analysis [Analysis]                   24     2.4          0.9             0.0       4.0
Decision Making [Problem Solving]     24     2.7          0.9             1.0       4.0
Evaluation [Evaluation]               24     2.2          1.1             0.5       4.0
Reflection [Reflection]               23     2.5          1.3             0.0       4.0
Total                                 24    15.4          4.8             5.5      24.0

Table 8
Results for Section 1538

Competency                          Total   Mean   Standard Deviation   Minimum   Maximum
Identification [Communication]        12     2.4          1.2             0.5       4.0
Research [Synthesis]                  12     2.2          0.7             1.5       3.5
Analysis [Analysis]                   12     2.6          0.6             2.0       3.5
Decision Making [Problem Solving]     12     2.2          0.4             1.5       3.0
Evaluation [Evaluation]               12     2.0          0.6             1.0       2.5
Reflection [Reflection]               10     2.3          1.0             1.0       3.5
Total                                 12    13.2          3.0             8.0      19.5

Table 9
Results for Section 2521

Competency                          Total   Mean   Standard Deviation   Minimum   Maximum
Identification [Communication]        11     2.8          1.0             1.0       4.0
Research [Synthesis]                  11     2.7          1.0             0.5       4.0
Analysis [Analysis]                    9     2.9          1.0             1.0       4.0
Decision Making [Problem Solving]      9     2.2          1.1             1.0       4.0
Evaluation [Evaluation]               10     2.2          1.1             1.0       4.0
Reflection [Reflection]                9     1.7          0.9             0.5       3.0
Total                                 11    13.1          5.9             3.5      20.5

Table 10
Results for Section 4553

Competency                          Total   Mean   Standard Deviation   Minimum   Maximum
Identification [Communication]        23     1.7          0.9             0.5       3.5
Research [Synthesis]                  22     2.6          0.8             1.0       4.0
Analysis [Analysis]                   23     2.3          1.0             1.0       4.0
Decision Making [Problem Solving]     23     2.4          0.9             1.0       4.0
Evaluation [Evaluation]               22     2.1          0.7             1.0       3.0
Reflection [Reflection]               23     2.0          1.0             0.0       4.0
Total                                 23    13.0          4.4             5.5      20.0