2014-2015 Assessment Report

GE Assessment Report 2012-15
History
The discussion about GE assessment started with the announcement of the Summer
Institute organized by the Institute for Teaching and Learning, Office of the Chancellor,
California State University, in Summer 2012. The CSUN team comprised Bonnie
Paller, Beth Lasky, Sharon Klein, Eric Garcia, and Anu Thakur. The team attended a
variety of sessions focused on assessment strategies that can be implemented for GE –
scalability, rubrics, signature assignments, etc. (see the attached CSUN 2012 Summer
Institute Report written by Beth Lasky – Appendix A).
After the institute, Anu wrote a draft strategic plan for GE assessment and developed a
timeline (much of which was discussed at the summer institute with Mary Allen, who is
highly revered in assessment circles). In addition, the team agreed to develop a set of
learning outcomes for the GE program. This was done for two reasons: 1) if we were to
take the SLOs from each GE section and consider that list the SLOs for the program, we
would have over 60 SLOs for the program; and 2) the overall program’s SLOs should
ideally be more overarching than a compilation of the SLOs for each section. The team
agreed, again after consultation with experts we met at the summer institute, to
compile SLOs for the GE program (GELOs) based on the goals for each section. Section
goals were rewritten as SLOs. Several revisions were made within the team, mostly
before the start of Fall 2012. These were presented to the VP of Undergraduate Studies
and the GE Council.
Regarding the timeline developed for GE sections, we attempted to get a copy of the GE
recertification cycle but were only able to identify that the GE Comparative Cultural
Studies (CCS) section was scheduled for recertification in 2013-14. This is why the CCS
section was the first selected to participate in the assessment.
A questionnaire was developed to invite participation from faculty teaching within GE
sections for the assessment.
The Process
This is an outline of the process Beth Lasky and Anu Thakur used to coordinate the GE
CCS and Critical Thinking assessment groups. Specifics of the process within each section
will be detailed later in this report.
Step 1: Email the faculty questionnaire to department chairs with a request to forward
it to faculty teaching a course in the identified GE section
Step 2: Beth Lasky receives and reviews faculty responses to the questionnaire and
identifies a diverse group of faculty (different colleges, different faculty ranks) from
among those who expressed interest in participating in the assessment
Step 3: Beth Lasky and Anu Thakur meet with the identified faculty group for an informal
“faculty development” session, essentially facilitating a discussion about the strategies
each faculty member implements in their courses for teaching and assessing the student
learning outcomes associated with the GE section in which they teach. The session also
included discussion of possible assessment strategies the group could consider as
they plan their coordinated assessment of the GELOs in their section.
Step 4: Teams agreed on an assessment strategy to use (CCS – signature assignment and
analytic rubric, CT – pre- and post-testing evaluated using a holistic rubric)
Step 5: Regular meetings of teams to discuss progress and implementation of
assessment strategy
Step 6: Dissemination of findings and experiences
Comparative Cultural Studies
Ashley Samson, Kinesiology
Gigi Hessamian, Communication Studies
Mintesnot Woldeamanuel, Urban Studies and Planning
Nina Golden, Business Law
Beto Gutierrez, Chicana/o Studies (did not continue after 2012-13)
Direct Assessment
At the first meeting of this group, it was determined that all faculty use some form of a
reflection essay in their courses that could serve as a signature assignment. Faculty
agreed that they could develop one rubric to evaluate assignments
from all their classes. This rubric is attached (Appendix B). The faculty met several times
to create and norm this rubric, which was finalized in March 2013. At the April 2013
meeting, each faculty member brought 15 assignments from their class, from which 8
per class were selected randomly. The resulting set of 40 assignments was evaluated using
the rubric – each assignment evaluated by 2 members. Anu Thakur consolidated the ratings
on a spreadsheet and ran basic data analysis.
Unfortunately, there was a high level of discrepancy among the group members’ ratings,
so no substantial conclusions could be drawn. This was attributed largely to the time
lapse (about three weeks) between the norming session and the actual evaluation of
assignments using the rubric.
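To illustrate the kind of reliability check this analysis involved, here is a minimal sketch
(in Python, with invented ratings; the actual spreadsheet analysis is not reproduced here)
that computes percent agreement and Cohen’s kappa for two raters scoring the same papers
on the rubric’s 1-4 scale:

    from collections import Counter

    def percent_agreement(r1, r2):
        # Fraction of papers on which the two raters gave the same score.
        return sum(a == b for a, b in zip(r1, r2)) / len(r1)

    def cohens_kappa(r1, r2):
        # Chance-corrected agreement between two raters.
        n = len(r1)
        observed = percent_agreement(r1, r2)
        c1, c2 = Counter(r1), Counter(r2)
        # Expected agreement if raters scored independently at their base rates.
        expected = sum(c1[k] * c2[k] for k in set(c1) | set(c2)) / (n * n)
        return (observed - expected) / (1 - expected)

    # Invented example: two raters' 1-4 scores on eight papers.
    rater_a = [3, 4, 2, 3, 4, 3, 2, 4]
    rater_b = [2, 4, 3, 3, 3, 4, 2, 3]
    print(f"Percent agreement: {percent_agreement(rater_a, rater_b):.2f}")
    print(f"Cohen's kappa: {cohens_kappa(rater_a, rater_b):.2f}")

A kappa near 0, as in this invented example, indicates agreement no better than chance,
which is consistent with the discrepancy the team observed.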
The team met again in Fall 2014 to norm and apply the rubric once more, with the goal
of completing the process in one meeting. After making minor revisions to the rubric in
the norming session, the team proceeded to rate 8 randomly selected papers from the 15
brought by each of the now 4 group members. The 32 papers were rated
(unfortunately not in the same session). Once again the data analysis revealed high
levels of discrepancy between the ratings of the two members evaluating each paper.
Indirect Assessment
The team also designed a pre- and post-survey to assess student motivation in the GE
section. This indirect method of assessment was implemented with much input from
WASC workshops that Beth Lasky and Anu Thakur attended. Using the IDEA student
motivation survey, the faculty developed a short pre-survey, administered in their
classes at the start of the semester, that essentially asked students about their
motivation for selecting this GE course within the section and their expectations of the
course. The post-survey, administered at the end of the semester, asked students how their
experience with the course material aligned with their expectations and how the course
impacted their perceptions of the topic (see Appendix C). The survey was
administered in Spring 2014 and the results were analyzed in Spring 2015. The preliminary
data analysis from the survey is included in Appendix D. There are some significant findings
to note from this survey about student motivation as well as the impact of taking the
course on student attitudes about the overarching concept of Comparative Cultural
Studies.
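As a rough illustration of how such pre/post Likert responses can be tabulated (a
hypothetical sketch only; the invented numbers below do not come from Appendix D), one
could compute per-item means and the share of favorable responses:

    from statistics import mean

    # Invented 1-5 Likert responses, keyed by survey item (items paraphrase Appendix C).
    pre_responses = {
        "Pre Q3: broader appreciation of cultures": [4, 5, 3, 4, 5, 4, 2, 5],
        "Pre Q9: fit conveniently in my schedule": [5, 4, 4, 3, 5, 5, 4, 2],
    }
    post_responses = {
        "Post Q5: glad I took the course": [5, 4, 4, 5, 3, 5, 4, 5],
        "Post Q7: broader appreciation of cultures": [5, 5, 4, 4, 5, 3, 5, 4],
    }

    for label, answers in {**pre_responses, **post_responses}.items():
        favorable = sum(a >= 4 for a in answers) / len(answers)  # Agree or Strongly Agree
        print(f"{label}: mean {mean(answers):.2f}, favorable {favorable:.0%}")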
Significant Findings
The data from the rubric ratings, although heavily lacking in interrater reliability,
indicate that students perform well on the five criteria – reflection, empathy,
observation, knowledge, and writing and organization (a significant number of the
ratings are 3 or 4 on a scale of 1-4). In the absence of interrater reliability, the team
never reached the stage of identifying benchmarks and evaluating whether they are
satisfied with students’ performance on each criterion.
The survey of students regarding their motivation for taking the GE course provided
good insights into student opinions and attitudes. The start-of-semester survey,
although it clearly contains many idealistic responses, gives insight into the
reasons students select a particular GE course to satisfy the requirement, and the
end-of-semester survey explores students’ opinions of the course and its content. Of
note is that over 84% of students indicated they were glad they took the course, and an
even greater number noted that they had developed a greater appreciation for people
from different cultures.
Dissemination and Take-aways
The team presented their direct assessment work, “GE Rubric/Assessment at CSUN:
Starting Small and Scaling Up,” at the WASC ARC 2014 held in Los Angeles, CA.
This work was also presented at the CSUN Annual Assessment Retreat in May 2013.
In Fall 2013, the team focused on disseminating the rubric they had created across
campus. Beth Lasky approached all department chairs across campus to request
time at their faculty meetings to present this rubric. The idea was that if the rubric
could be widely adopted for the evaluation of reflection assignments across campus,
a consolidation of the data would provide excellent insights into student learning on
the GELOs for the CCS section. While the rubric was very well received by most
faculty across campus, repeated efforts to find out if anyone was using it in their
courses were futile.
The team has adopted the rubric for their respective courses and found the process
very beneficial in their teaching. Some comments from the team:
“As a lecturer here at CSUN, participation in this assessment committee was a
unique experience. Beyond giving me a peek into administrative processes, it
afforded me great insight into the function and implementation of learning
outcomes. I have come through the process an improved teacher and grader.” - GiGi
Hessamian
“As a result of being a part of the committee, I have changed how I teach in all of my
classes. Not only do I now use rubrics for all papers, I post the rubrics so that the
students know exactly what I’m looking for on each assignment. I have found the
experience of creating the rubric invaluable and look forward to using it as a
jumping off point for future research.” – Nina Golden
“Participating in the GE Assessment committee that was in charge of assessing the
“comparative cultures” SLOs greatly enhanced my teaching practices, as well as the
way that I viewed the assessment process in general. I began to utilize the
knowledge that I gained from the process in all of my courses, not just the ones that
are GE. As a result, I am more efficient, structured, and thorough (I feel) in my
course requirements and making sure that they line up more congruently with the
SLO’s in order to really be able to investigate student learning. The students have
also benefited from having a structured rubric in place from the start, so that they
know what they will be evaluated on.” – Ashley Samson
Personal Notes (Anu Thakur)
This was an excellent pilot for the GE assessment process, with much learned
in terms of organizing, coordinating, strategizing, and implementing assessment.
The group did not yield statistically strong data from which to draw significant
conclusions regarding student learning. However, the data from the direct and indirect
assessments shed some light on the effectiveness of the GE CCS section in teaching
the stated SLOs for the section.
Personally, I found that the biggest shortfall in the process was faculty
understanding of the purpose of the process, largely due to
miscommunication from the co-coordinators – one focusing on assessment, the
other on the rubric. This was resolved to a great extent for the next group (Critical
Thinking), where the focus was placed on designing and implementing the process with
assessment as the goal.
Critical Thinking
James Findlay, Religious Studies
Weimin Sun, Philosophy
Aimee Glocke, Africana Studies
Jorge Munoz, Philosophy
With Dr. Bonnie Paller establishing critical thinking assessment efforts in many
sectors across campus, the Critical Thinking (CT) GE section became the next to be
assessed. We followed the same process as with CCS to form a team of four faculty
teaching in the CT section of GE.
The first meeting of the team in April 2014 focused on discussing how each faculty
member interprets and teaches critical thinking through their GE course. Faculty shared
the foundations and skills they focus on in their courses, and identified the following 5
principles and the CT-GELOs these principles align with:
Introduce and Define--SLO #1
Recognize--SLO #3 & #5
Analyze--SLO #3
Evaluate--SLO #1, #4, & #5
Apply- to current issues and problem solving--SLO #2 & #3
At the last meeting in Spring 2014, the team developed prompts for a diagnostic test
to be administered at the start of the semester. Each faculty member would use a passage
relevant to their subject matter, but all would use the same prompts. At the start
of Fall 2014, the team administered the diagnostic test in their classes. The team
met in October 2014 to discuss findings from the diagnostic test and establish
prompts for an end-of-semester assessment. The meeting, however, focused on
discussing challenges faced when administering the agreed-upon diagnostic and on
clearing up discrepancies in group members’ understanding of the process; this was
especially necessary because one of the team members had administered a different
prompt from all the others due to a miscommunication. The group clarified the
diagnostic prompt and developed the end-of-semester assessment prompt to be
administered in Spring 2015. Weimin Sun’s report on his assessment from Fall 2014 is
attached (see Appendix E).
Pre-test prompts:
1) Identify premises and the main conclusion.
2) Show how the premise (or the premises) supports the main conclusion in the
passage.
3) Please state whether the premise (or premises) provides coherent support for
the main conclusion, and why.
4) Are the premises reasonably accepted and why?
Post-test prompts (question 3 changes to ask about validity, and question 4 about soundness):
1) Identify premises and the main conclusion.
2) Show how the premise (or the premises) supports the main conclusion in the
passage.
3) Please state whether the argument is Valid, and why.
4) Please state whether the argument is Sound, and why.
All team members administered the tests in their courses (or requested other
instructors to administer them in their sections) in Spring 2015. Jorge Munoz had
requested another faculty member administer the tests, but only the pre-test was
administered. The meeting at the end of the semester focused on a discussion of
faculty’s preliminary responses to the data from the pre- and post-tests.
The rubric used for rating students’ responses is below. Each of the following
components is rated Missing (0), Partial (1), or Completed (2):
- Introduce/Define
- Recognize/Identify
- Analyze
- Evaluate (strength of argument; quality of premises)
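For illustration, here is a minimal sketch (with invented 0-2 ratings; not the team’s
actual data) of how scores on this rubric could be aggregated per component to compare
pre- and post-test performance:

    from statistics import mean

    COMPONENTS = ["Introduce/Define", "Recognize/Identify", "Analyze", "Evaluate"]

    # Invented ratings: one 0-2 score per student for each component.
    pre = {"Introduce/Define": [1, 0, 1, 2, 0], "Recognize/Identify": [1, 1, 0, 1, 1],
           "Analyze": [0, 1, 0, 0, 1], "Evaluate": [1, 0, 1, 1, 0]}
    post = {"Introduce/Define": [2, 1, 2, 2, 1], "Recognize/Identify": [2, 1, 1, 2, 1],
            "Analyze": [1, 1, 0, 1, 1], "Evaluate": [2, 1, 1, 2, 1]}

    for comp in COMPONENTS:
        gain = mean(post[comp]) - mean(pre[comp])
        print(f"{comp}: pre {mean(pre[comp]):.2f}, post {mean(post[comp]):.2f}, gain {gain:+.2f}")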
Significant Findings
Weimin Sun found that in his course the average scores on each of the four components
went up. However, there was still much room for improvement on all aspects, especially
“analyze”.
“I have found that in general the critical thinking skills have improved across the board,
and the survey questions are appropriate for assessing critical thinking skills. From our
discussions in this group, I find that these assessment tools can be applied effectively in
different disciplines, and can at least serve as a good template for future evaluations.” –
Weimin Sun
Aimee Glocke compiled a thorough report on the assessment from her course (see
Appendix F). Here is an excerpt from her report summarizing the findings from the data:
“Overall, I had 41 students who took both the pre-test and the post-test; 10 students
who only took the pre-test; and 14 students who only took the post-test. After
examining the scores of the 42 students who took both the pre and the post-tests, their
scores did improve. But, when it came to answering the questions on the premise being
reasonably accepted and the validity and soundness of the argument, the students still
answered based on their own personal experiences regardless of what was taught to
them in class. Consequently, it appears as if the students believe that an argument is
sound and valid based on what they have experienced in life. This made deciding
whether the answers were right or wrong difficult since many of the students made very
convincing arguments for both sides.”
Other faculty have not completed analysis of their data and we still need to consolidate
all data and identify overall trends. At first glance it appears that students are making
progress over the course of the semester on each of the four aspects of critical thinking.
Personal Thoughts
We have not yet run statistics on this data to analyze whether the improvements
that faculty claim are statistically significant. It does appear that there is room for
improvement, as many students are not achieving the highest ranking. The next step of
the process would be to consolidate data from all faculty and as a group identify
benchmarks for acceptable percentage of students achieving certain rankings on the 0-2
scale. As with the CCS group, this section should ideally incorporate an indirect
assessment like a student motivation survey.
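If the consolidated data include matched pre- and post-test totals per student, a paired
t-test would be a natural first check of significance. A minimal sketch (scores invented
for illustration, using scipy):

    from scipy import stats

    # Invented matched totals (0-8 possible: four components x 0-2 each) for
    # students who took both the pre-test and the post-test.
    pre_totals = [3, 2, 4, 1, 5, 3, 2, 4, 3, 2]
    post_totals = [5, 4, 5, 3, 6, 4, 4, 5, 3, 4]

    t_stat, p_value = stats.ttest_rel(post_totals, pre_totals)
    print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

A small p-value would support the claim that the observed pre-to-post gains are unlikely
to be due to chance alone.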
The challenges that were faced in the first year of administering the tests can be
overcome with some pre-planning. The greatest challenge was that faculty who were
part of the group were not always teaching the course in their department and had to
approach a colleague to administer the tests.
Future Plans for GE Assessment
The GE-CCS assessment group is wrapping up its work and attempting to publish a
manuscript on its experience. The GE-CT assessment group plans to continue its
work, collecting another semester of data and developing an indirect assessment tool
for its section.
As coordinator of these groups, I have learned a lot in terms of assessment strategies and
communicating assessment intents to faculty. The processes for CCS and CT were quite
different and provided insights into the administration of each assessment strategy. The
lessons learned from the CCS and CT assessment processes can help streamline the
assessment of the next GE sections we undertake.
There have been several debates about which section should be assessed next. One
thought was to work on a section like Lifelong Learning or Natural Sciences, which, unlike
the core competencies, do not otherwise get assessed within majors. On the other
hand, the need to continue assessment of core competencies within GE remains a
priority. CT assessment has initiated work on the Basic Skills section within GE, and
ideally we would complete this section by taking on Oral and/or Written Communication
and Quantitative Reasoning. Since Basic Skills is the first listed section of GE, this can
set us on the path to go through all ten sections in the order they are listed in the catalog.
Appendix A:
2012 ITL Summer Institute Report
Campus: CSU Northridge
Team Lead Submitting Report: Beth Lasky
What were your goals for the workshop?
These are from the applications we completed before the institute.
1. We hope to investigate and achieve alignment between GE and the new Paths
initiative
2. Start establishing GE assessment
3. An assessment plan for GE which coheres with GE Paths
4. Understand and identify best practices to include at our campus for GE
assessment
5. Increase our knowledge of developing and possibly implementing assessment
techniques for future assessment of the campus programs
6. Developing interdisciplinary initiatives and helping faculty make them work
7. Methods in assessing GE
8. How to build a long-term process
9. Developing direct student surveys to assess GE outcomes
10. Assessment evaluation
Which goal(s) do you believe that you and your team achieved?
2. Start establishing GE assessment
Summarize plans that your team made for implementing some of the tools
described in the three workshops.
As a result of Monday evening’s keynote, our team decided our campus needs
some overall GE Program goals.
As a result of Mary Allen’s presentation, our team is going to try to develop a matrix
mapping the campus Fundamental Learning Competencies, GE SLOs, and the courses
in which these are found.
As a result of Marc Chun’s workshop, our team’s assessment representatives will be
drafting a GE assessment proposal for the rest of the team to consider.
As a result of the ePortfolio session, we are considering doing a campus needs
assessment on digital portfolios and possibly designing a small-scale ePortfolio with
one small program.
Describe obstacles your team sees hindering implementation of your plan and
how they may be overcome/addressed:
Resources
Faculty and staff who do not see the benefit of ePortfolios
The overwhelming number of courses in which GE SLOs are found. Most of these
courses are taught by lecturers.
Appendix B: COMPARATIVE CULTURAL STUDIES GRADING RUBRIC FOR REFLECTION ASSIGNMENTS
Developed and Normed:
Ashley Samson, Kinesiology
Beto Gutierrez, Chicana/o Studies
Gigi Hessamian, Communication Studies
Mintesnot Woldeamanuel, Urban Studies and Planning
Nina Golden, Business Law
Each assignment component is rated Excellent (4), Good (3), Adequate (2), or Poor (1).

Content

Reflection – Articulates awareness of how own experiences have shaped perception
towards other cultures and/or people with different backgrounds.
Excellent (4): Identifies own experiences, biases, or changing perceptions and gives
significant supportive evidence.
Good (3): Identifies own experiences, biases, or changing perceptions and provides some
supportive evidence.
Adequate (2): Identifies either own experiences, or bias, or changing perceptions, but
not enough supportive evidence.
Poor (1): Does not identify own experiences, biases, or changing perceptions and
provides no support.

Empathy/Openness – Recognizes the feelings and views of others from diverse backgrounds.
Excellent (4): Effectively discusses the feelings and views of others from diverse
backgrounds.
Good (3): Discusses the feelings and views of others from diverse backgrounds.
Adequate (2): Adequately discusses the feelings and views of others from diverse
backgrounds.
Poor (1): Does not recognize the feelings and views of others from diverse backgrounds.

Observations – Descriptions about diverse groups or contexts.
Excellent (4): Observations are specific and objective with significant supporting data
or facts.
Good (3): Good observations that are specific with good supporting data or facts.
Adequate (2): Observations are general with adequate supporting data or facts.
Poor (1): Lacking in observations and/or sufficient data.

Application of Knowledge – Links course concepts to personal views and experiences.
Excellent (4): Effectively applies course concepts to personal views and experiences.
Good (3): Good application of course concepts to personal views and experiences.
Adequate (2): Adequately applies course concepts to personal views and experiences.
Poor (1): Does not apply course concepts to personal views and experiences.

Style

Writing & Organization – Writing is free from grammatical errors; paper is well
organized and flows well.
Excellent (4): Paper has excellent grammar and organization.
Good (3): Paper has good grammar and organization.
Adequate (2): Paper has adequate grammar and organization.
Poor (1): Paper has many grammatical errors and poor organization.

Can be used to assess the CCS SLOs.
For questions or comments please contact:
Beth Lasky beth.lasky@csun.edu or
Anu Thakur anubhuti.thakur@csun.edu
Group Coordinators:
Beth Lasky, Director, GE
Anu Thakur, Coordinator, Academic Assessment
Appendix C:
Possible Questions
Scale: Strongly Agree (5), Agree (4), Neither Agree nor Disagree (3), Disagree (2),
Strongly Disagree (1)
Questions for Beginning of Semester
1. I am taking this course to gain knowledge and fundamentals on the topic
(terminology, theories, methods, trends).
2. I am taking this course to develop specific skills, competencies, and points of
view needed by professionals in the field most closely related to this course.
3. I am taking this course to gain a broader understanding and appreciation of
cultures and/or people with different backgrounds.
4. From this course, I expect to analyze and critically evaluate diverse ideas,
arguments, and points of view.
5. I am interested in taking this course because of the instructor.
6. Based on the course title/description, this course is relevant to my course of
study.
7. Based on my current and existing knowledge in this subject, I feel prepared
to do well in this course.
8. I am taking this course because I am genuinely interested in the topic/subject
of the course.
9. I am taking this course because it fit most conveniently in my schedule.
Questions for End of Semester
1. As a result of this course, my interest in this department or field of study has
increased.
2. As a result of this course, I have increased my knowledge in this subject.
3. My background prepared me well for this course’s requirements.
4. As a result of taking this course, I have more positive feelings toward this
field of study.
5. Regardless of why I originally decided to take the course, I am happy that I
did.
6. As a result of this course, I have developed specific skills, competencies, and
points of view needed by professionals in the field.
7. As a result of this course, I have gained a broader understanding and
appreciation of cultures and/or people with different backgrounds.
8. As a result of taking this course, I have more positive feelings toward the field
of study.
9. As a result of taking this course, I am contemplating changing my major.
10. Please list 3 specific ideas or competencies that you have learned in this
course that you will take back to real-life applications.
Appendix D:
Survey results for Beginning of the Semester and End of Semester (charts not
reproduced in this text version).
Appendix E:
Dr. Weimin Sun
Philosophy
12/8/2014
Report on Critical Thinking Assessment
I did both the pretest and the posttest with my online Phil 200 class in Fall 2014,
which is a critical reasoning course with about 36 students. The pretest was done
during the first week of the semester as a part of an optional assignment. The
posttest was integrated into the midterm. The detailed questions are posted below.
I. Questions
Pretest questions:
With regard to the fitness of women, not only to participate in elections, but
themselves to hold offices or practice professions involving important public
responsibilities; I have already observed that this consideration is not essential to
the practical question in dispute: since any woman who succeeds in an open
profession, proves by that very fact that she is qualified for it. And in the case of
public offices, if the political system of the country is such as to exclude unfit men, it
will equally exclude unfit women: while if it is not, there is no additional evil in the
fact that the unfit person whom it admits may be either women or men. As long
therefore as it is acknowledged that even a few women may be fit for these duties,
the laws which shut the door on those exceptions cannot be justified by any opinion
which can be held respecting the capacities of women in general. (John Stuart Mill,
On the Subjection of Women)
From the above passage: 1) Please identify the premises and the main conclusion of
the argument. 2) Please explain how the premises support the conclusion.
Embryonic stem cell research is like Nazi research. It does not respect the sanctity
of life of the fetus, like the Nazis did not respect the life of the people they
experimented on. Next thing we know we will be using full grown fetuses for
research or worse. Such blatant disregard for human life was immoral for the Nazis
and it should be immoral for stem cell research. Therefore, we should not engage in
embryonic stem cell research.
Please state, from the above passage, 3) whether the argument is reasonable (or at
least whether the premises provide coherent support for the conclusion) and why?
4) Are the premises reasonably accepted and why?
Post-test questions:
1. What is true regarding the following argument?
With regard to the fitness of women, not only to participate in elections,
but themselves to hold offices or practice professions involving important
public responsibilities; I have already observed that this consideration is
not essential to the practical question in dispute: since any woman who
succeeds in an open profession, proves by that very fact that she is
qualified for it. And in the case of public offices, if the political system of
the country is such as to exclude unfit men, it will equally exclude unfit
women: while if it is not, there is no additional evil in the fact that the
unfit person whom it admits may be either women or men. As long
therefore as it is acknowledged that even a few women may be fit for
these duties, the laws which shut the door on those exceptions cannot be
justified by any opinion which can be held respecting the capacities of
women in general. (John Stuart Mill, On the Subjection of Women)
How many statements are there in this argument?
How many argument indicators are there in this argument?
2. What is true regarding the following argument?
With regard to the fitness of women, not only to participate in elections,
but themselves to hold offices or practice professions involving important
public responsibilities; I have already observed that this consideration is
not essential to the practical question in dispute: since any woman who
succeeds in an open profession, proves by that very fact that she is
qualified for it. And in the case of public offices, if the political system of
the country is such as to exclude unfit men, it will equally exclude unfit
women: while if it is not, there is no additional evil in the fact that the
unfit person whom it admits may be either women or men. As long
therefore as it is acknowledged that even a few women may be fit for
these duties, the laws which shut the door on those exceptions cannot be
justified by any opinion which can be held respecting the capacities of
women in general. (John Stuart Mill, On the Subjection of Women)
What does the statement "any woman who succeeds in an open profession, proves by that
very fact that she is qualified for it" directly support?
What is the main conclusion of the argument?
3. What is the correct assessment regarding the following argument?
Embryonic stem cell research is like Nazi research. It does not respect
the sanctity of life of the fetus, like the Nazis did not respect the life of
the people they experimented on. Next thing we know we will be using
full grown fetuses for research or worse. Such blatant disregard for
human life was immoral for the Nazis and it should be immoral for stem
cell research. Therefore, we should not engage in embryonic stem cell
research.
Select one:
a. This is not a deductively valid argument, but it is an inductively strong
argument.
b. This is a reasonable argument.
c. This is a deductively valid argument.
d. This is a fallacious argument as it commits one of the common fallacies.
4. What is true regarding the following argument?
Embryonic stem cell research is like Nazi research. It does not respect
the sanctity of life of the fetus, like the Nazis did not respect the life of
the people they experimented on. Next thing we know we will be using
full grown fetuses for research or worse. Such blatant disregard for
human life was immoral for the Nazis and it should be immoral for stem
cell research. Therefore, we should not engage in embryonic stem cell
research.
What is the main conclusion of this argument?
Which one of the premises used in this argument is the least acceptable?
II. Results
Pretest results (26 students; Q. 1 and Q. 2 each scored out of 10.00):
Q. 1 scores: 0, 0, 0, 1, 1, 1, 1, 1, 2, 2, 3, 3, 4, 4, 4, 4, 4, 5, 5, 5, 7, 8, 8, 9, 9,
10 – average 38.8%
Q. 2 scores: 2, 2, 2, 1, 1, 2, 4, 3, 3, 1, 10, 1, 1, 1, 4, 10, 4, 5, 2, 3, 2, 5, 3, 3,
4 – average 30.4% (one score lost in extraction)
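As a quick check on the reported averages (each average percentage is the mean score
divided by the 10-point maximum), a minimal Python snippet using the Q. 1 scores above:

    q1 = [0, 0, 0, 1, 1, 1, 1, 1, 2, 2, 3, 3, 4, 4, 4, 4, 4,
          5, 5, 5, 7, 8, 8, 9, 9, 10]
    print(f"Q. 1 average: {sum(q1) / len(q1) / 10:.1%}")  # prints 38.8%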
Post-test results (36 students):
Q. 1a (out of 4.55): average 1.23 (27.0%)
Q. 1b (out of 4.55): average 2.53 (55.6%)
Q. 2a (out of 2.27): average 1.49 (65.6%)
Q. 2b (out of 4.55): average 2.73 (60.0%)
(Per-student scores are omitted here; the columns could not be reliably recovered from
the source table.)
The student numbers in the original tables were random and do not correspond between
the pretest and the post-test.
III. Conclusion
First I need to point out that though the pretest and post-test questions are
the same, they are examined in different ways. The questions on the pretest are only
examined in a rudimentary way, while on the post-test the questions were examined in a
deeper way that requires a fair amount of knowledge about logic and critical thinking.
The results still show that the students did better on the post-test than on the
pretest, though the numbers could be better.
Regarding the design of the questions, the second part of the first question on the
pretest ("Please explain how the premises support the conclusion") is vague and easily
confused the students. The intention of the question is to ask the student to recognize
and identify the structure of the argument, yet the results show that the students were
considering whether the premises provide reasonable support for the conclusion, which
is the question asked later (#3 on the pretest).
There is no good general alternative to question 1b (#2 on the pretest), though,
since it seems to depend on the particular passage. Maybe something like this:
"What is the statement that directly supports the conclusion?" Also, we can
consider deleting the question, as the first question is often enough to tell us the
structure of a relatively simple argument.
Appendix F:
Background information on the pre-test: Since I was not teaching AFRS 204 this
semester, I did this assessment in one of the adjunct faculty’s Race and Critical Thinking
courses, which had over 65 students enrolled, during the second week of classes. Here is
the question I asked the students:
Pre-Test: AFRS 204: Race and Critical Thinking Assessment
Bobby E. Wright (an African/Black Psychologist) developed a theory in 1984 entitled the
Psychopathic Racial Personality that argues “in their relationship with the Black race,
Europeans (Whites) are psychopaths and their behavior reflects an underlying
biologically transmitted proclivity with roots deep in their evolutionary history. The
psychopath is an individual who is constantly in conflict with other persons or groups. He
is unable to experience guilt, is completely selfish and callous, and has a total disregard
for the rights of others” (Wright 2). Wright continues to explain that the psychopath
knows right from wrong, but chooses to ignore it, and that psychopaths actually function
very well in society. He then concludes his short chapter with several examples of how
Europeans have treated African people throughout history via racism, discrimination,
white supremacy, terrorism (i.e., with the KKK), lynching, enslavement, colonization,
colonialism, neo-colonialism, apartheid, segregation, etc.
Please identify the premise (or premises) and the main conclusion. Show how the premise
(or premises) support the main conclusion in the passage. Please state whether the
premise (or premises) provide coherent support for the main conclusion and why? Are
the premises reasonably accepted and why?
Coding: Column 1: Were they able to identify the premise and the main conclusion?
Column 2: Were they able to show how the premise supports the main conclusion?
Column 3: Were they able to state whether the premise provides coherent support for the
main conclusion and why?
Column 4: Were they able to explain if the premise was reasonably accepted and why?
Results: I gave out 52 assessment questions and received 51 back with their names (one
was missing a name). Although most of the students could identify the premise and the
conclusion, many did not answer the last two questions on whether the premise provided
coherent support for the main conclusion and if the premise was reasonably accepted. I
assume they did not answer these two questions because they were not introduced to this
information yet.
Since this question was on race, the students’ answers were very passionate and personal.
I received several responses that read “sort of,” “somewhat,” “yes, even though I do not
agree” for the coherent support and if the premise is reasonably accepted. For the
students who answered these two questions, there was “sort of” enough support and it
was “somewhat” reasonably accepted. Many students also said this theory was
interesting, but needed more evidence and discussion. One student wrote “I think this
paper is ignorant.”
The students also used their personal experiences with race in America to guide their
answers. Several students made fairly convincing arguments for why there was and was
not coherent support for the main conclusion, and for why the premise was and was not
reasonably accepted. This made me think about whether or not these last two questions
have a definitive answer, or whether it is all about how well the students support their
reasons for believing what they do.
Background information on the post-test: I was able to visit the same Race and Critical
Thinking course during the 14th week of classes. Here is the question I asked the students:
Post Test: AFRS 204: Race and Critical Thinking
Bobby E. Wright (an African/Black Psychologist) developed a theory in 1984 entitled the
Psychopathic Racial Personality that argues “in their relationship with the Black race,
Europeans (Whites) are psychopaths and their behavior reflects an underlying
biologically transmitted proclivity with roots deep in their evolutionary history. The
psychopath is an individual who is constantly in conflict with other persons or groups. He
is unable to experience guilt, is completely selfish and callous, and has a total disregard
for the rights of others” (Wright 2). Wright continues to explain that the psychopath
knows right from wrong, but chooses to ignore it, and that psychopaths actually function
very well in society. He then concludes his short chapter with several examples of how
Europeans have treated African people throughout history via racism, discrimination,
white supremacy, terrorism (i.e., with the KKK), lynching, enslavement, colonization,
colonialism, neo-colonialism, apartheid, segregation, etc.
Please identify the premise (or premises) and the main conclusion. Show how the premise
(or premises) support the main conclusion in the passage. Please state whether the
argument is valid and why. Please state whether the argument is sound and why.
Coding: Column 5: Were they able to identify the premise and the main conclusion?
Column 6: Were they able to show how the premise supports the main conclusion?
Column 7: Were they able to state whether the argument was valid and why?
Column 8: Were they able to state if the argument was sound and why?
Results: I gave out 55 assessment questions and received 54 back with their names (one
was missing a name). Again, most of the students could identify the premise and the
conclusion and state whether the argument was valid; but, most did not discuss whether
or not the argument was sound. This made me think they did not answer this question
because they did not know what this was. One student even asked me what it meant for
an argument to be valid and sound while I was administering the assessment. I also
received several answers that read “I agree with the argument;” “I disagree with the
argument;” and “It is valid, but I don’t agree.”
Comparing the Data: Overall, I had 41 students who took both the pre-test and the
post-test; 10 students who only took the pre-test; and 14 students who only took the post-test.
After examining the scores of the 42 students who took both the pre and the post-tests,
their scores did improve. But, when it came to answering the questions on the premise
being reasonably accepted and the validity and soundness of the argument, the students
still answered based on their own personal experiences regardless of what was taught to
them in class. Consequently, it appears as if the students believe that an argument is
sound and valid based on what they have experienced in life. This made deciding whether
the answers were right or wrong difficult since many of the students made very
convincing arguments for both sides.
Suggestions: If and when we administer this assessment again, can we add at the bottom
of the assessment blanks for them to fill in? I wonder if this would ensure they answered
all parts of the question. For example:
The premise is:________________________________________
The conclusion is: ________________________________________
Does the premise provide coherent support? ___________________ Why?
______________________________________________
Are these premises reasonably accepted? __________________________ Why?
__________________________
What I learned throughout this entire assessment process:
1. That most other departments keep their critical thinking courses fairly small
(around 35-40 students) and that we are one of the only departments with critical
thinking courses with 65 plus students.
2. Adding blanks to the assessment for the students to fill in might make it easier for
them to answer every part of the question. This would at least help us decipher
between a student who did not know the answer and a student who just missed
seeing that part of the question.
3. I think the results would be different if I was teaching the course and
administering the assessment to my own students. They would have been
introduced to this theory before they answered the question, and I would know if
this information had been covered.
4. The students continue to base their answers off of their own personal experiences
with race even after they have been taught this information in the classroom.
5. Time might have also been a factor since I was only given about 10-15 minutes to
administer this assessment.
6. The assessment tool we created does measure the SLOs ascribed to the critical
thinking courses. But how do you get the students to see past their own personal
experiences to see the true validity and soundness of an argument?