Central Washington University
Assessment of Student Learning
Department and Program Report
Academic Year of Report: _2011-12___ College: __Sciences_________
Department __Political Science________ Program: ___Political Science & Gen. Educ.______
1. What student learning outcomes were assessed this year, and why?
In answering this question, please identify the specific student learning outcomes you assessed this year, reasons for
assessing these outcomes, with the outcomes written in clear, measurable terms, and note how the outcomes are
linked to department, college and university mission and goals.
A) General Education Reading Goals, via the "GERG" instructions/Reading Assessment Exercise and
Rubric given to us by the Office of Undergraduate Studies or whoever is currently in charge of such
matters (the General Education Committee? Who knows.) Why? Because our Department foolishly
put down "reading" (i.e., checked the "reading" box) among our General Education goals, and we were
told we had to do this exercise this year – er, we wanted to know whether our students can read the
material we assign.
B) Political Science Major Learning Objective #4 (on Utilization of Scholarly Research): “Graduating
political science majors will be expected to ‘demonstrate a familiarity with scholarly resources
available to CWU students (e.g., library and Internet resources) and demonstrate how to utilize these
resources in carrying out a research project…’”
-Specifically, we assessed whether Political Science majors use proper citation style, procedure
and format in their research papers. The reason is that we haven’t looked at this particular
aspect in detail yet, and it was a logical addition given the Writing Assessment exercise last year.
2. How were they assessed?*
3. What was learned?*
4. What will the department or program do as a result of that information?*
5. What did the department or program do in response to last year’s assessment
information?
--Last year, we conducted the General Education Writing Goals (GEWG) assessment in conjunction with
other Departments, given that we teach two "W" courses in General Education (PoSc 101 and PoSc 270). We
compared writing in a sample of these courses with a sample of research papers in 489, the required Senior
Assessment (exit) course.
--The Department was relatively pleased with our graduating seniors' improvement in writing over
lower-division courses, though some improvements could be made and our students are far from universally
accomplished writers. One recurring issue was the citation and use of scholarly sources in the lower-division
General Education courses (granted, this was less emphasized in the 101 course because of the type of writing
assignments given, which also vary by instructor); it came up in 489 as well, even though the assignment in that
class is by definition a lengthy research paper. The Department will consider whether to make writing a greater
part of course offerings. We did discuss the possibility of a required junior-level course on research and writing,
but the difficulty is staffing and how it would fit into the sequencing or progression of students through the major
(it would need to be offered every term, and we currently have little flexibility in faculty workloads or demand for
content courses). Moreover, many of our students already take elective courses before or during their junior year,
and this would complicate completion of their degree requirements. It seems more practical simply to get faculty
to coordinate in requiring writing assignments, though this has not happened yet.
6. Questions or suggestions concerning Assessment of Student Learning at Central
Washington University:*
*PLEASE NOTE: Given that we assessed two different learning outcomes, we have chosen to report
the results separately for each under the categories above, rather than reporting each outcome under each
category, which might be confusing. Please see the following pages.
A) GENERAL EDUCATION READING GOALS (GERG)
2. How were they assessed?
In answering these questions, please concisely describe the specific methods used in assessing student learning.
Please also specify the population assessed, when the assessment took place, and the standard of mastery (criterion)
against which you will compare your assessment results. If appropriate, please list survey or questionnaire response
rate from total population.
A) What methods were used?
B) Who was assessed?
C) When was it assessed?
--Following the protocol set down by the CWU College Reading Assessment, the Department selected
an agreed-upon piece of text of approximately 500 words to use for the reading assessment exercise.
Given the courses/populations to be assessed (see below), this was part of a section on "power" in
international politics from a widely used, standard introductory political science textbook – in fact, one
used by one of the three current instructors of that course. The Department Chair (after random
selection at a Department meeting Fall term) selected three relevant classes – one section of Political
Science 101 (Introduction to Politics), one section of Political Science 270 (International Politics), and
one (the only one that term) of Political Science 489 (Senior Assessment) in Spring Quarter, 2012. The
former two are General Education courses with reading emphases/outcomes, and the latter is our senior
exit course, again following the instructions for this assessment exercise. One reason Spring term was
chosen is that it typically has the largest enrollment in the 489 course, so its class/sample size would be
closer to those of the General Education courses.
The Chair then administered the Reading Assessment Protocol to each of the classes. This consisted of
having students read the sample for one minute and measuring how far each student got, and then
having students write a three-minute summary of the material. (See Appendix C, "CWU College Reading
Assessment.") He then collected the students' work.
For 101 and 270, he visited those courses in the last instructional week of the term, when theoretically
students there would have been exposed to some of this kind of material. For 489, it was the first day of
the course. (Students in Senior Assessment are expected to take an end-of-major exam on material in
the core required (sub-field) courses, one of which is 270 – and all had taken a course on international
politics, the subject of the reading sample. Therefore, this procedure had the advantage of measuring
the “what-do-they-know-now” angle, or whether they can adequately read introductory disciplinary
material, without prior review.)
Thus, the populations assessed were: General Education students (granted, some could also be political
science majors or intended majors) at the end of their course, and about-to-graduate senior political
science majors at the beginning of their capstone course. The assessment took place during Spring, 2012,
and the method was the one delineated in the CWU College Reading Assessment Protocol and Scoring
document.
3. What was learned?
In answering this question, please report results in specific qualitative or quantitative terms, with the results linked to
the outcomes you assessed, and compared to the standard of mastery (criterion) you noted above. Please also include
a concise interpretation or analysis of the results.
A) Reading Rate/Speed: The results reveal that the bulk of the assessed/sampled students do read at or
above the benchmark rate of 190 words per minute (wpm). Perhaps not surprisingly, those in the
introductory course perform the weakest on this measure – presumably because many of them are
lower-level, less-experienced students. This is perhaps notable since the reading assignment came out
of a text aimed at their level. We don't know whether a pass rate of roughly 60% is good or not, but we
would hope that more students would read at a reasonable rate of speed (though again, comprehension
is more important); even so, the class average is well above the minimum "pass" level (which could be
due to a number of highly proficient readers). The results do show a clear positive relationship between
course level (and presumably student class standing) and reading rate, which rises to an acceptable
level. Furthermore, our seniors show stronger mastery of this measure. Again, however, we don't know
what to make of these data without larger context.
SPEED:               Pass N/%      Fail N/%      Avg wpm
PoSc 101             11/61.1       7/38.9        249.39
PoSc 270             27/81.8       6/18.2        274.03
PoSc 489             25/86.2       4/13.8        301.83
(pass = 190+ wpm)
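For transparency, the following is a minimal sketch (in Python) of how the pass rates and averages in the
table above could be tabulated; the per-student word counts in the example are hypothetical placeholders,
and only the 190-wpm benchmark comes from the assessment protocol.

    # Minimal sketch of the reading-rate tabulation. The word counts below
    # are illustrative placeholders, not actual student data; only the
    # 190-wpm "pass" benchmark comes from the CWU reading assessment protocol.

    PASS_WPM = 190  # benchmark reading rate (words per minute)

    def summarize(course, wpm_counts):
        """Print pass/fail counts and percentages plus the mean reading rate."""
        n = len(wpm_counts)
        passed = sum(1 for wpm in wpm_counts if wpm >= PASS_WPM)
        failed = n - passed
        avg = sum(wpm_counts) / n
        print(f"{course}: pass {passed} ({100 * passed / n:.1f}%), "
              f"fail {failed} ({100 * failed / n:.1f}%), avg {avg:.2f} wpm")

    # Hypothetical one-minute word counts for a handful of students:
    summarize("PoSc 101", [150, 210, 260, 305])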
B) Reading Comprehension: Here, the results are a bit more complicated. Interestingly, the students in
PoSc 101 actually perform better than the students in PoSc 270 – and close to the seniors in 489 – on
use of disciplinary vocabulary and on understanding the author's intent (basic information
comprehension) in their summaries, despite the reading sample being about power in international
relations. This may be due to when the 101 students encountered that information versus those in 270,
or perhaps those in the 200-level class have a case of speed not keeping up with comprehension. The
students in 270, however, give more details, perhaps because the examples and information provided
were more familiar to them. The about-to-graduate seniors, even without exposure to this information
in their classes for a quarter if not years, did better than the others. The one exception is providing
details, where the 270 students do slightly better than the seniors, perhaps again because of the
"freshness" and relevance of the material (being on international relations) for those students.
However, given that the instructions for the reading summary/comprehension portion were vague
("write a summary in 3 minutes") and that performance depends in part upon reading rate, this might
be an accident of the methodological design or of the sample itself.
COMPREHENSION:            Pass N/%      Fail N/%
PoSc 101    Details       13/68.4       6/31.6
            Vocab         14/73.7       5/26.3
            Intent        14/73.7       5/26.3
PoSc 270    Details       23/74.2       8/25.8
            Vocab         21/67.7       10/32.3
            Intent        21/67.7       10/32.3
PoSc 489    Details       21/72.4       8/27.6
            Vocab         23/79.3       6/20.7
            Intent        26/89.7       3/10.3
C) Natives vs. Transfers in 489. Another issue concerns comparison between our "native" (i.e., home-grown,
all-four-year) majors and students who transferred here and presumably had general
education somewhere else (though granted, to be accurate we would have to disaggregate this and
examine whether they took our lower-division general education here). We only have data for the
seniors in 489 for comparison (and only that comparison makes sense, given that the other two are
gen-ed courses).
-The results show that on all measures, the "native" students in our sample perform higher than do the
transfers. Granted, this is just one senior-exit course section in one year, and some of the differences
are likely not that significant. And all of these students perform at or better than the sample of students
in the 200-level course. Nevertheless, it does appear that our department (and/or university) does a
better "job" of encouraging reading among students than do the institutions transfers come from,
though we should be cautious about drawing too much from this conclusion.
SPEED:               Pass N/%      Fail N/%      Avg wpm
"Natives"            11/91.7       1/8.3         327.7
Transfers            14/82.4       3/17.6        283.6
COMPREHENSION (PoSc 489):
                     "Natives"                    Transfers
                     Pass N/%      Fail N/%      Pass N/%      Fail N/%
Details              9/75.0        3/25.0        12/70.6       5/29.4
Vocab                10/83.3       2/16.7        13/76.5       4/23.5
Intent               11/91.7       1/8.3         15/88.2       2/11.8
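Given the small sample sizes, a formal test would likely bear out the caution above. As an illustration only
(the assessment itself ran no statistical tests), the following minimal Python sketch applies Fisher's exact
test to the speed pass/fail counts from the table above:

    # Illustrative significance check on the natives-vs.-transfers speed
    # results, using Fisher's exact test (suited to small samples). Counts
    # come from the table above; the report itself did not run this test.
    from scipy.stats import fisher_exact

    #                  pass  fail
    natives_speed   = [11,   1]   # 12 "native" seniors
    transfers_speed = [14,   3]   # 17 transfer seniors

    odds_ratio, p_value = fisher_exact([natives_speed, transfers_speed])
    print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.3f}")
    # A large p-value would confirm that, with Ns this small, the observed
    # difference between the groups may not be statistically significant.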
4. What will the department or program do as a result of that information?
In answering this question, please note specific changes to your program as they affect student learning, and as they
are related to results from the assessment process. If no changes are planned, please describe why no changes are
needed. In addition, how will the department report the results and changes to internal and external constituents
(e.g., advisory groups, newsletters, forums, etc.).
--The results will be shared with the rest of the department. Since we don't specifically teach reading (indeed,
last time we checked, that was something students should've received in K-6), remediation is problematic. It does
appear that our students read fast enough, but they may not be grasping details as much as they should, even if
for the most part they do appear to comprehend the concepts. Then again, the method didn't ask that they provide
details in their summary (or disciplinary vocabulary, for that matter). Furthermore, the graduating seniors do seem
to grasp disciplinary material (i.e., they appear to improve over time or over general education students), so that is
positive. Likewise, our native students appear to read more efficiently and effectively, though it is hard to say
what we should do about transfer students (offer them more help? ask them whether they "get" the material?). Not
to give the bureaucratic knee-jerk response, but this also seems "not our job."
--So, we may want to reflect on the nature and kinds of reading assignments we require, particularly in the core
courses. Since this was a one-shot, less-than-ideal research design, it is difficult to know what to conclude.
6. Questions or suggestions concerning Assessment of Student Learning at Central
Washington University:
In discussing this (Reading Assessment) exercise in the Department meeting dedicated to this topic (in Fall, 2011),
prior to the conduct of this assessment, Political Science Department faculty in one form or another raised the
following points:
1. We do not see how this exercise effectively relates to student reading "success," especially in the real
world (see points 3 and 4 below). Indeed, the bigger problem is likely that students do not read (i.e., do their
homework), not that they cannot read, though admittedly we shouldn't assume this is the case.
2. We question whether speed is a useful measure, and we are unsure about the standard, though we do
see that, as (likely?) "expert" readers ourselves, we are much faster than the baseline used here and, indeed,
than our students on average.
3. The mandated assessment exercise (granted, we understand the need for uniformity) has some
structural and administrative problems that limit both its internal and external validity, in methodology lingo.
4. Even if we do see relevant shortcomings in our students' reading, despite the points above, it is not
clear what we should do about them. Again, we don't (and won't) teach reading. (Though overall, they appear OK.)
We realize we agreed to have “reading” as a goal, but we think that was because we believed that effective
reading was a prerequisite to success in any college course.
5. The Chair (or Dept. Assessment Guru) should be given more release time for doing these types of
exercises, and should be included in Workload planning since this is on top of major-level assessment.
B) POLITICAL SCIENCE MAJOR SCHOLARLY RESEARCH MATERIALS OUTCOME
2. How were they assessed?
In answering these questions, please concisely describe the specific methods used in assessing student learning.
Please also specify the population assessed, when the assessment took place, and the standard of mastery (criterion)
against which you will compare your assessment results. If appropriate, please list survey or questionnaire response
rate from total population.
A) What methods were used?
B) Who was assessed?
C) When was it assessed?
The Department Chair examined student research papers and professors' evaluations of them
(rubrics/grades/comments) from three consecutive quarters of Political Science 489, Senior Assessment –
Spring, 2011; Fall, 2011; and Winter, 2012 – for degree of proper citation and reference style in using
scholarly resources. In particular, he looked at whether students used the minimum number (roughly
one per page, or 10) of sources, and whether students lost points/were penalized for not citing properly.
The latter varied somewhat with the harshness/technicality of the instructor, but the question was
whether students used proper citation format or were far enough off that they received a penalty.
"Egregious" examples (no citations, even with a bibliography; plagiarism; etc.) were also noted.
Thus:
a) What? Content analysis/qualitative review of student papers for sourcing and references;
b) Who? Senior political science majors and their research papers in their "exit" assessment course;
c) When? Spring, 2011 – Winter, 2012 (minus summer).
3. What was learned?
In answering this question, please report results in specific qualitative or quantitative terms, with the results linked to
the outcomes you assessed, and compared to the standard of mastery (criterion) you noted above. Please also include
a concise interpretation or analysis of the results.
-For the most part, students do know that they must cite materials used as evidence in their research papers.
However, they do not always follow the technically correct format (for example, in a parenthetical cite they may
give only an author's name and a page number, but not the year, as required). They are better with
bibliographies, but here too they sometimes have errors in style, do not put entries in alphabetical order, etc. At
least in the sample, there were only a few cases of "egregious" errors, such as not having any citations/footnotes
at all (though that person did provide a bibliography), or not providing a complete bibliography at the end. More
students had issues with the minimum number of sources. Still, overall errors were not widespread, and were
mostly minor. See the quantitative results below.
Major Writing Assessment / Proper Citation:

                 # Wrong    # Right    Total    Lack of Min. Sources    Egregious
Spring, 2011     8          16         24       3                       2
Fall, 2011       1          1          2        0                       1
Winter, 2012     3          8          11       1                       0
Total:           12         25         37       4                       3
Percent:         32%        68%                 11%                     8%
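The bottom-row percentages follow directly from the three-quarter totals; a quick Python sketch of that
arithmetic (all figures taken from the table itself):

    # Arithmetic behind the "Percent" row of the citation table above.
    wrong, right = 12, 25
    total = wrong + right          # 37 papers across the three quarters
    lack_min, egregious = 4, 3

    for label, n in [("wrong", wrong), ("right", right),
                     ("lack of min. sources", lack_min),
                     ("egregious", egregious)]:
        print(f"{label}: {100 * n / total:.0f}%")  # 32%, 68%, 11%, 8%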
4. What will the department or program do as a result of that information?
In answering this question, please note specific changes to your program as they affect student learning, and as they
are related to results from the assessment process. If no changes are planned, please describe why no changes are
needed. In addition, how will the department report the results and changes to internal and external constituents
(e.g., advisory groups, newsletters, forums, etc.).
-- We will probably have discussions about whether student citation in research papers is generally
problematic and whether it needs to be improved. One issue is that there is no single accepted style (e.g.,
MLA, APA, endnotes, etc.) within our discipline, as different journals and different subfields all have their
own practices. Years ago, we tentatively agreed to use one standard citation style – that of the flagship
journal of the American Political Science Association – and for the most part have done that in the 489
class (though notably, students were allowed to use any style in the Winter section), but it is also clear
that the department is not necessarily adhering to that in practice. Whether doing so would improve the
results here – which are in any case not so bad – is debatable. Also, whether the issue is knowing how to
cite correctly in terms of format/style, versus at least knowing that one has to provide evidence even if
the format may be technically incorrect, is another question. Perhaps more involvement with the Writing
Center would help. Again, we believe students should be able to do this by their senior year, but it is not
clear whether we are responsible for teaching it to them.
6. Questions or suggestions concerning Assessment of Student Learning at Central
Washington University:
--Respectfully Submitted,
Todd M. Schaefer
Dr. Todd M. Schaefer
Professor and Chair
December 13, 2012