UQ Case studies on pilot testing of policy changes

Recommendation:
Pilot-testing proposed policy statements and changes, with realistic timelines for
evaluating impact, provides an evidence base for the implementation of policy change.
Case study 1
Program level assessment
One project tackled the issue of over-assessment and the challenge of modularised (some
would say “silo-ed”) courses to see how we might encourage staff and students to integrate
their teaching, learning and assessment across courses. In some very structured
programs, such as nursing and midwifery, the clinical practice portfolio is a capstone
assessment used at the program level to allow students to demonstrate that they have met
the competency standards for practice. We wondered whether there were further
opportunities for genuine integrated, program level assessment in less structured
programs.
Drs Greg Skilleter and Louise Kuchel from the School of Biological Sciences were
responsible for a pilot study on integrated, program-level assessment in the Marine
Studies program, conducted in Semester 1, 2009.
The proposal was to provide integrated assessment that spanned two second-year
courses, BIOL2010 (Ecology) and MARS2014 (Marine Science). Assessment items in the
pilot drew on material presented in both courses, but asked students to apply that
knowledge to a single problem. A Problem-based Learning (PBL) Scenario was developed
that was used as the basis for providing marine studies students with realistic experience in
problem-solving, teamwork and communication. The problem centred on a hypothetical oil
spill in Moreton Bay near Brisbane. (Interestingly, while the students were engaged in the
scenario, a major oil spill occurred in the Bay, prompting many jokes at the
coordinators’ expense about the lengths to which they had gone to make the experience
authentic for their students.)
There were four pieces of progressive assessment which involved working in small groups,
with all items assessed individually for each student. These contributed 50 per cent
towards the students’ final grade in each course. Further, there were several components
to each of these items of assessment, ensuring opportunity to provide students with
progressive feedback on their draft documents and allowing them to incorporate this
feedback into the final submitted document.
The students themselves were inspired by the opportunity to make connections across
courses, although some procedural difficulties were encountered. As one of the
coordinators reflected:
…one of the problems with trying to develop any sort of program level assessment in the
Science Faculty (not sure about elsewhere though), is that such activities fall through the
cracks … existing policy is all devoted to course-level activities and this also includes funding
and resourcing. In fact, program level activities have been actively resisted by some in the
Faculty.
The key problem thus identified is that existing policy is focused on course-level activities
and does little to encourage or support assessment that integrates learning across courses
within a program.
Case study 2
Fullest and latest
The second pilot was conducted by Professor Sarah Derrington in the Law School.
Under the broad heading of “amount and spread of assessment” there are a number of
tensions which continue to come to the surface:
 “fullest and latest” assessment vs. progressive assessment for summative purposes (Royce
Sadler, 2009, 2010, has written and spoken about this most cogently)
 allowing/not allowing students choice in amount and spread of assessment
 allocating marks towards all assessment tasks (possible outcome: “I only need 25 per cent
on the final to pass”) vs. using assessment tasks for formative purposes only (possible
outcome: “If I don’t get marks I’m not going to put in any effort”).
The Law School felt strongly that a single end of semester written examination, at least in
later years, was the most appropriate way to evaluate students in some core courses. The
School was given permission in 2008 to trial 100 per cent end-of-semester examinations
with some provisos:
 there should be appropriate formative assessment tasks that allowed students to evaluate
their own learning during the semester and to understand the requirements to be met in the
final examination
 it would be preferable if students were given a choice of a 100 per cent final examination or
two summative pieces of assessment, especially in first year.
The trial did not go smoothly by any means, with some complaints made to the DVCA by
individual students. Some academic staff gave little attention to the amount, style and
content of formative assessment offered throughout the semester. Where there was a choice
between undertaking two summative assessment tasks and a single examination, only
between 10 and 30 per cent of students chose the first option.
Students enrolled in the courses participating in the study were surveyed at the end of
Semester 1, 2010, after the release of results. Fewer than a quarter of the students
enrolled in the relevant courses responded to the survey.
The survey results disclose the following:
 When given a choice between an optional assignment and a 100 per cent exam, in both
courses a majority of students chose to do the 100 per cent exam. Note, however, that
the proportion of students who submitted the optional assignment was significantly
greater amongst the students who completed the online survey (LAWS1113 – 49 per
cent; LAWS2111 – 39 per cent) than overall in each course (LAWS1113 – 10 per cent;
LAWS2111 – 28 per cent).
 Of those students who chose to do the optional assignment, most reported that they did
so in order to avoid the pressure of a 100 per cent exam (LAWS1113 – 43 per cent;
LAWS2111 – 56 per cent) or because they believed that they perform better on
assignments than on exams (LAWS1113 – 27 per cent; LAWS2111 – 29 per cent).
 Of those students who chose to do a 100 per cent exam, many would have preferred to
do the optional assignment but did not have enough time (LAWS1113 – 31 per cent;
LAWS2111 – 31 per cent), and most others reported that they chose to do the 100 per
cent exam because they thought they would do better overall by doing so (LAWS1113 –
28 per cent; LAWS2111 – 21 per cent), or because they wished to distribute their
workload to better fit with other courses and/or paid employment (LAWS1113 – 23 per
cent; LAWS2111 – 26 per cent).
 Students in both courses were provided with a wide range of personal and group formative
feedback options, although many students chose not to seek that feedback.
 Opinions about the effectiveness and usefulness of personal and group formative feedback
were varied. The results indicate that students do not understand the meaning of ‘feedback’
and that there is a need to reconcile academic and student understanding of the concept.
The survey results also indicated that students do not understand the extent of the learning
resources available to them. For example, 44 per cent and 42 per cent of respondents in the
two courses respectively indicated that there was no opportunity to submit summative
assessment and receive feedback, even though both courses offered a 30 per cent optional
assignment.
In the most recent and possibly final round of the pilot, Professor Derrington developed a
very comprehensive formative assessment model for use throughout the tutorial program.
Students were informed that tutorials in this course had a dual role. The first was to provide
students with an interactive forum within which to explore in more detail and in greater depth
the issues introduced in lectures and to engage with the tutorial staff and other students in
exploring those issues. The second was to introduce students to the manner in which
academics assess student work and evaluate quality, utilising the methodology developed
by Sadler (2010).
Consequently, it was essential for all students who wished to participate in the tutorial
program to produce an item of written work when required. That item could be typed or
handwritten but was to be identifiable only by student number. When a written task was
stipulated for a particular tutorial, any student who did not have a written response with them
at the commencement of the tutorial was not permitted to remain in the tutorial. The written
tasks closely resembled the types of questions students would encounter on the final
examination paper. Students were advised to gradually attempt to replicate exam conditions
as they prepared their written tasks over the course of the semester. In tutorials, written
answers, including one written by the tutor, were distributed randomly, and students were
asked to assess one another’s work against general criteria and then to develop their own
criteria.
Formal student feedback has not yet been released, but anecdotal feedback has been mixed.
Many students could not be bothered preparing for tutorials if there were “no marks”
associated with the exercise and so simply chose not to attend tutorials, thereby forgoing
any opportunity for formative feedback. Some of the more academically able students have
reported immense satisfaction with the methodology; many of the less academically able
students continue simply to seek a “model answer”, which is anathema in law if one is
seeking to develop critical thinking and deep analytical skills.
Lessons learnt include:
 the majority of students, in this discipline at least, appear reluctant to work progressively
throughout the semester unless it is compulsory and “marks” are attached to any work done
during the semester
 more work needs to be done from the first year around orientation and induction of students
into the university culture of learning and university expectations for students’ responsibilities
in relation to their own learning
 more work needs to be done to determine the academic validity of single summative
assessment items, and the most appropriate types of accompanying formative assessment,
in different disciplinary contexts.