Northumbria University
Report of an investigation into assessment practice
at Northumbria University
November 2006
Joanne Smailes
Graeme Arnott
Chris Hall
Pat Gannon-Leary
1. Introduction ......................................................................................................................... 4
2. Methodology ....................................................................................................................... 4
2.1. Quantitative analysis of module descriptors ................................................................... 4
2.2. Utilising the module descriptors...................................................................................... 6
2.3. Qualitative research - Focus groups and interviews ....................................................... 7
3. Findings ............................................................................................................................... 8
3.1. Methods of Assessment ................................................................................................. 8
3.2. Amount of Assessment ................................................................................................ 11
3.3. Distribution of Assessment ........................................................................................... 12
3.3.1 Assessment timing across Schools ............................................................................ 13
3.3.2 Focus group perceptions ............................................................................................ 18
3.4. Frequency of assessment ............................................................................................ 20
3.5. Assessment Criteria and Learning Outcomes .............................................................. 22
3.6. Formative Activities ...................................................................................................... 23
3.7 Feedback ...................................................................................................................... 25
3.7.1 Feedback practice ...................................................................................................... 26
3.7.2. Providing distinguishing feedback ............................................................................. 28
3.7.3. Improving the value of feedback................................................................................ 29
3.8 Student involvement in assessment .............................................................................. 30
4. Conclusion and Recommendations ................................................................................ 33
4.1. Module Descriptors ...................................................................................................... 33
4.2. Methods of Assessment ............................................................................................... 34
4.3. Amount and Frequency of Assessment ........................................................................ 34
4.4. Understanding of Assessment Requirements and Feedback ....................................... 35
4.5. Student Involvement in Assessment ............................................................................ 38
4.6. Point for further consideration ...................................................................................... 39
Bibliography .......................................................................................................................... 40
Appendix 1: Suggested Format for Module Descriptor ..................................................... 44
Appendix 2: Schedule of Interview / Focus Group Questions for Students .................... 50
Appendix 3: Schedule of Interview / Focus Group Questions for Staff ........................... 51
School Acronyms used throughout the report
AS      School of Applied Science
BE      School of Built Environment
CEIS    School of Computing, Engineering & Information Sciences
DES     School of Design
HCES    School of Health, Community & Education Studies
LAW     School of Law
NBS     Northumbria Business School
SASS    School of Arts and Social Sciences
PSS     School of Psychology and Sports Sciences
1. Introduction
Assessment, particularly its role in the process of learning, is a subject of continual
contemplation across all educational sectors. One of the objectives in Northumbria’s
Learning and Teaching Strategy (2003–2006) was a review of the nature and pattern of
assessment following the restructuring of the University's credit framework, which recommended
a shift toward 20 credit modules and/or a minimum module structure of 20, 20, 10 and 10 credits
in a semester.
In addition, although unrelated, at the time of the preliminary investigations and construction of
this report the results of the first two National Student Surveys (2005 and 2006) had been
published. The survey included 22 statements, each rated by students on a scale of 1-5
(Definitely disagree – Definitely agree). In the section 'Assessment and Feedback', the scores
received by Northumbria were 3.4 and 3.6 respectively. Although these indicate student
satisfaction, the scores were lower than expected.
Northumbria University has been recognised for its excellence in assessment practice by the
HEFCE through the establishment of a Centre for Excellence in Teaching and Learning
(CETL) in Assessment for Learning (AfL). The CETL directors (Professors Liz McDowell and
Kay Sambell) led the Impact of Assessment project, the last significant review of practice within
the institution, completed in 1994-6. Twelve years on, their findings have offered a very useful
benchmark for comparison purposes and this review, whilst providing a current overview of
the nature and pattern of assessment and drawing upon some examples of CETL
practice, will also seek to highlight potential improvements in efficiency and effectiveness.
2. Methodology
In the context of such developments, this research project was designed around the
triangulation of data collection methods in order to develop a detailed picture of assessment
practice, and perceptions thereof, at Northumbria University. Data collection methods,
supported by literature review, included a review of assessment as recorded in module
descriptor information, as well as student focus groups, academic staff focus groups, and
individual interviews with staff.
2.1. Quantitative analysis of module descriptors
Initially, a quantitative paper-based investigation of assessment type and timing across all
Schools was undertaken. Using University information systems a total of the number of
active programmes across all Schools was determined. A stratified sample of programmes
was drawn based on School size and ensuring both undergraduate and postgraduate
programmes were included (based on square root of the total number of programmes within a
School). In total, forty-three programmes across ten [1] Schools were selected. The core, i.e.
compulsory, modules from each of these programmes, a total of 604, formed the sample from
which assessment data was extracted.

[1] Subsequently, following the creation of the School of Computing, Engineering and Information Sciences, sample
programmes from the two separate Schools were merged.
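To make the allocation rule concrete, the following Python sketch illustrates how a square-root based stratified sample of this kind could be drawn. The programme counts, names and random seed are hypothetical, and the sketch ignores the undergraduate/postgraduate split; it illustrates the principle rather than the procedure actually followed.

import math
import random

# Hypothetical counts of active programmes per School (illustrative only).
programmes_per_school = {"AS": 30, "NBS": 70, "LAW": 12}

def stratified_sample(programme_counts, seed=1):
    """Sample roughly sqrt(n) programmes from each School's n programmes."""
    rng = random.Random(seed)
    sample = {}
    for school, count in programme_counts.items():
        # Square-root allocation: larger Schools contribute more programmes,
        # but not proportionally more.
        n_to_sample = max(1, round(math.sqrt(count)))
        programmes = [f"{school} programme {i}" for i in range(1, count + 1)]
        sample[school] = rng.sample(programmes, n_to_sample)
    return sample

for school, chosen in stratified_sample(programmes_per_school).items():
    print(school, chosen)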
Module descriptors feed the quality assurance cycle with data against which modules can be
approved for delivery, and subsequently monitored in external examination procedures, and
by internal and external quality assurance audits. Commonly they record the purpose and
scope of a module, its intended learning outcomes, and the means by which this will be
achieved i.e. the syllabus, learning and teaching strategy, student workload, assessment
types and timing.
The important element of the descriptor, for the purposes of this research project, was the
Module Summative Assessment statement which indicates all pieces of assessment
associated with a module, their types, weighting, and submission points in the academic year
(Figure 1).
Fig. 1: Typical Module Summative Assessment information from a Northumbria University
module descriptor
For each programme included in the sample, module descriptors for each core module were
obtained and the Module Summative Assessment information coded or quantified for
incorporation into an Excel spreadsheet for analysis.
The spreadsheet recorded for each module:
- Module code
- The semester basis of the module (Semester 1, Semester 2 or Year Long)
- The number of assignments associated with the module
- Assignment type (categorised according to Figure 2)
- Submission points for the assignments
Fig. 2: Assessment type categories used for analysis

A  Written assignments, coursework etc., incorporating essays, reports, articles
B  Practicals and Labs, incorporating practical tests, lab work, lab reports, logbooks and workbooks
C  Portfolios, includes professional, research and practice portfolios
D  Subject Specific Skill Exercises, for example, digital production, model making
E  Exams, i.e. formal exams, class tests, seen exams, phase tests, MCQs
F  Subject Specific Product Creation, products such as a garment or fashion product, a programme, a video
G  Projects and Dissertations
H  Oral Communication based assessment, incorporating presentations, vivas, speeches, debates, poster presentations
U  Unspecified
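The coding of free-text assessment descriptions into the categories of Figure 2 can be thought of as a simple classification step. The Python sketch below approximates it with keyword matching; the keyword lists and function name are illustrative assumptions and are not the scheme actually used by the investigators, who coded the descriptor information into an Excel spreadsheet.

# Illustrative keyword-based coding of assessment descriptions into the
# categories of Figure 2 (A-H, with U for unspecified descriptions).
CATEGORY_KEYWORDS = {
    "A": ["essay", "report", "article", "written assignment", "coursework"],
    "B": ["practical", "lab", "logbook", "workbook"],
    "C": ["portfolio"],
    "D": ["digital production", "model making", "skill exercise"],
    "E": ["exam", "class test", "phase test", "mcq"],
    "F": ["garment", "fashion product", "video"],
    "G": ["project", "dissertation"],
    "H": ["presentation", "viva", "speech", "debate", "poster"],
}

def code_assessment(description: str) -> str:
    """Return the first matching category code, or 'U' for unspecified."""
    text = description.lower()
    for code, keywords in CATEGORY_KEYWORDS.items():
        if any(keyword in text for keyword in keywords):
            return code
    return "U"

print(code_assessment("Open-book examination"))  # E
print(code_assessment("Critical essay"))         # A
print(code_assessment("Written assignment"))     # A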
2.2. Utilising the module descriptors
The task of utilising information contained in the descriptors was complicated in a variety of
ways. Most significantly, there was great variation in the terminology used by authors to
describe assessment type.
On some occasions additional clarification of what was intended by a description in the
Module Summative Assessment statement was possible from information contained in the
Learning and Teaching Strategy statements. However, this was not consistently available, nor
were the relevant sections always completed.
In order to avoid investigator bias or error, descriptions received the minimum of
interpretation. This, however, resulted in a large number of assessment descriptions ranging
from those that were quite precise, such as 'specification of a product design', through the
more common forms, such as 'open-book examination' or 'critical essay', to the vague, for
example 'written assignment'. Consequently, for analysis purposes the range of assessment
types was condensed into a more concise set of categories that could accommodate these
extremes of detail. These are listed in Figure 2 above.
2.3. Qualitative research - Focus groups and interviews
Themes for discussion in focus groups were identified from a literature review, analysis of the
data gathered in the paper-based exercise, and concerns emerging from the National Student
Survey. Two principal stakeholder groups were selected with whom to discuss these themes,
namely students and academic staff.
The original intention was for members of the student focus groups to originate from the
programmes sampled in the paper-based review. Programme leaders in the University were
emailed, with a request for student representatives who might be approached to take part in
the research. Although there was a good response from the programme leaders, follow-up
messages to students produced a very poor response, so alternative methods of procuring a
sample had to be found.
The Students’ Union was contacted, and asked to email programme representatives
requesting their co-operation, and stressing that this was an opportunity for them to make
their voices heard and influence the assessment process. An incentive of a £10 shopping
voucher was also provided. This had a better yield than the original method, but the response
rate was still disappointing. Two student focus groups were arranged in the Students’ Union
Training Room [2]. Attendance figures were again low and three students were approached in
the Students’ Union building on the day and agreed to take part.
A total of 15 students attended the two focus group sessions (one of 9 students, one of 6); 10
were female and 5 male, and all were undergraduates, ranging from 1st to 4th years.
Although students from all Schools were invited to take part, programmes ultimately
represented were from the subject areas of Drama, Law Exempting Degree, Business and
HR Management, History of Modern Art, Design and Film, Business and Marketing,
Psychology, Business Information Systems, Law, Geography and Marketing, therefore
representing eight of the nine Schools. The majority of the student participants were
programme representatives and therefore were accustomed to attending meetings, and to
presenting the views of others, as well as their own.
As focus group numbers were lower than expected, data from student focus groups from a
complementary research project focussing specifically upon feedback has also been included
where relevant. Students who took part in these focus groups came from the following
Schools: AS, BE, CEIS, DES, LAW, NBS and SASS.
Staff response for focus group participation was also low. One focus group was held but only
a small number of Schools was represented. Therefore this was supplemented by a series of
telephone interviews, to improve cross-university representation.
All focus group meetings were tape-recorded and supported by notes made by a non-participant observer.
[2] Please note there was too low a response from Coach Lane Campus students to justify holding a focus group there.
Students were also unable to attend any of the dates offered at City Campus.
3. Findings
An important aspect of contemporary educational practice is the stress placed upon
assessment not only as an exercise in acknowledging achievement but also as one of the
principal means of supporting learning.
Gibbs and Simpson (2004/5) reinforce this approach in their article reviewing assessment
arrangements, for which they claim ‘it is not about measurement at all – it is about learning.’
This represents the fundamental shift in assessment practice that has begun to take place in
higher education in the last decade.
It is evident that careful assessment design is necessary to ensure that it supports learning
and so that student effort is directed and managed. The number and nature of assessment
tasks, their frequency or infrequency, their scheduling and explanation, are all significant
factors to consider, as they can individually and collectively influence the effectiveness of
assessment. Similarly, carefully constructed approaches to providing feedback in a relevant,
timely and meaningful fashion are vital in supporting learning.
These factors feature in the findings of this survey of assessment practice at Northumbria
University and will be discussed in terms of the assessment methods, assessment load and
distribution of student learning effort, formative activity, feedback practice and the involvement
of students in the assessment process.
3.1. Methods of Assessment
A wide variety of methods is used to assess learning at Northumbria University. The review of
the types of assessment recorded in 604 module descriptors identified 77 different ways in
which assessment was described. For analysis purposes these have been condensed into
nine categories (see section 2.1). Figure 3 illustrates each assessment method as a
proportion of all assessment recorded within the sample.
Fig. 3: Proportional representation of all Assessment Methods used (n=1009)

Written assignments: 37%
Exams: 22%
Portfolios: 10%
Projects and dissertations: 9%
Communication: 8%
Practicals and Labs: 7%
Subject specific skill exercises: 4%
Subject specific product creation: 2%
Unspecified: 1%
The most utilised assessment methods were written assignments (37% of all assessment)
and examinations (22%), although there were significant differences in their employment
across the Schools, as illustrated in Fig. 4.
Fig. 4: Assignment types by School

[Table: for each School (AS, BE, CEIS, DES, HCES, LAW, NBS, SASS and PSS), the percentage of its sampled assessment accounted for by each method: written assignments, exams, practicals, portfolios, subject specific skill exercises, product creation, communication-based assessment and projects. The key values are discussed in the text below.]
Four Schools in the survey employed all of the methods. However, there is a tendency for
Schools to rely quite markedly upon a particular method, or a small range of assessment methods.
For example, BE heavily utilises written assignments (61.7% of its assessment), as does LAW
(68.8%). DES and PSS, on the other hand, rely mostly upon a pair of assessment methods:
DES places emphasis upon written assignments (29%) and projects (33.3%), whilst the
approach in PSS favours written assignments (36.4%) and examinations (21.2%). An issue
offered for further debate is why such differences occur. Is this due to the nature of the
discipline?
In two Schools examinations account for more than one third of the total assessment methods
used. However, this is not necessarily reflected across all programmes within these Schools.
For example, within one programme in NBS examinations form 65% of the assessment
across its core modules whilst another programme has no form of examination.
It is interesting to note students’ reaction to the use of examinations.
“You could go to every single lecture, and still not do well in the exam.”
“Our teacher gave us pointers to use, saying ‘this will come up and that will come up,
so study this’, and it didn’t come up. So students were all complaining about what
happened in the exam… if she’s going to be giving us tips, they should be the right
tips, not misleading us.”
“On our course, a lot of people were questioning how some lecturers gave out essay
questions that were going to be in the exam, and other lecturers didn’t even give topics
to revise …so it would be good if there was some sort of criteria as to how much help
you would be given, as so many people spent time working on stuff that was just not
related to their exam.”
These views are not that dissimilar to those expressed by students in the Impact of
Assessment project (McDowell and Sambell, 1999), a decade earlier, indicating some
consistent and longstanding views:
“Exams here are so pointless, the questions are so precise. You are never going to
need that kind of useless information”
“If you have come from a working background, exams are not a lot of use to
you…They suit people who have come from school, who have been taught to do
exams and nothing else. I’m out of that routine.”
Although strong feelings were expressed, students were not necessarily anti-examination:
concerns were more to do with the weighting placed upon certain assessment methods.
Students expressed a preference for assessments that are not based completely on
examinations. In the feedback focus group SASS students cited friends studying in AS who
had many small assignments in contrast to their:
“one 3,000 word essay …worth 100%. If you don’t do well on that you are screwed.”
They also commented that:
“You need something like that happening all the way through so you have got into the
work”
“If you have four phased miniature exams worth 5% each, at least you get the chance
to break up your feedback.”
Within focus groups, students generally described their experience of assessment as
encompassing a number of methods including presentations, workshops, essays, lab reports,
examinations, maps, and supervised professional practice. Most felt that the range of
methods used was reasonable, although those who were assessed entirely by examination
felt that they would prefer to have a mixture of assessment types.
Staff responses highlighted yet more types of assessment, including an all-day project that
started in the morning and finished in the evening, and a week-long project that was issued
on a Tuesday morning, and completed by Friday afternoon (assessed by presentation to
professionals in the field).
It is clear that, within Northumbria, a wide range of assessment methods is employed and this
in general is appreciated by the students. During the analysis process, assessment was
described in over 70 different ways, but it was clear that these in essence fell into eight main
categories. In this regard an issue is offered for further debate and consideration: Should
module tutors be restricted to using a more limited range of assessment descriptions?
Recommendation:
Within the pedagogic literature, examinations are well known to be anxiety-associated
assessment methods, which in turn are linked with the least desirable (surface) approach to
learning (Ramsden, 2003). It is recognised that examinations are a common requirement for
professional exemptions. However, for programmes where examinations make up more than
40% of the overall assessment it is recommended that a clear rationale is given for this
choice. Additionally, programmes should be encouraged to ensure that there continues to be
a good mix of assessment methods employed and that their use is spread across a
module's or programme's duration.
3.2. Amount of Assessment
The design of assessment has a significant influence upon students’ approaches to it, and
also the time and effort that they devote to the tasks (Brown, 1997; Struyven et al. 2002).
Student effort might be affected by various elements of assessment design that influence
when a student will study, i.e. the number and frequency of assessment tasks and their
hand-in dates, and how a student will study, i.e. the relevance of assessment and their
understanding of the requirements, as Gibbs and Simpson describe in their discussion of
conditions that support good assessment (2004/5).
It is important that students are not overburdened by disproportionate amounts of assessment
in relation to module size. At Northumbria, this is taken into account through the Notional
Student Workload, where there is an expectation that summative assessment activity should
be no more than 20% of the workload hours. Additionally, within the Guidelines for Good
Assessment Practice there is a recommendation that, for a 10 credit module, there should be
a maximum of two pieces of summative assessment. However, as recognised by much of the
CETL: AfL activity, it is not always possible to separate assessment from learning and
teaching, and in fact enforcing a distinction between the two is potentially damaging in
reinforcing assessment purely as a measurement.
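A worked example may make the workload arithmetic concrete. Assuming the usual UK convention of roughly 10 notional learning hours per credit (an assumption; the figure is not stated in this report), the 20% expectation implies the bounds computed in the Python sketch below.

HOURS_PER_CREDIT = 10        # assumed convention: 1 credit is approximately 10 notional hours
MAX_ASSESSMENT_SHARE = 0.20  # summative assessment expected to be no more than 20% of workload

def max_summative_hours(credits: int) -> float:
    """Upper bound on notional hours spent on summative assessment for a module."""
    return credits * HOURS_PER_CREDIT * MAX_ASSESSMENT_SHARE

for credits in (10, 20):
    total_hours = credits * HOURS_PER_CREDIT
    print(f"{credits}-credit module: {total_hours} notional hours, "
          f"at most {max_summative_hours(credits):.0f} on summative assessment")

# Expected output:
# 10-credit module: 100 notional hours, at most 20 on summative assessment
# 20-credit module: 200 notional hours, at most 40 on summative assessment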
The survey results suggest that in fact around one piece of summative assessment is the
norm and can often be the case with modules of larger size. However, as indicated by the
comments in Section 3.1 (p10) this is not necessarily what students want. Figure 5 illustrates
that in all Schools the average amount of assessment for modules of 10 credit size is greater
than one (ranging from 1.03 to 1.93), and for 20 credit modules, where an average of two
pieces of assessment would be expected, the range is wider at 1.00 to 2.52. From the table it
can be seen that the School of Law has, on average, the lowest number of summative pieces
and Applied Sciences the highest.
Fig. 5: Average Number of Summative Tasks on Modules
(average number of pieces of summative assessment per module)

School    10 credit modules (n=264)    20 credit modules (n=273)
AS        1.93                         2.52
BE        1.57                         2.19
CEIS      1.61                         2.25
DES       1.46                         1.31
HCES      1.03                         1.51
LAW       1.06                         1.00
NBS       1.09                         1.75
SASS      1.47                         2.13
PSS       1.58                         2.20
Within the survey sample a number of additional module sizes did occur, e.g. 15 and 30
credits. These occurred in low numbers and their averages were similar to those of the
standard module sizes. In addition, a number of 60 credit modules were included in the
sample and, as might be expected, these were dissertation-based modules with one
associated piece of assessment.
3.3. Distribution of Assessment
Gibbs and Simpson (2004/05) note how the distribution of student learning effort is a crucial
factor in ensuring that assessment supports learning. Analysis of module descriptor
information reveals that, at Northumbria University, the overwhelming trend is for modules
(whether semester based or year long) to be summatively assessed towards the end of their
duration.
This suggests that, although students are afforded a long period over which to be ‘on task’ (a
potentially positive effect), where a large proportion of students’ total assessment is loaded at
the end of semesters, the effect can be negative instead, by placing unintentional stress on
the students and, potentially, resulting in the adoption of surface approaches to learning
(Gibbs 1992).
The pattern of assessment hand in dates for the entire sample of 604 modules shows a
sudden peak in submissions for assessment at the end of semester 1 (weeks 12-15) and
semester 2 (weeks 27-30).
Considering modules by their length confirms this trend, regardless of whether they are
semester based or year long (Figs. 6 and 7).
Fig. 6: Distribution of Assignments on Semester Based Modules
[Bar chart: percentage of assignments by teaching week (1-15), shown separately for Semester 1 and Semester 2 modules.]

Fig. 7: Distribution of Assignments on Year Long Modules
[Bar chart: percentage of assignments by teaching week (2-30).]
3.3.1 Assessment timing across Schools
This pattern is experienced across all Schools although there is some variation in the extent
of end loading. Figure 8 illustrates the proportion of assessment in the sample falling in the
two peak periods, by School. DES, AS and PSS place less reliance upon these periods in
contrast to other Schools, with NBS and HCES scheduling the highest proportions of
assessment in these periods. Some 77% of assessment in the NBS sample was due within
the two three-week periods, whereas only 33% of assessment was scheduled for these periods
in PSS.
Fig. 8: Proportion of Assessment falling at end of the semesters

School    Semester 1, weeks 12-15    Semester 2, weeks 27-30    Total
AS        23%                        24%                        47%
BE        36%                        28%                        64%
CEIS      29%                        36%                        65%
DES       26%                        28%                        54%
HCES      41%                        31%                        72%
LAW       43%                        25%                        68%
NBS       48%                        29%                        77%
SASS      32%                        28%                        60%
PSS       18%                        15%                        33%
This variation across year long and semester based modules is also illustrated, by School, in
Figures 9-11.
Fig. 9: Distribution of Assessment Timing in Year Long Modules, by School (n=202)
[Bar chart: for each School, the proportion of assessment falling in weeks 1-5, 6-10, 11-15, 16-20, 21-25 and 25-30.]
Some Schools, where there is a trend towards end loading module assessment, manage to
modify this in Year Long delivery, for example DES, CEIS and NBS. In the case of LAW
however, the assessment pattern remains much the same regardless.
A member of staff noted in interview that management of assessment workload across
programmes is attempted, but that an important factor affecting the success of such efforts is
the nature of the assessments designed by module tutors:
‘A programme wide view was taken at a recent re-validation but conflicts of
assessments for students on a programme are difficult to avoid given the fact that most
modules have a terminal summative essay or exam.’
Fig. 10: Distribution of Assessment Timing in Semester 1, by School (n=213)
[Bar chart: for each School, the proportion of assessment falling in weeks 1-3, 4-6, 7-9, 9-12 and 13-15.]
Fig. 11: Distribution of Assessment Timing in Semester 2, by School (n=181)
[Bar chart: for each School, the proportion of assessment falling in weeks 1-3, 4-6, 7-9, 9-12 and 13-15.]
Although the data indicate that there is a definite tendency towards end-loaded assessments,
a small proportion of modules within the sample do have early hand-in dates. Figure 12
illustrates for each School the earliest point at which an assessment is formally scheduled. It
would appear that Year Long modules are just as likely to have assessments set as early as
their semester based counterparts.
Fig. 12: Earliest assessment point by School (teaching week number)

School    Semester 1 modules    Semester 2 modules    Year long modules
AS        3                     1                     3
BE        5                     4                     3
CEIS      5                     6                     4
DES       3                     2                     5
HCES      1                     4                     9
LAW       2                     6                     5
NBS       15                    7                     7
SASS      8                     1                     10
PSS       7                     6                     3
Some exemplars, at programme level, further illustrate the extremes of variation in the pattern
of assessment timing. Programme A illustrates a distributed pattern of assessment that
attempts to spread assessment load across two semesters. Although still prone to some end
loading at the cessation of the academic year, study time across the two semesters is well
utilised to allow students to spend more time 'on task'. A number of different assessment
methods were also used across the programme, including written assignments, presentations
and in-class tests (Fig. 13).
Fig. 13: Assessment load in exemplar School A
[Bar chart: number of assessments by teaching week.]
However, examples of two other programmes illustrate an almost purely end-loaded model
across both semesters. (Figs. 14 & 15)
Fig. 14: Assessment load in exemplar School B
[Bar chart: number of assessments by teaching week.]
Fig. 15: Assessment load in exemplar School C
[Bar chart: number of assessments by teaching week.]
In both these cases one or two assessment methods tended to be utilised on the majority of
occasions [3].

[3] Please note the programme examples came from three different Schools.
3.3.2 Focus group perceptions
It was recognised that the data captured from module descriptors may not truly reflect what
actually happens, or indeed the actual student experience and/or perceptions. Therefore the
timing and amount of assessment experienced was investigated further with student and staff
focus groups.
Student perceptions agreed with the statistical data that most assessment is clustered
towards the end of modules, with the final three weeks of module duration being most
commonly used as the time for work to be submitted. Exams were also commonly clustered,
and sometimes coincided with other assignment submissions, which proved very stressful at
certain times of the year.
The comments below clearly indicate January and May as
“flashpoints” with some claiming to have 5 or 6 exams at these points.
‘We have loads of exams in January, and more in May, which is really stressful
because we’ve only done one so far, and if we are doing something wrong we might
fail all of them’.
‘Our coursework is not very well spaced out. We have all our exams in May, and lots
of coursework is due in now (i.e. at the end of semester 1)’
Gibbs and Simpson (2004/5) argue that students distribute their time spent on assessment
unevenly across courses, and agree with Brown who claims that, rather than viewing
assessment as an affirmation of learning, students view it as the goal (Brown 1997).
Gibbs (1992: 10) acknowledges that assessment often becomes the raison d’être for students
and that their energies are channelled into strategic preparation for specific exams, and
question selection.
Implicit in Gibbs and Simpson’s recommendations for improving
assessment practice is the fact that such stress can further lead to students learning
selectively and strategically, for example, through exam question spotting and targeted
revision.
The heavy assessment clustering that seems prevalent at Northumbria University is a cause
for concern both in terms of the stress it might be causing students and because of the
negative effect it might have on the learning strategies and approaches that students have to
adopt in order to cope with its demands.
In addition Gibbs and Simpson observe that assessment tasks should be designed in such a
way as to ensure that students fully utilise their study time. They argue that summative
assessment that occurs at the end of modules encourages students to apply effort at those
points but not across the full available study period.
The scenario is familiar: students leave completion of the assignment until late in the
semester, even though the details were issued at the beginning, thus cramming their effort
into a small proportion of the study time actually available.
Staff acknowledge that ‘bunching’ of assignments occurs, but feel constrained by the system
of modularisation. A staff member from HCES, for example, said that professional courses in
nursing tend to run assessments at similar times, although staff try to plan to spread them.
‘It was easier before (modularisation), but now we have to assess at the end of a
module. We have to teach before we can assess.’
A staff member in PSS agreed that:
‘The ‘bottlenecking’ of assessment submissions has become one of the key
issues/complaints in our student feedback’.
However, despite considerable effort, including the formation of a divisional Assessment
Panel, which reviews all assessment across programmes, the staff member felt that:
‘the reality is that if the final assessment on a module fell in week 8, there would be no
attendance afterwards, and teaching time would be lost.’
It was agreed that Year-Long modules give greater flexibility when it comes to the timing of
assignments, and a staff member in CEIS said that it was common practice for module tutors
on Year-Long modules to deliberately avoid setting work at times when semester-long
modules were being assessed.
Interestingly, in LAW, where modularisation is not practised, assignments still tend to be
clustered, as a deliberate strategy by staff.
A staff member explained that,
‘yes, assessments tend to be ‘bunched’; but this was deliberate. We tried spreading
them out, but it felt like the students always had something on the horizon. It affected
attendance, as they would miss lectures and seminars (and therefore material they’d
need to pass exams) so that they could work on assignments.’
On a positive note, it was clear from student comments that in some cases, assessment
practice was amended when experience/feedback had indicated previous practice to be
ineffective. For example, students from a programme located in SASS complained that there
were five essays due in after the Christmas break. Their concerns were taken on board, and
assignments had subsequently been spread.
On another programme in CEIS,
‘students were tested in-class 5 weeks into their first year, and everyone failed’.
Again, in reaction to concerns, the schedule was changed.
Students in the focus groups indicated appreciation of efforts to respond to their concerns.
Recommendation:
Evidence overwhelmingly illustrates that there are two clear points when there is a large
summative assessment load placed on students. It is clear that this is directly correlated to
“recommendations” relating to the number of pieces of assignment work per 10 credit module.
It is important that students should not be overburdened by assessment and therefore we
recommend that the limit on the proportion of Notional Student Workload (NSW) relating to
summative assessment should remain. However, it is recommended that the advice relating to
the number of pieces of assessment should be withdrawn, whilst care is taken to ensure that
the number of summative pieces of work is not excessive. It is also recommended that single
assessment formats, e.g. portfolios and essays, are adapted to incorporate a number of
stages to allow for more active engagement and formative feedback opportunities. If there
are concerns about attendance in relation to assessment during a module's duration, these
should be addressed by other means or incorporated in some form into the assessment
process.
3.4. Frequency of assessment
Frequent assessment is also recommended by Gibbs and Simpson (2004/5) as a means of
ensuring that students fully utilise their study time. It has also been demonstrated that subject
areas that have less frequent assessed tasks also have students who study fewer hours (Vos
1991 in Gibbs & Simpson 2004/5).
Whilst expectations may vary between individual members of academic staff at Northumbria
University, students are generally expected to spend 1-2 hours on learning outside the
classroom for every hour spent in class. It is critical that this out-of-class learning takes place,
as research indicates a strong relationship between student time on task and their levels of
achievement (Berliner 1984), and Chevins's (2005) research supports the contention that
frequent assessment results in more learning.
Trotter (2006), reporting on the effect of “continuous” assessment on the behaviour and
learning environment of students studying on a module within an undergraduate degree at a
UK university, concludes that, while continuous assessment may be time-consuming to
administer, the rewards of an enhanced learning environment for students outweigh the
additional burden on staff.
Analysis of module descriptor information reveals that, at Northumbria University however,
only a few subject areas make significant use of frequent assessment tasks. It should be
noted at this point that no conclusions are drawn regarding the frequency of purely formative
assessment activities, as this information is not recorded in module descriptors.
Students also described wide variation in assessment frequency across the represented programmes. One student
recalled a module where an essay was set every three weeks, with an examination in January
(the student was happy with this arrangement), whereas another student reported that their
assessment consisted only of examinations (with no way of monitoring progress until the
results were released).
Similarly, comments were made about the fact that, on some programmes, the first
assessment happens in January and, before then, students have no idea of how well they are
doing. They would prefer to have mini assessments earlier in the year.
However, it was acknowledged that hand-in dates at the end of the semester were necessary,
as assignments had to be based on what had been taught.
‘I understand why the work is handed in, in December, as we wouldn’t know how to do
it before then.’
Staff admitted that clustering of assignments took place and that this was inconvenient for
students. It was felt that this was an inevitable result of the modular structure of programmes.
However, staff noted that the most common practice was to give out assignments well in
advance of submission dates, so students had the opportunity to manage their time to avoid
the worst effects of clustering. In the student focus groups, students had stated that, no matter
what the assignment arrangements, they would prefer details of all assignments for the
semester to be given out at the start.
Decisions about assessment design, type and timing can all seriously affect student learning
styles and behaviour but, to an extent, it seems that the practicalities of semesterisation and
the impetus to keep assessment load down for students have created a situation where
infrequent and end of module/semester assessment is the norm.
There were indications that a couple of Schools apply the principle of small frequent
assessments (e.g. AS – usually related to labs and practical sessions, BE – although to a
lesser extent). It is recognised that small frequent assessment setting may lend itself more
easily to subjects that are technical/practical or skills-based in nature. However, it is critical
that student time on task is maximised as research indicates a strong relationship between
this and student achievement (Berliner 1984).
Recommendation:
It is recommended that, where practical, modules utilise the concept of frequent but small
assessment tasks. Where this is less practical, frequency can be introduced by breaking down
an assignment into smaller tasks or waypoints, spread across the learning period.
3.5. Assessment Criteria and Learning Outcomes
Brown (1997) explains that students will apply effort to an assessment task in proportion to
their feelings about its relevance i.e. how many marks it carries, how well they think they can
do etc. Such relevance can be indicated by the design of assessment to clearly respond to
learning outcomes. Indeed, at best, it might help to indicate to students the important topics
and aspects of a module.
Race (1994) notes how there can be a mismatch between what lecturers assess and what is
to be learnt (albeit effectively communicated). Similarly, Rust (2002:147) suggests that although
modules may now be written with a number of articulated learning outcomes ‘the assessment
task or tasks have remained the same and the linkage between the outcomes and the
coursework essay, exam or whatever is tenuous at best, and almost always implicit.’ The
starting point for aligning student learning to learning outcomes is to align assessment, and to
make this explicit at all levels of documentation including the module descriptor.
It is difficult to draw conclusions from module descriptors as to how well assignments align
with learning outcomes as the level of detail necessary to reveal this does not form part of the
descriptor.
However, student responses in focus groups indicated ambiguity, at least in their eyes,
between marking schemes, the assignment, and its relation to the module learning outcomes.
At one end of the scale, one participant felt that students were not made aware of them at all,
and felt that:
‘we were just left to get on with it.’
The same student said that there had been no information about what to revise for
forthcoming examinations, or about what to expect.
“Sometimes you don’t really know what you’re aiming for – you don’t know how to
achieve.”
At the other end of the scale, another participant said that they were given a teaching and
learning plan at the beginning of each year outlining everything that would happen, and
including learning outcomes. A third student said that they were given handouts at the
beginning of each lecture, relating it to the learning outcomes of the programme. Most student
experience, however, was somewhere in-between: a number tended to know the learning
outcomes for their programmes, but not for individual assignments and, commonly, students
were given a list of marking criteria along with the assignments.
It is noted within the professional literature that providing explicit criteria alone does not
produce better performances. Rust (2002) suggests that it is necessary, at the very least, to
spend time discussing the criteria with students. The overwhelming feeling amongst students
was that more instruction should be given about assignments. Comments were made that
students are told ‘It’s on Blackboard – off you go’, and that they were expected to research
the area without guidelines. One participant said that students on their programme were
assessed by portfolio, but that they weren’t clear about what should be in it. They hadn’t
realised that a particular piece of work done early in the year was intended to be included,
and it was missed out. One suggestion made was
“I’d like to go through a model answer and my answer with the lecturer. How it should
be set out, how it should progress. How to get marks here and there. A walk through”
Other students reported on perceived good practice:
- assignment loaded onto e-learning portal then discussed in a workshop, so that students had time to absorb what was required of them and come up with any questions they might need to ask
- lecture given before an essay assignment, which covered the subject background
There is certainly room for improvement in the articulation of criteria, in making students
aware of these, and in helping them to understand them. As deliberate and explicit as academic
staff might imagine them to be, criteria need emphasis and explanation, otherwise they can
be, at best, useless and, at worst, confusing and open to misinterpretation by students (Penny
and Grover 1996). However, caution is necessary. Students indicate that some assignment
briefs give too much information while others give too little, suggesting that the solution to the
problem is not simply one of providing more information, but may be more dependent on
providing more effective information.
Recommendation:
There seems to be evidence of miscommunication in relation to assessment and learning
outcomes. In a number of modules it would appear that the use of Assessment Briefs
(particularly those electronically based) can go some way to alleviating these difficulties. It can
be argued that learning a subject also requires learning about its standards and criteria; it is
therefore legitimate to consider using contact time to expand upon the assessment
requirements in detail.
3.6. Formative Activities
Module descriptor analysis suggests that it is not general practice for module tutors to
explicitly outline formative assessment. In many descriptors there was no mention of
formative assessment at all. Some descriptors included a quantification of formative
assessment, but no details. Only on rare occasions was formative assessment explicitly
included within the section dealing with teaching and learning strategy.
However, a member of staff explained in interview that:
‘A glance at the module descriptors and other paperwork would suggest that the
emphasis is towards summative assessment, but this does not reflect the true situation
... ‘
The term ‘formative assessment’ was not immediately recognised by all student participants.
Once it was explained however, it turned out to be widely used in the programmes
represented by students in the focus groups.
In general there were mixed responses on how effectively formative assessment was being
used. For example, students on one programme reported how they had formally asked for
more formative assessment, as the programme was assessed by examination; but only one
module tutor had acted on their request. In a second example students mentioned some use
of the quiz features on the e-learning portal for formative activity and this was a popular
format. However, it appeared that the feedback elements were underutilised and students felt
that the current status of being provided with a raw ‘score’ of performance didn’t help and
could be frustrating when people did not understand why their answers were wrong. There is
no doubt that the utilisation of the feedback options available through the e-learning portal quiz
features could assist students' understanding. In a third instance, there appeared to be
evidence of a staff-student mismatch in the perception of the use of formative assessment:
students claimed there was a complete lack of formative assessment, whereas staff from the
same School reported in their focus group discussions that diagnostic work was set
throughout the first year, was handed in, marked, and would be given back in seminars if
students attended.
Discrepancies between the experiences of students on programmes of considerably different
sizes were notable. Predictably, those on programmes with smaller numbers of students
reported experiencing more individual attention; examples include:

- staff taking in work in progress and giving feedback, which other participants felt would be very helpful
- formal individual tutorials with staff when handing back every essay
- computer-based multiple choice questions annotated with written feedback, with students then encouraged to try them again in the light of the feedback
In all cases participants acknowledged that this was a luxury that was afforded because they
were on programmes with small numbers; and those on bigger programmes accepted that,
whereas they would very much appreciate such measures, they would not be feasible in their
areas. Gibbs and Simpson (2004/5:19) acknowledge that resource pressures and the
decreasing length of modules have affected the ability to provide timely feedback.
However, it is also worth noting that the CETL assessment for learning principles
(http://northumbria.ac.uk/cetl_afl/afl/?view=Standard) can also be utilised for larger cohorts as
exemplified by the following example:
A core first year generic skills module is taught to a cohort of around 360 students, where
contact time is limited to five hours. In the past this module was recognised as a “problem”
module which experienced minimal student engagement. Through the application of AfL
principles the module was redesigned to make full use of the virtual learning environment
(VLE) for delivering both curriculum materials and for assessment management. Students
are asked to complete a short draft essay on one of the topics covered (plagiarism) and post
this on the VLE for critical comment by others. Students from each seminar group were then
asked to select the “best” final essay from their group and this shortlist was then judged by a
panel of academic staff and the winner awarded a prize of £50 book tokens donated by the
CETL. Feedback from the students has been very positive and it is recognised that small
changes in the assessment practice have improved the module’s effectiveness and efficiency
substantially.
There was unanimous agreement in the staff focus group that formative assessment and
feedback was a good thing in theory; but staff were equally unanimous in agreement that they
felt very few students would, in practice, do work that didn’t attract a mark.
An interviewee noted that:
‘If a task is presented explicitly as a piece of formative assessment then the students
are reluctant to engage with it. If it is more subtly presented (as part of a seminar for
example) then take up is better.’
Another felt strongly that:
‘Formative assessment does not motivate students significantly. For example, if
feedback is given on formative task and then students are given the opportunity to resubmit, this is rarely taken up. Summative assessment does motivate students.’
Staff feelings were generally influenced by personal experience to date; some staff described
their attempts to use formative assessment (e.g. as guidance for students, examination
advice) and reported that the student response to this had been poor.
This has some resonance with the position of Fritz et al. (2000), who report that receiving
feedback is a relatively passive activity and, consequently, if a student repeats an
assessment task they are likely, regardless of the feedback they receive, to repeat what they
did in the first place. The focus then, is much more upon completing the task and Fritz et. al.
point out that it is the emotional and psychological investment in producing the piece of work
to be assessed that has the strongest effect upon a student.
Students unanimously felt that formative assessment and feedback would help the learning
process, and suggestions for how it could be used to help included giving more information
with online questionnaires, having staff reviewing work in progress, and each student having
an allocated tutor for each subject who would be available to discuss this subject with those
who needed help while producing assignments (similar to a dissertation supervisor). This
latter arrangement, as stated in the previous section, is already offered in many cases on an
informal basis, but students generally do not take advantage of it. This may suggest that a formal
system would be preferable. The students did recognise that this would require a lot of staff
time. It was also suggested that formative feedback should be used to mitigate poor exam
results and to reduce the impact of poor performance caused by nerves, so that if a student
got a poor mark in an exam, the formative feedback could be checked to give a more
accurate picture of that student’s abilities.
Recommendation:
In order to encourage students to view assessment as a primarily learning orientated activity,
a redesign of the module descriptors is recommended. A new suggested format is supplied in
Appendix 1 for consideration.
3.7 Feedback
Within the National Student Survey, the lowest scores occurred in the three questions relating
to feedback (in both years). One of the 'conditions under which assessment supports learning'
emphasised by Gibbs and Simpson (2004/5, p14) is that “Sufficient feedback is provided, both
often enough and in enough detail.” They contend that one piece of detailed feedback on an
essay/task at the end of a semester is unlikely to support learning across a whole course.
Similarly, Cross (1996) states that excellence in student learning is promoted by three
conditions, one of which is the rapid availability of feedback.
Students in focus group sessions had mixed opinions about the value of feedback, and it was
clear there were some fixed perceptions of its intention. Some recognised that:
‘[it] tells you where you went wrong and you can use that information in future
assignments’.
Whereas others saw feedback as pointless:
‘you can’t change your mark and it’s unlikely that you will be able to use feedback from
one module to help another.’
3.7.1 Feedback practice
Some students reported that they did not receive feedback other than a raw mark, with no
indication of how it had been achieved, and no work returned. A case was mentioned in
which a peer of a participant had failed a module, but had no idea why, or how to improve
things in a resit. The programme was assessed entirely by examination and no feedback had
ever been given.
On programmes predominantly assessed by examination, some students explained that they
had virtually no feedback over the whole of their programme. One student said that she
thought it was possible to email a tutor to ask for details, but that it wasn’t customary for
students to do so.
In most cases, feedback did accompany marks and these were provided to students 3 to 6
weeks after submission of work for marking – something that was reinforced by the staff focus
group.
The University recommends that feedback is provided to students within four weeks of an
assignment hand-in date (in the case of examinations there is no formal feedback
requirement). However, although it was evident that many received feedback within this
period, they were clearly forming their own expectations of appropriate response time, as
illustrated by the following example:
One student related an incident about a lecturer who failed to return work 'on time' and who had
'freely admitted that she'd been watching TV instead of marking'. On further explanation,
however, it turned out that the usual return time was two days and in this case it became
9 days, well within University standards, yet implicit expectations resulted in this being
perceived as unacceptable.
There were also clear mismatches between staff and student views on the use of feedback once it had been received.
The general feeling amongst staff was that time spent on giving detailed feedback was often
wasted as students were only really interested in the mark. Examples of drawers full of
uncollected assignments were cited.
A student also expressed this sentiment:
“Once you have had your mark, that is it. A lot of people would not be bothered about a
model answer. They would probably think, well, I have already done the work I am not
going to be able to do it again.”
Additionally mismatches between student and staff views can be illustrated by an example
taken from LAW: students in the focus groups felt that they got very little feedback, and didn’t
feel that they knew how well they were doing before they got results of examinations. Yet in
the staff focus group a staff member from the same School reported on the mixed take-up of
feedback amongst students, ‘to the despair of staff’. As explained, common practice within
the School was to utilise a lecture to go over an assignment (outlining where the best marks
came from, and where marks were lost etc). Students were informed that these sessions
were generic and that students should arrange individual feedback tutorials with staff. It was
reported that many do not make any such appointments.
Staff members from other Schools also noted the lack of engagement on the part of the
students. A staff member in PSS noted:
‘Wherever possible, we have feedback sessions, at which students get their work
back, and the module leader provides verbal (and possibly written) feedback. These
sessions tend not to be well attended. Failing that, students can collect their work from
module leaders. I sit surrounded by piles of uncollected work (both mine and my
colleague’s)!!’
Rust (2000, 2002) offers a potential explanation for this. He claims that marks awarded for
assessment, and the associated feedback, do not in themselves mean very much to students, and
questions the entire established system of grading and degree classification. Some students,
he claims, are not so much concerned about what has been learned and their own strengths
and weaknesses, as with getting a better mark than one or other of their peers.
Another possibility is that deficiencies in any of the three important aspects of feedback indicated
by Gibbs and Simpson (2004/5), namely its quantity, detail or frequency, could render it less
valuable to students and this might result in their lack of concern for (or attention to) it. In addition to
evidence of weak practice in the timing of feedback, there were also indications of deficient
practice in some of these other facets too.
Where feedback was provided within the three-week period, some students explained that
they received carbon copies of handwritten feedback sheets, which weren’t always legible.
On some programmes feedback took the form of ticked boxes indicating where criteria had
been achieved/fully achieved etc. and student participants felt that in this case, more
explanation was necessary.
These experiences mirror those articulated in the focus groups investigating feedback. It is clear that students felt that, whilst academic staff do provide feedback, it is not on the whole sufficiently detailed for students to make use of it.
It was evident from student comments in the focus groups that part of their positive/negative perception of feedback was due to its perceived usefulness in improving marks. Gibbs and
Simpson (2004/5:18) state that ‘feedback should be timely in that it is received by students
while it still matters to them and in time for them to pay attention to further learning or receive
further assistance.’
In the focus groups concentrating on feedback, students themselves made some interesting suggestions on possible good practice:
One focus group participant suggested the use of a feedback sheet:
‘have a bit ‘areas for improvement’ and then even bullet points of areas you could
focus on. Then maybe areas you did well, just to keep achieving. Sometimes they point
the obvious out in feedback, e.g. ‘paragraph two was extensive’. You know it was, you
wrote it yourself! It is things that you don’t know that you want them to point out!’
There was also a suggestion that more of the marking be conducted by postgraduate students, in order to ameliorate the staff workload.
Recommendation:
Feedback is clearly one of the main areas in which students feel enhancements can be made.
Examination feedback, in particular, is one area worthy of improvement. However, it is recognised that this will have implications for staff workload, and the following recommendation should take this fully into account.
It is recommended that, at the least, individual feedback on their performance should be given to those students who have failed an examination. Consideration should also be given to extending this to those students gaining a minimal pass mark. For all other students, generic group feedback on performance is recommended, with the VLE suggested as the most appropriate place for this to be disseminated.
It is also recommended that contact time be utilised to explain fully the feedback received and how this can be transferred from assignment to assignment and from module to module; PDP and guidance tutors may have an important role in facilitating this. For example, students could be asked, in advance of guidance sessions, to present all feedback received on their programme for discussion with their tutor.
3.7.2. Providing distinguishing feedback
It is important that assessment feedback is written so that it relates to the purpose of the assignment. For example, if an assignment has been designed to promote generic skills, then feedback should refer to how these skills have been developed or could be further developed. In addition to the purpose of an assignment being clear, feedback should be clearly related to that purpose (Gibbs and Simpson, 2004/5:19).
Gibbs and Simpson also point out that students need to understand why they have received
the grade or mark they have, and why they didn’t get a lower or higher grade. However,
although some students in the survey explained that they were given an outline paragraph
with each assignment explaining the criteria, others felt that the subject was never discussed.
In the focus groups students made mention of “the magic line of 60% [or a 2:1]” and indeed seemed to be more focussed on how to get the marks to “pass this magic line”, and hence on the grading criteria. Some participants were even unaware of how mark-bands
related to degree classification.
Others agreed that (at least in the early years of their programmes) they had no idea what the
marks they got really meant in terms of degree classification, and their main yardstick was
how well they did in comparison with peers.
Students felt that generic guidelines, such as those in handbooks, were very difficult to apply
to a range of different types of assessment, and that more guidance in how to achieve good
marks in the assignments themselves would be useful. Some examples of how to achieve a
first class mark, an upper second and so on would be very well received.
When this was discussed with staff, they felt that it was difficult to find the right way to explain the difference between, for instance, a 2:1 essay and a 2:2. The criteria for assessment were always available – whether in the student handbook or on assignment briefs – but it was acknowledged that often the answer the students wanted could be given only in feedback after the event, i.e. when they had succeeded or failed to meet the criteria. One member of staff said, in answer to questions from students about how to improve marks on completion of an assignment, that it is personal interest and effort that make the difference between a good and a bad essay, and that staff were looking for evidence, thoughts and ideas.
In the student focus groups it was clear that their desire was a checklist of criteria that would
result in a 2:1, so that they would know what they were aiming for – a general list would be
helpful, but lists specific to assignments would be better, as it can be difficult to apply general
rules to specific assignments.
Staff, however, felt that this would be difficult to provide without doing the work for the students, and accepted that more specific guidance was given for the dissertation at the end of the programme than for previous work. Nevertheless, it is suggested that the use by tutors of exemplars illustrating the different standards of work may help to alleviate this student concern.
At this time, Professor Burgess, on behalf of Universities UK, is chairing a number of groups
who are considering alternatives to the current degree classification. Therefore this national
initiative is likely to address (or at least change) student focus on the “2.1 magic line” and,
because of these potential changes, no specific recommendations are made at this point.
3.7.3. Improving the value of feedback
Carless (2006) points out that the challenge in increasing the regularity of feedback is time and workload. When lecturers are overburdened with responsibilities, they may perceive formative assessment/feedback as something of a luxury. However, Carless cites the work of Lambert & Lines (2000), which points out the inextricable link between formative assessment and good teaching, meaning that staff who say they have no time for it are, in effect, saying they do not have time to teach effectively!
Students offered a range of suggestions for improving feedback. A common suggestion was that it would be better if some (i.e. formative) feedback could be given before the final work for the module was submitted; again, the feeling was that feedback cannot be used to help with subsequent assignments because it is not perceived as being related to the next piece of learning. Students also tended to view feedback as information given at the end of an assessment and did not recognise formative assessment and feedback.
The idea of getting feedback in a tutorial was considered by students to be a good thing but it
was recognised that this was not always possible in areas where there were high numbers of
students. This system was also held as an ideal by staff who explained that they would love
to have the luxury of being able to give feedback in this manner.
It is recognised that a feedback tutorial system would impact significantly on contact time, and that a specific recommendation in this regard would work against the aim of seeking efficiencies in the assessment process. Therefore the introduction of a formal feedback tutorial system, similar to the guidance tutorial but conducted by module tutors, is offered as a consideration for further exploration at both School and University level.
3.8 Student involvement in assessment
The positive impact of involving students in the assessment process is frequently reported. Wondrak’s (1993) investigation of health care students, along with research by Orpen (1982) with political science and psychology students, demonstrates that students and their peers are capable of being very reliable assessors of their own work, especially when they have been involved in the development of the assessment criteria.
Wondrak (1993), for example, discovered statistically significant instances of agreement between student and peer perceptions and tutor marks. Similarly, Segers and Dochy (2001) identified a significant interrelation between peer and tutor scores in their investigation.
There are clear examples of peer assessment at Northumbria - the success of which can be
mixed. One staff interviewee noted that:
‘Students can be very involved with the assessment process; the following example is
taken from a level 4 introductory module. Students work in a group and distribute the
marks for a task between themselves after a discussion of their relative contributions.
Students often accept a low mark if the group consensus is that they deserve it. This
process is valued by the students and is regarded as a success’
However, students have also indicated that they are not comfortable with peer assessment involving judgement, particularly feeling that they are incapable of assessing each other in a fair way (Segers & Dochy, 2001). McDowell (1996:161), as part of the Impact of Assessment project, explained that at Northumbria University,
‘Some students believed that there were opportunities for cheating in self- and peer marking or that there were cases where it was not taken seriously’.
Similar expressions of mistrust were expressed by student focus group participants in the
current study:
‘In the first year we had a class in which students had to fill out a form marking your
presentation, and there would be some opportunities to give constructive criticism, but
generally it was just ticking boxes with ‘good’, because people didn’t want to be
horrible to their fellow students.’
‘We’ve done a number of presentations where your group of 3 or 4 is asked to peer
evaluate you…I think that the danger with doing a peer evaluation is that you may find
that you may have a group of 4, where 2 dislike the other 2, or 1 dislikes the other 3…
and then it can affect the marks too much… ‘
Currently, student experience of involvement in assessment at Northumbria is not viewed in a
very positive light. Staff were also not generally in favour of peer assessment, as they
accepted that students found this a difficult process. One staff comment, typical of many, was,
‘we tend to ‘shy away’ from it – our perception (if I can generalise) is that it is more
problematic than beneficial!!’
Another explained that:
‘Students are cynical with regard to peer assessment, since they are anxious that
summative marks should only be assigned by the tutor.’
A number of anecdotal examples were also given by both staff and students.
In one instance peer assessment was conducted by allocating marks from one to ten. Students felt marks ended up being clustered, as any mark above 7 or under 5 had to be explained to tutors; most marks therefore stayed within that range, and the exercise was felt to be ineffective.
Staff also reported other instances of ‘bunching’ and, in the case of teamwork, a tendency to give a team member a lower mark than the others only if they had done absolutely nothing towards the work.
Barfield (2003) comments that the less group assessment experience a student has, the more likely they are to agree that everyone in the group deserves the same group mark. One such example was reported where students had tried to give each other the same mark but were told, through staff intervention, that this represented collusion and that they should repeat the exercise and ensure the marks were unequally distributed.
Cassidy (2006) also found that students had concerns relating to their capability to assess peers and to the responsibility this involves. The study found that, whilst students would accept peer assessment as an element of their course, its introduction should focus on the development of evaluative skills, place an emphasis on learning rather than assessment, and provide support to alleviate an onerous sense of responsibility.
Boud et al. (1999:421) suggest that asking students to formally assess each other within the
context of a group project can lead to lack of cooperation, as we then ‘implicitly or explicitly pit
one person against another’. Keppell et al (2006) argue that we are sending students
inappropriate messages when we ask them to cooperate in a group and then ask them to
formally assess the contribution of each individual member within the group. Keppell et al (2006) itemise Eisen’s (1999) six qualities of peer partnerships - voluntary involvement, trust, non-hierarchical status, duration and intensity of the partnership leading to closeness, mutuality and authenticity - characteristics which also have relevance for peer assessment.
However, Pope (2005) reports that, from his research, it would appear that being subjected to
self- and peer assessment, while acknowledged to be stressful, leads to improved student
performance in summative tasks, indicating that seeking ways of improving student
perceptions of peer assessment is worth pursuing.
Smith et al (2002) report on an action research project to evaluate an intervention designed to
increase students' confidence in undergraduate peer assessment of posters. The intervention
set out to maximize the benefits of peer assessment to student learning by explicitly
developing and working with marking criteria, and improving the fairness and consistency of
students' marking through a trial marking exercise. Evidence from qualitative evaluation questionnaires suggested that students' initial resistance to peer assessment was transformed by their participation in these processes.
Liu and Carless (2006) draw on relevant literature to argue that the dominance of peer
assessment processes using marks can undermine the potential of peer feedback for
improving student learning. They recommend strategies for promoting peer feedback through engaging students with criteria and embedding peer involvement within normal course processes. Such an approach was taken by Bloxham and West (2004), who observe
how difficult it is for students to become absorbed into the assessment culture of their
disciplines, recognising that providing written criteria is insufficient to make this tacit
'knowledge' transparent to students. Bloxham and West report on an exercise where students used assessment criteria to mark their peers’ work, coupled with an assessment of
their peer marking and feedback comments. Analysis of the resultant data indicated
considerable benefits for the students in terms of use of criteria, awareness of their
achievements and ability to understand assessment feedback.
Price et al. (2001) also claim that significant improvement in students’ performance can be realised by involving them in a marking exercise where they are expected to apply the criteria, against which their own work will be assessed, to a number of other pieces of work.
One member of staff who was interviewed described a similarly innovative approach that they had adopted (analogous to an experiment conducted by Forbes and Spence, 1991) to enable
student understanding of assessment criteria:
‘Particular methods include peer assessment of previous exam and assignment
questions using the marking scheme’
Such discussion and explanation of the criteria with students not only improves understanding
of the process, and how to respond to it, but can also improve its relevance to the learner.
Booth's (1993) findings, based on an investigation involving history students, indicate that
many students are very keen to be more involved in the assessment process, and he warns that ‘No involvement leads to lack of interest' (Booth, 1993:234).
Recommendation:
It is recognised that peer assessment is not straightforward, but there are obvious potential benefits in its use for improving student engagement in learning. It would appear from much of the literature above that, in order to build up student trust, elements of peer assessment should be introduced in formative assessment activity; this could also have the potential benefit of reducing the staff time required to provide formative feedback. It is also recommended that consideration be given to the use of previous, anonymised examples of student work within the assessment process to assist student understanding of assessment criteria.
The CETL are aware of the many difficulties associated with any form of group work and marking, whether this involves marking by peers or not. In collaboration with a visiting CETL Associate from Northampton University, CETL have recently embarked on a project researching the influence of group work on learning. It is anticipated that this collaboration will yield further recommendations as it progresses.
4. Conclusion and Recommendations
At Northumbria University, there is substantial evidence that students will encounter a number
of different methods of assessment during their programmes of study and this is to be
applauded.
Students show great awareness of some of the pressures placed upon academic staff in respect of increasing participation rates in Higher Education. However, students did
demonstrate varying degrees of experience and satisfaction in a number of facets which
make up an assessment process: formative assessment, the marking process, the relationship between assessment and learning outcomes and, most crucially, feedback practice and its
role in improving student learning. The latter point is already well documented through the
National Student Survey results in this area. A response to the first National Student Survey
was prepared by the Centre for Excellence in Teaching and Learning and the
recommendations made in this response will be reiterated here in 4.6.
4.1. Module Descriptors
It has been difficult to utilise module descriptor information fully, because of varying
approaches to their completion. The levels of detail recorded in module descriptors vary
amongst tutors, and, in addition, a variation in the use of terminology also makes them difficult
to compare. In some cases, incomplete descriptors were encountered, further complicating
the analysis. For similar analyses, for other projects, and indeed for its normal purposes, the Academic Programme Database will continue to serve only a limited role unless its data is improved. The University should review the accuracy and completeness of the information maintained in the database, and further clarify for module tutors the appropriate way in which descriptors should be completed. Review procedures should be capable of ensuring that quality information enters the database and that it is amended when appropriate.
Module guides presented to students contain much of the richer information relating to assessment, and it is felt that this should be extended to module descriptors.
Therefore, it is recommended that module descriptors be amended in order to place
emphasis on “assessment as a process” rather than “assessment as a product”. For example
it is suggested that distinctions between formative and summative assessment should be
avoided in any student information. A timeline could be utilised to describe all forms of “assessment” activity, with indications of when students will be given a “graded summary” and/or when to expect formal feedback. It is recognised that there may be no “simple” way to
record all module activity accurately but an alternative module descriptor is offered for
consideration in Appendix 1.
4.2. Methods of Assessment
The survey illustrated variety across Schools in terms of the types of assessment used. In some Schools, however, summative assessment was concentrated on only one or two methods. It is recognised that, in formative practice, a broader range of methods is very likely to be employed, but Schools may wish to review their assessment practice in this respect to ensure that a wide variety of methods is employed which allows students to develop, as well as demonstrate, their learning.
Within the pedagogic literature, examinations are well known to be anxiety-associated assessment methods which, in turn, are linked with the least desirable (surface) approach to learning (Ramsden, 2003). It is recognised that examinations are a common requirement for professional exemptions. However, for programmes where examinations make up more than 40% of the overall assessment, it is recommended that a clear rationale is given for this choice. Additionally, programmes should be encouraged to ensure that there continues to be a good mix of assessment methods employed and that their use is spread across a module's or programme's duration.
4.3. Amount and Frequency of Assessment
The average number of pieces of summative assessment was found to be around 1.03 per 10
credits. However, perhaps as an inevitable consequence of the amount of assessment, the
data demonstrate an overwhelming tendency at Northumbria University to schedule
assignments towards the end of assessment periods. Students find this situation stressful. In part they recognise this as a consequence of their own lack of appropriate time management, but it was also suggested that it could be due to assessment design. It is important that students should not be overburdened by summative assessment and we therefore recommend that the current proportion of Notional Student Workload relating to summative assessment should remain. However, it is recommended that the advice relating to the number of pieces of assessment should be withdrawn. Where practical, module tutors are advised to utilise the concept of frequent but small assessment tasks. Where this is less practical, frequency can be introduced by breaking an assignment down into smaller tasks or waypoints spread across the learning period. Breaking assessment down into smaller chunks gives staff additional opportunities to direct students on task over longer periods of time, and to enhance the student learning experience with feedback that students feel they can digest and utilise in direct relation to the module learning objectives.
Schools should consider their practice in this respect and encourage module tutors to review
their assessment strategies and timing. Tackling this at the programme level may also be
advisable as this is how the student encounters the collective effect of individual module
strategies. If there are concerns about attendance in relation to assessment, these should be addressed by other means or incorporated in some form into the assessment process.
4.4. Understanding of Assessment Requirements and Feedback
It is in these areas that there appears to be the greatest divergence of views between staff intention and student perception. Some students felt they were “just left to get on with it”, whereas others recognised the ways in which they were fully inducted and given a number of confidence-building opportunities throughout the process. In a number of modules it would appear that the use of Assessment Briefs (particularly those electronically based) can go some way to explaining assessment requirements fully and illustrating how these relate to the learning outcomes of a module. Therefore it is recommended that assignment briefs are used as standard University practice. In addition, where time permits, consideration should be given to using contact time to explain and expand upon the assessment requirements.
As expected, it is within the area of feedback that both students and staff expressed the greatest concerns. Students’ perceptions of feedback generally relate to it being a recognition of achievement, and this may go some way to explaining the frustration expressed by many staff who reported multiple pieces of uncollected assessed work. However, it is also clear that other elements affect feedback effectiveness. Students expect to receive feedback in a “timely” manner, yet pressures of marking time, the nature of assignments themselves and the end-loading of summative work are all factors which influence the time within which feedback can be delivered. It was only through probing and discussion that students began to see the learning benefits of feedback, especially its role in feeding forward to subsequent assignment work, and many of the recommendations made previously will go some way to changing student perceptions. It is also suggested that PDP and guidance processes could be utilised to increase student awareness of the transferability of the feedback received.
Examinations are one area where students do not receive the same level of feedback associated with other assessment methods, and it is therefore recommended that this be addressed. However, any recommendation made will have significant implications for workload, and it is strongly recommended that this be taken into account within workload models. At this stage, it is suggested that individual examination feedback be given to failing students. Once appropriate models to account for this increased workload are established, feedback should be extended to students gaining minimal pass marks. Consideration should also be given to utilising the VLE as a platform for passing on generic feedback on examination performance to all students.
The enhancement of feedback in diverse ways is a substantial element of Assessment for
Learning, as defined by Northumbria’s Centre for Excellence in Teaching and Learning
(CETL), which produced a number of suggestions in response to the National Student Survey results. These are repeated here for consideration:
Actions for improving provision of written tutor feedback on student assignments
Source: CETL AfL

Suggestion: Reduce any ‘procedural’ delays in assignment handling processes such as: time taken for assignments to reach staff for marking; time taken to have assignments ready for student collection.
Comment: Some Schools have a central administrative set-up for all assignment handling whereas in other cases students are required to collect assignments from academic staff offices. Both approaches, it appears, can lead to delays which might be able to be reduced. Needs to be addressed locally by academic and administrative staff since procedures differ.

Suggestion: Notify students by email (via the eLearning Portal) when assignments are ready for collection. Even if there is no actual speeding up of the process, at least students will be reminded that their assignments are marked and ready for collection rather than perhaps being unaware of this, or assuming that assignments are not yet marked.
Comment: Other suggestions have been to inform students of their marks by email but require them to collect the assignments in person to receive comments (or vice versa!).

Suggestion: Arrange academic staff work schedules so that block time is available to do marking at set times (e.g. teaching timetable arranged to leave a clear block of days for marking).
Comment: Both of these would need to be monitored to see if there was any speeding up of marking. There is a possibility that quality and detail of feedback might also be enhanced. Collection of assignments from academic staff may have additional benefits, e.g. informal discussion and oral feedback on the assignment.

Suggestion: Use a wider range of markers, e.g. graduate tutors and part-time lecturers, to provide more staff input into the marking process.
Comment: Training and guidance would be needed for ‘new’ markers.

Suggestion: Enhance the quality of written comments to focus on what will be helpful to the student.
Comment: Clearly staff intend their comments to be helpful and do their best in this respect already. Whilst there is no ‘one right way’, staff development in this aspect of feedback would be useful. Ways forward could be: programme/subject teams sharing experience and practice on written feedback and developing local good practice; peer review of feedback comments, similar to peer observation of teaching; moderation of assignments to include review of comments made (not just the mark awarded).
Actions to broaden students’ understanding and appreciation of feedback
Source: CETL AfL

Suggestion: Identify the ways in which feedback on students’ learning is provided in programmes.
Comment: This might not have been done before as much of what can be termed ‘feedback’ might be seen as something else – just part of normal teaching, or student guidance, or part of student group work.

Suggestion: Let students know what kinds of feedback are available on the programme. This could be done by: statements in module and programme handbooks on how you can get feedback on your learning, progress etc.; discussions with students at programme or module level about the provision of feedback; Programme Committee or Staff/Student Liaison Committee discussions about feedback and how to improve it.
Comment: Discussions with students early in their courses have more potential for direct benefits to learning.

Suggestion: When feedback is taking place in a module, tell students explicitly that this is happening.
Comment: Examples of times when it might be useful to remind students that they are going to be given feedback are: when comments on an assignment are being summarised and discussed with a whole class; in practical sessions where the tutor is going to talk to students individually about their work.

Suggestion: When feedback is ‘informal’, try to make some kind of record available where that is possible.
Comment: Students may more quickly forget that they have ever had feedback if it is not written down in some way. Examples: students working on a group task in class on, say, ‘what makes a good assignment?’ could be asked to present a summary on a poster, with time allowed to view different posters; students giving peer feedback could be asked to complete a short form or checklist in addition to giving oral comment; students could be asked to keep an individual log about their learning progress, including feedback and their response to it, as part of a group assignment.

Suggestion: Include feedback, the range and types of feedback available and how to get the most from them, in study skills (or similar) modules.
Comment: Being able to seek out and use feedback and undertake self-review is an important kind of study skill. It is a mistake to think that all students know how to use feedback. Some may not even think that they ought to use it. To some, feedback comments are just an acknowledgement of what they have done, not something relevant to what they might do next.
4.5. Student Involvement in Assessment
It is clear that both staff and students recognised some tensions in the use of peer
assessment. This generally occurred where students played a role in marking. Staff tended
to find the process ineffective, citing a number of examples where students tended
“play safe” in any distribution of grades to and from peers. Student perceptions indicated that
they often found peer feedback was very dependent on social relationships and hence too
personal in nature. However, these instances related to the peer assessment/feedback in a
summative setting. There are a number of benefits which can be gained from the use of peer
reviews in order to begin to overcome some of the “trust” issues on the part of the students, it
is recommended that module tutors may wish to consider the use of peer assessment as a
formative only learning activity. This could include the use of past examples of anonymous
student work to help develop student confidence in their ability to give feedback, letting
students themselves have an input into the development of some aspects of the assessment
criteria (e.g. weighting) and the use of the Personal Response System (PRS) for giving
anonymous feedback. It is hoped that by increased and multiple exposure to peer review,
over time, this could be used effectively in summative settings. It is also suggested that, given the concerns expressed by both staff and students towards peer assessment, there may be a need for further research on the social dimensions of its effectiveness.
4.6. Point for further consideration
As well as the practical tips interspersed throughout the report and reproduced above, it is suggested that staff may also wish to give consideration to the formation of an “assignment contract of understanding” between themselves and students which:
• clarifies policies, procedures and practices;
• spells out moderation methods (e.g. assignments are only double marked when they are borderline pass/fail; randomly sampled assignments are double marked; the square root of the sample size is double marked etc.);
• states the format in which feedback will be given (e.g. Word or, if handwritten, state that students should tell their tutors if they cannot read their writing);
• stresses that students are welcome to come and see their tutor to discuss feedback;
• sets standards (e.g. if your exam grade is a 2:1 or above there is no expectation that you will meet your tutor for oral feedback, although s/he is prepared to see you if you wish a meeting);
• spells out the constraints under which academics operate (e.g. staff:student ratio; number of pieces of work a staff member can expect to mark in a month; how long it takes to mark);
• highlights the fact that feedback can be fed forward to future assignments.
Finally, one of the objectives of this report was to seek potential improvements in the efficiency and effectiveness of assessment. A number of ways in which assessment may be made more effective have been highlighted. However, we are aware that efficiency may not immediately be seen to go hand in hand with the recommendations made. Gains in efficiency are more likely to become clear as the effective practices suggested become embedded. Therefore it is suggested that module tutors be asked to reflect on what efficiencies may be possible as part of the annual review process.
Bibliography
Angelo, T. (1996) Transforming assessment: high standards for learning. AAHE Bulletin, April, 3-4
Bannister, P & Ashworth, P. (1998) Four good reasons for cheating and plagiarism in C.
Rust (ed.) Improving student learning: improving students as learners Oxford: Oxford Centre
for Staff and Learning Development
Barfield, R. L. (2003). Students’ Perceptions of and Satisfaction with Group Grades and the
Group Experience in the College Classroom. Assessment & Evaluation in Higher Education,
28(4), 49-64.
Berliner, D.C. (1984) The half-full glass: a review of research on teaching. In P.L. Hosford
(ed.) Using what we know about teaching Alexandria, Va. : Association for Supervision and
Curriculum Development
Biggs, J. (1999) Teaching for quality learning at University Buckingham: SRHE/OUP
Bloxham, S. & West, A. (2004) Understanding the rules of the game: marking peer
assessment as a medium for developing students' conceptions of assessment. Assessment &
Evaluation in Higher Education, 29 (6), 721 – 733
Booth, A. (1993) Learning history in university: student views on teaching and assessment.
Studies in Higher Education, 18(2), 227-235.
Boud, D., Cohen, R. & Sampson, J. (1999) Peer learning and assessment, Assessment &
Evaluation in Higher Education, 24(4), 413–426.
Brown, G. (1997) Assessing student learning in higher education London: Routledge
Brown, S. & Glasner, A. (1999) Assessment matters in Higher Education Buckingham:
SRHE/OUP
Brown, S. & Knight, P. (1994) Assessing learners in higher education London: Kogan Page
Carless, D. (2006, forthcoming) Differing perceptions in the feedback process, Studies in
Higher Education.
Carroll, J. & Appleton, J. (2001) Dealing with plagiarism: a good practice guide available at
http://www.jisc.ac.uk/01pub/brookes.pdf (last accessed
July 6th 2006)
Cassidy, S. (2006) Developing employability skills: peer assessment in higher education.
Education & Training, 48 (7) 508-517
Chevins, P.F.D (2005) Lectures Replaced by Prescribed Reading with Frequent Assessment:
Enhanced Student Performance in Animal Physiology Bioscience Education eJournal 5 May
np http://www.bioscience.heacademy.ac.uk/journal/vol5/beej-5-1.htm (accessed November
2006)
Cole, S. & Kiss, E. (2000) What can be done about student cheating? About Campus, May-June, 5-12
Cross, K. P. (1996) Improving teaching and learning through classroom assessment and
classroom research, in Gibbs, G. (ed.) Improving student learning: using research to improve
student learning. (Oxford, Oxford Centre for Staff Development), pp 3-10
De Corte, E. (1996). Active learning within powerful learning environments/ Actief leren
binnen krachtige leeromgevingen. Impuls, 26 (4), 145-156.
Drew, S. (2001). Perceptions of what helps learn and develop in education. Teaching in
Higher Education, 6 (3), 309-331.
Eisen, M. J. (1999) Peer learning partnerships: a qualitative case study of teaching partners’
professional development efforts. Unpublished doctoral dissertation: Teachers College,
Columbia University.
Entwistle, N. J., & Entwistle, A. (1991). Contrasting forms of understanding for degree
examinations: the student experience and its implications. Higher Education, 22, 205-227.
Freeman, R. & Lewis, R. (1998) Planning and implementing assessment London: Kogan
Page
Fritz, C.O., Morris, P.E., Bjork, R.A., Gelman, R. & Wickens, T.D. (2000) When further
learning fails: stability and change following repeated presentation of text. British Journal of
Psychology 91, 493-511
Forbes, D. & Spence, J. (1991) An experiment in assessment for a large class, in R.Smith
(ed) Innovations in engineering education London: Ellis Horwood.
Great Britain. National Committee of Enquiry into Higher Education. (1997)
Report of the National Committee of Enquiry into Higher Education Chairman: Sir Ron
Dearing London: H.M.S.O. Available at: http://www.leeds.ac.uk/educol/ncihe/docsinde.htm
(last accessed July 6th 2006)
Gibbs, G. (1992) Improving the quality of student learning Bristol: TES
Gibbs, G and Simpson, C. (2004/5) Does your assessment support students’ learning?
Available at: http://www.open.ac.uk/science/fdtl/documents/lit-review.pdf (last accessed 6th
July 2006)
Harris, D. & Bell, C. (1990) Evaluating and Assessing for Learning London: Kogan Page
Keppell, M., Au, E., Ma, A. & Chan, C (2006). Peer Learning and Learning-Oriented
Assessment in Technology-Enhanced Environments. Assessment & Evaluation in Higher
Education, 31 (4), 453-464
Lambert, D. & Lines, D. (2000). Understanding assessment: Purposes, perceptions, practice.
New York: Routledge Falmer.
Liu, N-F. & Carless, D. (2006) Peer feedback: the learning element of peer assessment.
Teaching in Higher Education, 11 (3) 279-290
Marton, F. & Säljö, R. (1990) Approaches to learning. In F. Marton, D. Hounsell & N.
Entwistle, The experience of learning Edinburgh: Scottish Academic Press.
Marton, F., & Säljö, R. (1997). Approaches to learning. In F. Marton, D. Hounsell, & N.
Entwistle (Eds.), The experience of learning. Implications for teaching and studying in higher
education [second edition] (pp. 39-59). Edinburgh: Scottish Academic Press.
McDowell, L (1996) Enabling student learning through innovative assessment In G. Wisker,
S. Brown (Eds.), Enabling student learning (pp.158-165) Kogan Page
McDowell, L. and Sambell, K. (1999) The experience of innovative assessment: student
perspectives. In S. Brown and A. Glasner (eds), Assessment Matters in Higher Education (pp. 71-82)
OU Press
Orpen, C. (1982) Student versus lecturer assessment of learning: a research note Higher
Education 11, 567-572
Penny, A.J. & Grover, C. (1996) An analysis of student grade expectations and marker
consistency. Assessment & Evaluation in Higher Education, 21(2)
Pope, N.K.L. (2005) Impact of stress in self- and peer assessment. Assessment & Evaluation
in Higher Education, 30 (1) 51-63.
Price, M., O’Donovan, B. & Rust, C. (2002) Strategies to develop students’ understanding
of assessment criteria and processes. In C. Rust. (ed.) Improving student learning 8:
Improving student learning strategically. Oxford: OCSLD
Quality Assurance Agency (2000) Code of practice for the assessment of academic quality
and standards in higher education. Section 6: Assessment of students (Last accessed: July 6th 2006)
Race, P. The art of assessing Available at:
http://www.londonmet.ac.uk/deliberations/assessment/art-of-assessing.cfm (last accessed
July 6th 2006)
Race, P. (1994) Never mind the teaching feel the learning SEDA paper 80 Birmingham:
SEDA
Ramsden, P. (1992) Learning to teach in higher education London: Routledge
Ramsden, P. (2003) Learning to Teach in Higher Education, Second edition. London:
RoutledgeFalmer
Rust, C. (2000) An opinion piece: a possible student-centred assessment solution to some
of the current problems of modular degree programmes Active Learning in Higher Education
1(2) 126-131
Rust, C. (2002) The impact of assessment on student learning: how can the research
literature practically help to inform the development of departmental assessment strategies
and learner-centred assessment practices? Active learning in higher education 3(2) 145-158
Säljö, R. (1975). Qualitative differences in learning as a function of the learner's conception of
a task. Gothenburg: Acta Universitatis Gothoburgensis.
Segers, M., & Dochy, F. (2001). New assessment forms in Problem-based Learning: the
value-added of the students' perspective. Studies in Higher Education, 26 (3), 327-343.
Smith, H., Cooper, A. & Lancaster, L. (2002) Improving the quality of undergraduate peer
assessment: a case for student and staff development. Innovations in Education and
Teaching International. 39 (1) 71-81.
Struyven, K., Dochy, F. & Janssens, S. (2005) Students’ perceptions about evaluation and
assessment in higher education: a review. Assessment & Evaluation in Higher Education, 30
(4), 325-341
Toohey, P.G. (1999) Designing courses for higher education Buckingham: SRHE/OUP
Trotter, E. (2006) Student perceptions of continuous summative assessment. Assessment
and Evaluation in Higher Education, 31 (5) 505-521
Vos, P. (1991) Curriculum Control of Learning Processes in Higher Education,
13th International Forum on Higher Education of the European Association for
Institutional Research, Edinburgh.
Wondrak, R. (1993) Using self and peer assessment in advanced modules Teaching News
22-23
Appendix 1: Suggested Format for Module Descriptor
Shaded areas indicate where changes have been made
Northumbria University
School of xxxx
Form MD
MODULE DESCRIPTOR
See guidelines for completion
1. Title of new module (Note that this should be not more than 55 characters including blanks)
2. Module code
3. Academic year in which module will be delivered for the first time
4. Credit Points
5. Module level
6. Type: year long / semester based
7. Academic year and semester when module will be delivered for the first time
8. Subject Division (where relevant)
9. Module Tutor
10. This module has the following delivery modes at the locations shown (delivery mode / location of delivery). Where the module is intended for distance learning or distance delivery please indicate below.
11. Aims of module: specified in terms of the general aim of the teaching in relation to the subject
12. Learning outcomes: specified in terms of performance capability to be shown on completion of the module
13. Outline syllabus: the content of the module identified in a component listing
14. Teaching Activities: general description of contact sessions
15. Learning Strategy: description of directed learning activities recommended, including any assessment-related activity
16. Indicative reading list or other learning resources
17. Notional student workload (NSW) for this delivery pattern (Note: please complete a separate section 17 for each mode of delivery):
Mode of delivery (e.g. part time, full time, distance learning)
Lectures (hours)
Seminars (hours)
Tutorials (hours)
Laboratory work (hours)
Placement/work experience learning (hours)
The remaining sections are recommended estimates of student time required:
Directed learning (non assessment related), e.g. reading (hours)
Learning-orientated assessment (hours)
Estimated time required for graded assessment (should be no more than 20% of total) (hours)
Other (hours; details of other hours indicated)
Total workload (100, 200 etc. hours)
18. Assessment Timeline: for each week (1-15) of Semesters 1, 2 and 3, a description of the assessment activity (flagged with * if it requires external scheduling, e.g. examination, and including the length in hours of any examination) and whether the week is a feedback/grading point, including the contributing percentage. Note: examination week cannot be guaranteed and should be used as an indicator only.
STUDENT GUIDE INFORMATION
19. Synopsis of module: a brief overview of aims, learning outcomes, learning and teaching activities, and a description of how the module grade is awarded
20. Pre-requisite(s): any module which must already have been taken at a lower level, or any stipulated level of prior knowledge required
21. Co-requisite(s): modules at this level which must be taken with this module
22. Implications for choice: possible follow-on modules, or exclusions, or modules which require this one as a prerequisite
23. Distance learning delivery: please enter the specific resources required for distance delivery of the module, e.g. materials, communication facilities, hardware, software etc.
24. Date of SLT Approval
25. Date of entry to APDb/relevant system
Appendix 2: Schedule of Interview / Focus Group Questions for Students
Timing and Methods
How are your assessment tasks currently scheduled over your programme?
How would you ideally like to see assessments scheduled?
How do you feel about the variety of assessment methods you have to complete? Is an
appropriate range of assessment methods used?
Describe your most preferred method of assessment and why this is.
Describe your least preferred method of assessment and why this is.
Describe your views on student involvement in the assessment process i.e. Self or Peer
assessment
Formative
What do you understand by the term formative assessment? (Researcher then provides a
definition of formative assessment)
Could you provide example(s) of formative assessment used on your programme?
How do you feel formative assessment could be used to help you learn better?
How do you think formative assessment and summative assessments could be effectively
combined?
Constructive Alignment
How are you made aware of the learning outcomes that individual assignments are
assessing?
How are you made aware of the mark/grade definitions used in assessment?
How are you made aware of how skills based learning outcomes are incorporated into
assessments?
How do you feel about the marking process?
What improvements do you feel could be made to any of the above?
Instruction
What and how much instruction do you feel is required for an assessment? Do you think this
changes over time? If so, in what way?
What do you understand as constituting plagiarism on assignments?
What information would you like to receive on plagiarism?
Feedback
How do you view the feedback you receive in terms of:
when you receive the feedback
the content of the feedback
whether it helps clarify things you do not understand
how you use the feedback
What improvements could be made to the feedback you receive?
Closing
Are there any other specific areas about your assessments which you wish to pass comment
on?
Appendix 3: Schedule of Interview / Focus Group Questions for Staff
Please say which programme you represent, and which School it is in.
What is a typical number of students on your programme?
How are assessments scheduled across programmes?
Are timings left to module tutors, or is there co-ordination?
What forms of assessment do you use on your programmes?
What is the usual interval between students handing in work and getting it back?
Do you manage expectations about this?
What form does feedback take? Are there consistencies across the programme, or do
module tutors decide for themselves?
What about formative assessment? Is there a policy to include it in all modules? Are your
students aware of formative assessment?
How do you get feedback to students?
How are students made aware of what is required of them, e.g. the mark/grade definitions?
How do they know, for instance, how to get a 2:1 grade for an assignment? Are the marking
criteria explicit, and how are they used to inform the feedback?
How are students made aware of the assessment process itself? (eg double/sample marking,
the role of external examiners etc)
Do you use peer assessment on your programme? If so, do you give guidelines to students,
and what are these?
What advice do you give about plagiarism? Do you feel that students understand it? Is it
effective? How many instances are typically discovered in a year?
Does your programme have examinations? Was the decision to include/avoid them a
deliberate one? On what was it based?
Anything you would like to add about the assessment process?