Active Learning in Higher Education
http://alh.sagepub.com

Self-, peer- and teacher-assessment of student essays
Sari Lindblom-Ylänne, Heikki Pihlajamäki and Toomas Kotkas
Active Learning in Higher Education 2006; 7(1): 51–62
DOI: 10.1177/1469787406061148

The online version of this article can be found at:
http://alh.sagepub.com/cgi/content/abstract/7/1/51

Published by SAGE Publications: http://www.sagepublications.com
Downloaded from http://alh.sagepub.com at ROOSEVELT UNIV LIBRARY on June 28, 2008
© 2006 SAGE Publications. All rights reserved. Not for commercial use or unauthorized distribution.
ARTICLE

Self-, peer- and teacher-assessment of student essays

SARI LINDBLOM-YLÄNNE, HEIKKI PIHLAJAMÄKI & TOOMAS KOTKAS
University of Helsinki, Finland

Copyright © 2006 SAGE Publications (London, Thousand Oaks, CA and New Delhi)
Vol 7(1): 51–62  DOI: 10.1177/1469787406061148
ABSTRACT  This study compares the results of self-, peer- and teacher-assessment of student essays and explores students' experiences of the self- and peer-assessment processes. The participants were 15 law students. According to both teachers and students, the scoring matrix used in the study made assessment easy. Self-assessment was sometimes considered difficult, because the students felt it was impossible to be objective about their own work. In peer-assessment, the students found it difficult to be critical when assessing a peer's essay. The students found it easier to assess the technical aspects of the essays than the aspects related to content.

KEYWORDS: essays, peer-assessment, scoring matrix, self-assessment, teacher-assessment
Self- and peer-assessment in higher education
The assessment of student learning in higher education has shifted from traditional testing of knowledge towards assessment of learning (Dochy et al., 1999; Segers et al., 2003). An assessment culture aims at assessing the acquisition of higher-order thinking processes and competencies instead of the factual knowledge and low-level cognitive skills emphasized in a testing culture (Birenbaum and Dochy, 1996; Gulikers et al., 2004). In the assessment culture, there is an emphasis on aligning assessment with instruction and on giving students ample opportunity to receive feedback on their learning. Students should also have an active role in the learning and assessment processes. This requires that students have the skills to regulate their studying and to reflect on their learning results and practices. Furthermore, students need to develop strategic learning behaviour in order to choose the most effective study strategies and practices to deal with the demands of their learning environments (Biggs, 1999; Boud, 1992; Lindblom-Ylänne and Lonka, 2001; Maclellan, 2004; Segers et al., 2001;
Segers et al., 2003). However, to change the testing culture into an assessment culture, more research is needed on various aspects related to the
quality of the new assessment culture (Segers et al., 2003).
Self- and peer-assessment are increasingly used in higher education. Self-assessment refers to the process in which students assess their own learning, particularly their achievements and learning outcomes. Peer-assessment, on the other hand, refers to assessment practices in which peers assess the achievements, learning outcomes or performances of their fellow students. Self- and peer-assessment are more than students grading their own or a peer's work, because the assessments involve students in determining what high-quality learning is in a specific case (Boud, 1995; Brown et al., 1997; Dochy et al., 1999; Topping, 2003). Both self- and peer-assessment can be considered learning tools, because they are part of a learning process in which different skills are developed. It is claimed that involvement in giving and receiving feedback is beneficial for students' learning, because it enhances the development of the skills required for professional responsibility, judgement and autonomy, and because it emphasizes the students' responsibility in the learning and assessment processes. Peer-assessment can further act as an exercise in which students can both practise assessment and observe how other students evaluate the results of learning (Boud, 1995; Brown et al., 1997; Dochy et al., 1999; Gale et al., 2002; Hanrahan and Isaacs, 2001; Magin and Helmore, 2001; Orsmond et al., 1996; Segers et al., 2001; Topping, 2003).
Self- and peer-assessment can be either summative, concentrating on judging learning results as correct or incorrect or on assigning a quantitative grade, or formative, concentrating on in-depth qualitative assessment of different kinds of learning results (Topping, 2003). In particular, peer-assessment should be formative in nature in order to enhance learning (Gale et al., 2002; Sluijsmans et al., 2002), because summative peer-assessment can undermine cooperation between students (Boud, 1995).
When analysing the accuracy of self- and peer-assessment, teachers' ratings are usually taken as the reference point (Magin and Helmore, 2001; Topping, 2003). However, there is evidence that teacher-assessments vary considerably (Magin and Helmore, 2001; Topping, 2003) and that comparisons between teachers' and students' marks can be misleading because of different understandings of the assessment criteria (Orsmond et al., 1996, 1997, 2000, 2002). Study success and study phase have been shown to be related to the reliability of self-assessment (Boud, 1995; Dochy et al., 1999). Good students tend to underrate their performance, whereas weaker students tend to overrate theirs (Dochy et al., 1999; Lejk and Wyvill, 2001). Dochy et al. (1999) further showed that self-assessment skills seemed to
develop during different phases of the studies, because advanced students
could predict their performance better than novices.
The accuracy of self-assessment seems to vary according to the focus of assessment. A review of research on self-assessment concerning qualitative analysis of learning products, such as essays, shows that students are very accurate in grading their own essays (Dochy et al., 1999). In contrast, however, according to Topping (2003), self-assessed grades tend to be higher than staff grades. Taken together, assessment of one's own performance and behaviour seems to be less reliable than assessment of one's own learning products. Furthermore, critical analysis of one's own performance appears to be more difficult than evaluating a peer's performance in a group (Goldfinch, 1994; Segers and Dochy, 2001). There is also evidence that students overestimate their capabilities in self-assessment in addition to overestimating their performance, as compared with teacher-assessments (Zoller and Ben-Chaim, 1997). Furthermore, self-assessment which concentrates on effort instead of achievement has been shown to be particularly unreliable (Topping, 2003).
All this demonstrates that the evidence on the reliability and validity of self-assessment is contradictory. The same can be said of peer-assessment (Brown et al., 1997; MacKenzie, 2000; Magin and Helmore, 2001; Segers and Dochy, 2001; Topping et al., 2000; Topping, 2003). Students and teachers still appear to have different understandings of individual assessment criteria, despite verbal and written briefings before the start of the assessment process (Orsmond et al., 1996, 1997, 2000), which suggests that further research on these assessment practices is necessary. There have been various attempts to address these problems. There is, for example, evidence that the application of specific criteria (Miller, 2003), transparency in the assessment process (Rust et al., 2003; Taras, 2001) and good instructions and training (Sluijsmans et al., 2002) enhance students' assessment skills, and that the use of a scoring matrix may also be helpful (Buchy and Quinlan, 2000). However, when exploring assessment practices further, it is important to take into account the stress and discomfort that students have reported when having their work marked by a peer (Hanrahan and Isaacs, 2001; Pope, 2001).
Given the need to explore such assessment practices further, the case study
presented here aims to shed light on self-, peer- and teacher-assessment in
the context of the marking of student essays. In particular, we seek to further
explore whether the use of a matrix might enhance the accuracy of self- and
peer-assessment of essays. Given the negative experiences that students have
reported with regard to peer-assessment, another aim of our research is
to explore the perceptions and experiences of students in relation to
peer-assessment in particular. We also seek to explore how both students and
teachers perceive their experiences in relation to these practices.
Method
Participants
The participants in this study were 15 law students attending the problem-based course 'The History of Law' in the spring of 2002. Only one of the students was male. Five were first-year students, six were second-year students and four were more advanced. Eleven of the students answered a questionnaire. There were two teachers on the course, each tutoring a group of seven or eight students.
Materials
The 'History of Law' course was designed according to the principles of problem-based learning (e.g. David et al., 1999; Savin-Baden, 2000). The three-week intensive course consisted of three four-hour tutorials. The trigger material of the course consisted of two written problems on European legal history; the second problem was based on the first one. Throughout the course the students wrote learning journals. At the beginning of the course, the students were given both oral and written instructions on how to write the journals. The lengthy written instructions explained the idea and purpose of writing a learning journal, and the students were advised to write regularly, to aim at critical and 'deep-level' thinking, to include their personal views, experiences and feelings, and to justify their own views and comments.

The journals were not graded, but the critical essays which the students wrote on the basis of the learning journals were. The students had to 'transform' their lengthy journals into 10-page essays during the four weeks after writing the journals. The transformation called for active elaboration of what they had learned in the course, pulling together the essentials, and forming their own critical view of the subject matter. The students were given instructions on how to transform their learning journals into critical essays and were told the purpose of the task. The students received the assessment criteria at the beginning of the course in the form of a matrix (see Table 1, presented later in this article).
Procedures
Three people graded each critical essay. First, the student graded her or his
own essay. Second, the student graded an essay of one of their peers. Third,
the teacher graded all essays. The students were provided with ample
opportunities to discuss and ask questions about the criteria during the
course. They also used 'empty' versions of the matrix when assessing their own and a peer's essay (Table 1).

The self-, peer- and teacher-assessments were carried out independently of each other, and the assessment procedure was blind. First, the students returned their critical essays with their self-assessments to the teacher and gave a copy of the essay to one peer who had been randomly selected as the rater. The self-assessment matrices were not shown to peers, and the procedure was arranged so that students would not assess each other's essays reciprocally. Furthermore, the teachers did not look at the self- and peer-assessment results before first assessing the essays themselves. Two weeks were reserved for the assessment procedure. Each criterion was scored on a four-point scale from 'fail' to 'excellent'. The final grade was the mean score of the self-, peer- and teacher-assessments. However, the teachers informed the students at the beginning of the course that they would not
Table 1  The scoring matrix and the criteria for self-, peer- and teacher-assessment

| Assessment criterion | Excellent grade | Good grade | Satisfactory grade | Fail |
|---|---|---|---|---|
| Key issues and themes included | Relevant issues included | Most relevant issues included | Mistakes and irrelevant facts included | Severe mistakes and irrelevant facts |
| Coherent general picture | Thorough understanding of how events are linked | Understanding of how events are linked | Some understanding of how events are linked | No general picture formed |
| Independent thinking | Independent thinking and analytic approach | Some independent thinking | Little independent thinking | No independent thinking |
| Critical thinking | Critical evaluation and thinking | Attempts at critical evaluation | Very little effort in critical evaluation | No effort in critical evaluation |
| Use of literature | Several references, active search of references | Includes references other than 'the main reference' | Only 'the main reference' | No references, except discussions |
| Appearance | Tidy, accurate use of references | Tidy, some inaccuracies in the use of references | Untidy, clear inaccuracies in the use of references | Untidy, inaccurate use of references |
| Length | 9–11 pages | One page too long or short | Two pages too long or short | More than two pages shorter or longer |
share 'the final decision' concerning the grade (which, fortunately, they did not have to do). This was done in order to avoid conscious over- or under-marking of one's own or a peer's essay. A maximum of two 'fails' was allowed for a student to pass the course.
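The grading procedure described above can be sketched in code. This is a hypothetical illustration only: the numeric mapping of the four-point scale (fail = 0 up to excellent = 3) and the rule that a criterion counts as a 'fail' when its rounded mean is 0 are our assumptions, not details given in the article.

```python
# Sketch of the triadic grading procedure (assumptions noted above):
# each criterion is scored independently by the student, a peer and the
# teacher; the final grade per criterion is the mean of the three scores.

CRITERIA = ["key issues", "general picture", "independent thinking",
            "critical thinking", "literature", "appearance", "length"]

def final_grades(self_s, peer_s, teacher_s):
    """Final grade per criterion: the mean of the three independent scores."""
    return {c: (self_s[c] + peer_s[c] + teacher_s[c]) / 3 for c in CRITERIA}

def passes(grades, max_fails=2):
    """A student passes if at most two criteria come out as 'fail' (score 0)."""
    return sum(1 for g in grades.values() if round(g) == 0) <= max_fails
```

For example, a criterion scored 2 by the student, 3 by the peer and 1 by the teacher receives a final grade of (2 + 3 + 1) / 3 = 2.0.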
Data collection and analyses
One week after the course, the two teachers were interviewed by the first author. The semi-structured interviews concentrated on the teachers' experiences of the use of the matrix and of self- and peer-assessment. One month after the course, the students were sent a questionnaire asking about their experiences of the use of the matrix, of self-, peer- and teacher-assessment, and of the process of writing learning journals and transforming them into critical essays. All questions were open-ended. The first author constructed both the interview questions and the questionnaire. The analyses of the teachers' interviews and the students' open-ended answers were conducted in two stages, separately from each other. First, the aim was to identify all variation in the teachers' and students' experiences, and categories of description were formed on the basis of their answers. Second, the categories formed during the first stage were used to classify all the answers, to ensure that they captured the full variation in the data. The first author was responsible for the analyses, which were carried out immediately after the interviews and after the questionnaires had been returned.
Results
Experiences of the writing process
Both the students and the teachers considered the idea of transforming the learning journals into critical essays worthwhile. There were no draft-like texts, and the overall quality of the essays was good. The students had only positive comments about the writing process, which, in general, was experienced as both demanding and rewarding. Of the 11 students who answered the questionnaire, six emphasized that the most difficult aspect was transforming the learning journal into a critical essay. Moreover, the students considered the demands for independent and critical thinking particularly challenging. Four of these six students also mentioned that the limited length of the essay caused difficulties in the writing process. The following excerpt is representative of the students' experiences:

It was a lot of work and I'm very pleased with my accomplishment. I really learned. Most difficult for me was to transform the learning journal into a critical essay because of the length constraint. (Student 1, a second-year female student)
Comparisons among the results of self-, peer- and teacher-assessment

Comparisons among the results of self-, peer- and teacher-assessment showed that they were quite similar. The mean scores of self-, peer- and teacher-assessment on each criterion are presented in Figure 1.

The comparisons showed that there were fewer differences among self-, peer- and teacher-assessment on the three more technical criteria of the matrix (the last three) than on the criteria related to the content and the depth of knowledge processing. The students' and the teachers' scores were broadly similar for the use of literature and for the length. Furthermore, the students themselves and the teacher agreed on the score for appearance, but the peers were more critical. On the content- and process-related aspects of the matrix, there were more differences. The scores for the key issues included in the essays and for the coherence of the general picture formed showed only minor differences. The biggest differences were in the scores for independent and critical thinking: on these two criteria, the peers gave the highest and the teachers the lowest scores.
Experiences of self- and peer-assessment
In general, the teachers' experiences of the triadic assessment procedure were very positive. They thought that the use of the matrix, together with self- and peer-assessment in addition to the teacher's assessment, worked very well. The students' experiences of self-assessment varied, and were evenly divided into three categories. Four students considered
[Figure 1 appears here: a bar chart (scale 0–3) showing the mean self-, peer- and teacher-assessment scores for each of the seven criteria: key issues, general picture, independent thinking, critical thinking, literature, appearance and length.]

Figure 1  Mean scores of self-, peer- and teacher-assessment in each assessment criterion of the scoring matrix (N = 15)
self-assessment to be easy. The following excerpt is representative of these students' experiences:

It was very easy to assess one's own work. Some students might try to give higher grades in the self-assessment to receive a higher grade, but I cannot do this. I know my own skills very well. I want to be honest in self-assessment. (Student 3, a second-year female student)

Three students regarded self-assessment as both difficult and easy: they felt that it was easy to assess the technical aspects of their own essay, but much more difficult to assess the content-related criteria. Furthermore, three students considered self-assessment difficult, particularly because they felt it was impossible to be objective. These students thought that self-assessment easily becomes more critical than peer-assessment. Student 4 compared the results of her self- and peer-assessment and found the following differences:

It is always difficult to evaluate yourself without being too critical. However, you cannot be objective when assessing yourself. Thus, I think that my self-assessment is more critical than my peer-assessment. (Student 4, a second-year female student)
The students' experiences of assessing their peers also varied. The majority of the students considered peer-assessment easy, mainly for two reasons: they had already written their own essay before assessing their peer's, and they had therefore studied and reflected upon the theme of the essay thoroughly. This is how one student reflected on peer-assessment:

It was very interesting to read what and how another student had written. Peer-assessment wasn't very difficult. I had already written my own essay and had thought and reflected upon the content and the writing process a lot. (Student 5, a second-year female student)

Four students had noticed that it was more difficult to be critical in peer-assessment than in self-assessment, but, in general, they considered peer-assessment easy. Three students also mentioned difficulty in assessing peers' essays in depth, because they had not always read the same references as their peer. Four students had experienced similar difficulties in peer-assessment as in self-assessment: they thought that it was easy to assess the technical aspects and more difficult to assess the content-related criteria. Being assessed by a peer was a positive experience for all the students. In general, the students thought that their peer had been fair, and they trusted the peer's assessment. The following excerpt illustrates the students' experiences:
I felt good that a peer, who had gone through the same writing and learning
process as I had, and who had finished the critical essay, too, assessed my essay.
By applying peer-assessment, students’ points of view are taken into account.
By this I mean that another student knows what kind of grades we expect to
receive on the basis of our work. A peer also knows what it was like to study
legal history in the course. (Student 2, a second-year female student)
To conclude, the results showed that both the teachers and the students had mostly positive experiences of the assessment processes. The students confronted different kinds of difficulties in self- and in peer-assessment: some students felt that it was difficult to be objective towards themselves, while others found it difficult to be critical towards a peer. In both assessment modes, the students found it easier to assess the technical aspects of the essays than the aspects related to content. All the students felt that a peer's assessment of their own essay was fair.
Discussion
This study shows that the results of self-assessment were very similar to the results of peer- and teacher-assessment, in line with the literature extensively reviewed by Dochy et al. (1999). This contrasts with Falchikov and Boud (1989), however, who found that self-assessed grades tended to be higher than staff grades. Furthermore, the results did not confirm the tendency towards over-marking in peer-assessment found in previous research (MacKenzie, 2000; Magin and Helmore, 2001; Topping et al., 2000; Topping, 2003).
The specific criteria and good instructions for students seemed to enhance the accuracy of self- and peer-assessment, as has also been shown in previous research (Buchy and Quinlan, 2000; Miller, 2003; Rust et al., 2003; Sluijsmans et al., 2002; Taras, 2001). The results of the present study thus differed from those of Orsmond et al. (1996, 1997, 2000), who argue that not even good instructions can remove differences in the ways students and teachers understand assessment criteria.
In general, both the teachers' and the students' experiences of self- and peer-assessment were very positive. The students were motivated by both self- and peer-assessment. They were eager to read a peer's essay and to be able to compare their own essay with another student's. These results are in line with previous research (Hanrahan and Isaacs, 2001; Orsmond et al., 1996; Pope, 2001; Sluijsmans et al., 2002). The problems of peer-assessment, including the difficulty of being objective when assessing a peer and unfamiliarity with the references a peer had used, were also shown in Hanrahan and Isaacs' (2001) study. However, previous research has reported other problematic aspects which were not found in this study:
the discomfort and stress of having a peer read one's own paper, fears that peers would be too critical, and the feeling that the assessment process was too time-consuming (Hanrahan and Isaacs, 2001; Pope, 2001).
The study has many limitations: in particular, the number of students was very small, there was a gender imbalance, and the students represented only one discipline. Therefore, no generalizations can be made on the basis of these results. Many variables are also involved when exploring the accuracy of self-assessment, including study level, student characteristics, the learning task, the assessment criteria and procedure, the learning environment, students' practice in self-assessment, and cultural self-images regarding self-esteem in general. Given the scope of the research presented here, these factors were not taken into account, and thus they form part of the limitations of this study. In future it will be important to compare the results of self-, peer- and teacher-assessment in larger multidisciplinary samples and to explore these other variables further. However, it is promising that the results of the self-, peer- and teacher-assessment were very similar and that the students' experiences of self- and peer-assessment were positive. The active role of students in the assessment process – a key requirement of the assessment culture – cannot be accomplished if students lack motivation or positive attitudes towards self- and peer-assessment.
References
B I G G S , J.
(1999) Teaching for Quality Learning at University. Buckingham: Open University
Press.
B I R E N B AU M , M .
& D O C H Y, F. (1996) Alternatives in Assessment of Achievements, Learning
Processes and Prior Knowledge. Boston, MA: Kluwer Academic Publishers.
B O U D , D . (1992) ‘The Use of Self-assessment Schedules in Negotiated Learning’,
Studies in Higher Education 17(2): 185–201.
B O U D , D . (1995) Enhancing Learning Through Self-Assessment. London: Kogan Page.
B ROW N , G . , B U L L , J . & P E N D L E B U RY, M . (1997) Assessing Students’ Learning in Higher
Education. London: Routledge.
B U C H Y, M . & Q U I N L A N , K . (2000) ‘Adapting the Scoring Matrix: A Case Study of
Adapting Disciplinary Tools for Learning Centred Evaluation’, Assessment and Evaluation
in Higher Education 25(1): 81–91.
DAV I D , T. , PAT E L , L . , B U R D E T T, K . & R A N G AC H A R I , P. (1999) Problem-based Learning
in Medicine. Worcester: Royal Society of Medicine Press.
D O C H Y, F . , S E G E R S , M . & S L U I J S M A N S , D . (1999) ‘The Use of Self-, Peer- and Coassessment in Higher Education: A Review’, Studies in Higher Education 24(3): 331–51.
F A L C H I KOV, N . & B O U D , D . (1989) ‘Student Self-assessment in Higher Education: A
Meta-analysis’, Review of Educational Research 59(3): 395–430.
G A L E , K . , M A RT I N , K . & M C Q U E E N , G . (2002) ‘Triadic Assessment’, Assessment and
Evaluation in Higher Education 27(6): 557–67.
G O L D F I N C H , J . (1994) ‘Further Developments in Peer Assessment of Group
Projects’, Assessment and Evaluation in Higher Education 19(1): 29–35.
60
Downloaded from http://alh.sagepub.com at ROOSEVELT UNIV LIBRARY on June 28, 2008
© 2006 SAGE Publications. All rights reserved. Not for commercial use or unauthorized distribution.
L I N D B L O M - Y L Ä N N E E T A L . : E S S AY A S S E S S M E N T
G U L I K E R S , J . , B A S T I E N S , T.
& K I R S C H N E R , P. (2004) ‘A Five-Dimensional
Framework for Authentic Assessment’, Educational Technology Research and Development
52(3): 67–86.
HANRAHAN, S. & ISAACS, G. (2001) 'Assessing Self- and Peer-assessment: The Students' Views', Higher Education Research & Development 20(1): 53–70.
LEJK, M. & WYVILL, M. (2001) 'The Effect of the Inclusion of Self-assessment with Peer-assessment of Contributions to a Group Project: A Quantitative Study of Secret and Agreed Assessments', Assessment and Evaluation in Higher Education 26(6): 551–61.
LINDBLOM-YLÄNNE, S. & LONKA, K. (2001) 'Students' Perceptions of Assessment Practices in a Traditional Medical Curriculum', Advances in Health Sciences Education 6(2): 121–40.
MACKENZIE, L. (2000) 'Occupational Therapy Students as Peer Assessors in Viva Examinations', Assessment and Evaluation in Higher Education 25(2): 135–47.
MACLELLAN, E. (2004) 'How Convincing is Alternative Assessment for Use in Higher Education?', Assessment and Evaluation in Higher Education 29(3): 311–21.
MAGIN, D. & HELMORE, P. (2001) 'Peer- and Teacher-assessment of Oral Presentation Skills: How Reliable Are They?', Studies in Higher Education 26(3): 287–98.
MILLER, P. (2003) 'The Effect of Scoring Criteria Specificity on Peer and Self-assessment', Assessment and Evaluation in Higher Education 28(4): 383–94.
ORSMOND, P., MERRY, S. & REILING, K. (1996) 'The Importance of Marking Criteria in Peer Assessment', Assessment and Evaluation in Higher Education 21(3): 239–49.
ORSMOND, P., MERRY, S. & REILING, K. (1997) 'A Study in Self-assessment: Tutor and Students' Perceptions of Performance Criteria', Assessment and Evaluation in Higher Education 22(4): 357–69.
ORSMOND, P., MERRY, S. & REILING, K. (2000) 'The Use of Student Derived Marking Criteria in Peer and Self Assessment', Assessment and Evaluation in Higher Education 25(1): 23–38.
ORSMOND, P., MERRY, S. & REILING, K. (2002) 'The Use of Exemplars and Formative Feedback when Using Student Derived Marking Criteria in Peer- and Self-assessment', Assessment and Evaluation in Higher Education 27(4): 309–23.
POPE, N. (2001) 'An Examination of the Use of Peer Rating for Formative Assessment in the Context of the Theory of Consumption Values', Assessment and Evaluation in Higher Education 26(3): 235–46.
RUST, C., PRICE, M. & O'DONOVAN, B. (2003) 'Improving Students' Learning by Developing their Understanding of Assessment Criteria and Process', Assessment and Evaluation in Higher Education 28(2): 147–64.
SAVIN-BADEN, M. (2000) Problem-based Learning in Higher Education: Untold Stories. Buckingham: The Society for Research into Higher Education & Open University Press.
SEGERS, M. & DOCHY, F. (2001) 'New Assessment Forms in Problem-based Learning: The Value Added of the Students' Perspective', Studies in Higher Education 26(3): 327–43.
SEGERS, M., DOCHY, F. & CASCALLAR, E. (2003) 'The Era of Assessment Engineering: Changing Perspectives on Teaching and Learning and the Role of New Modes of Assessment', in M. Segers, F. Dochy and E. Cascallar (eds) Optimising New Modes of Assessment: In Search of Qualities and Standards, pp. 1–12. Dordrecht, The Netherlands: Kluwer Academic Publishers.
SEGERS, M., DOCHY, F., VAN DEN BOSSCHE, P. & TEMPELAAR, D. (2001) 'The Quality of Co-assessment in a Problem-based Curriculum'. Paper presented at the 9th Conference of the European Association for Research on Learning and Instruction, 28 August–1 September, Fribourg, Switzerland.
SLUIJSMANS, D., BRAND-GRUWEL, S. & MERRIËNBOER, J. (2002) 'Peer Assessment Training in Teacher Education: Effects on Performance and Perceptions', Assessment and Evaluation in Higher Education 27(5): 443–54.
TARAS, M. (2001) 'The Use of Tutor Feedback and Student Self-assessment in Summative Assessment Tasks: Towards Transparency for Students and for Tutors', Assessment and Evaluation in Higher Education 26(5): 605–14.
TOPPING, K. (2003) 'Self- and Peer-assessment in School and University: Reliability, Validity and Utility', in M. Segers, F. Dochy and E. Cascallar (eds) Optimising New Modes of Assessment: In Search of Qualities and Standards, pp. 55–87. Dordrecht, The Netherlands: Kluwer Academic Publishers.
TOPPING, K., SMITH, E., SWANSON, I. & ELLIOT, A. (2000) 'Formative Peer Assessment of Academic Writing Between Postgraduate Students', Assessment and Evaluation in Higher Education 25(2): 149–69.
ZOLLER, U. & BEN-CHAIM, D. (1997) 'Student Self-assessment in HOCS Science Examinations: Is it Compatible with that of Teachers?' Paper presented at the 7th EARLI Conference, Athens, Greece, 26–30 August.
Biographical notes
SARI LINDBLOM-YLÄNNE is Professor of Higher Education and Director of the Centre for Research and Development of Higher Education at the University of Helsinki. Her research interests include teaching, learning and assessment in higher education as well as different learning environments.
Address: Faculty of Behavioural Sciences, P.O. Box 9, 00014 University of Helsinki, Finland. [email: sari.lindblom-ylanne@helsinki.fi]
HEIKKI PIHLAJAMÄKI is Research Fellow at the Academy of Finland. His research focuses on the comparative history of Finnish, European and American law.
Address: Faculty of Law, P.O. Box 4, 00014 University of Helsinki, Finland.
TOOMAS KOTKAS is Assistant of General Jurisprudential Studies at the Faculty of Law at the University of Helsinki. His research interests lie within the field of legal history and legal philosophy.
Address: Helsinki Collegium for Advanced Studies, P.O. Box 4, 00014 University of Helsinki, Finland.