Assessment & Evaluation in Higher Education, 2012
http://dx.doi.org/10.1080/02602938.2012.751963
Improving engagement: the use of ‘Authentic self- and peer-assessment for learning’ to enhance the student learning experience
Sean Kearney*
School of Education, University of Notre Dame Australia, Sydney, Australia
*Email: sean.kearney@nd.edu.au
Innovative assessment practices have the potential to shift the way universities
function. By focusing on well-designed assessment tasks, where students are
expected to work collegially and are actively involved in self- and peer-assessment, the opportunity to engage students in the assessment process is realised.
This article contends that students are significantly and detrimentally disengaged
from the assessment process as a result of traditional assessments that do not
address key issues of learning. Notable issues that arose from observations and
questioning of students indicated that vast proportions of students were not proofreading their own work; were not collaborating on tasks; had not been involved in the development of assessment tasks; and had insufficient skills in evaluating their own efforts. These findings
led the author to conceptualise new models of assessment focusing on authentic
learning and the authentic assessment of that learning through self- and
peer-assessment. Authentic assessment for sustainable learning (AASL) and
Authentic self- and peer-assessment for learning (ASPAL) were trialled with
approximately 300 undergraduate education students at the University of Notre
Dame Australia. This article explains the conceptual development of the models
and provides justification for their implementation.
Keywords: self-assessment; peer-assessment; sustainable assessment; engagement
Introduction
Assessment in higher education in Australia is undergoing a period in which traditional forms of assessment are being questioned
(Boud and Associates 2010). There is a prevalent notion that although the foundations for reform in assessment are present (Boud and Associates 2010; James,
McInnis, and Devlin 2002), time-honoured forms of assessment are too entrenched
in long-established university courses to allow for genuine change (Shepherd 2000).
There also seems to be a perpetuation of the dichotomy between traditional approaches to assessment and learning and the ideals of critical thinking, autonomy
and thoughtfulness in education (Brint, Cantwell, and Hanneman 2008; Dochy,
Segers, and Sluijsmans 1999).
Assessment is a central component in the learning process, and most students
focus more on assessment than any other aspect of their courses (Boud 1990;
James, McInnis, and Devlin 2002); consequently, assessment has the power to drive
student learning. For most students, according to James, McInnis, and Devlin
(2002), ‘assessment requirements literally define the curriculum’ (7); therefore,
educators can use this element of student perception to maximise the learning
potential that it harbours. Educators have the capacity to engage students through
what they value most in their education by ensuring that assessments concentrate
on the essential skills of the twenty-first century workforce, such as critical thinking
and autonomous learning, and inspire innovation and creativity (McGraw-Hill 2008;
Singh and Terry 2008; Tait-McCutcheon and Sherley 2006).
In early 2010, various forms of innovative assessment were trialled with undergraduate education students at the University of Notre Dame Australia. From those
trials and a review of the literature, a new model of assessment was developed to
try to engage students in their studies: Authentic assessment for sustainable learning (AASL). Sustainable learning, sometimes called lifelong learning, is an essential graduate attribute in the twenty-first century (Singh and Terry 2008) as it
emphasises the ability to adapt to the changing nature of the workforce, regardless
of industry. To make assessments more engaging and collaborative, which are both
tenets of sustainable learning, the concept of Authentic self- and peer-assessment
for learning (ASPAL) was developed as a delivery model for the implementation of
AASL. In this article, some of the key ideas surrounding assessment in higher education in Australia are examined; an overview of burgeoning reforms in the tertiary sector is provided; and the conceptual and theoretical development of the models is justified. Through a review of the relevant literature, the key questions that formed the basis of this research were developed:
• Why do we assess?
• For whose benefit do we assess?
• Are these achieved through current practices?
Literature review
Higher education in Australia
The position of the Australian Government with regards to higher education aligns
with the current international trends that focus on authentic and sustainable assessment that has relevance beyond the classroom (Boud and Falchikov 2005; Segers,
Dochy, and Cascallar 2003), including:
Self-fulfilment, personal development and the pursuit of knowledge as an end in itself;
the provision of skills of critical analysis and independent thought to support full participation in a civil society; the preparation of leaders for diverse, global environments;
and support for a highly productive and professional labour force should be key features of Australian higher education. (Australian Government 2009, 7)
This contention, however, is dichotomous: on the one hand, it emphasises the importance of knowledge ‘as an end in itself’, critical thinking and independent thought; on the other, it emphasises that universities should instil the skills necessary for students to become members of a highly productive and professional labour
force. While the two ideals are not mutually exclusive, they present a distinctive
challenge to the higher education sector.
There are currently 37 public universities, two private universities and
approximately 150 other providers of higher education in Australia. In its reform of higher education, the government seeks to increase the proportion of people aged 25–34 holding a bachelor-level qualification to 40% by 2020; this is an 11%
increase over current attainment levels (Bradley et al. 2008). If this intended
increase transpires, the higher education sector in Australia will see an additional
217,000 graduates by 2025; this will place an enormous amount of pressure on
institutions that already struggle with retention rates in the present environment.
Currently, the student attrition rate in the tertiary sector in Australia is approximately 28% (Australian Government 2009). To keep Australia competitive in the
modern global market and meet government targets, universities and other higher education institutions will need to address this high level of student attrition. The government recognises the challenge and contends:
Although student satisfaction levels remain high, Australia has fallen behind its major
competitor countries on key teaching and student experience indicators and drop-out
rates remain high at 28 per cent in 2005. Similarly, the dramatic rise in student-to-staff
ratios – from about 15:1 in 1996 to over 20:1 in 2006 – is probably a significant contributor to the relatively low levels of student engagement. (Australian Government
2009, 14)
While the government argues that a ‘highly productive and professional labour
force should be key features of Australian higher education’ (Australian Government 2009, 7), a fast-changing, technologically driven twenty-first century labour
force is a by-product of an education that fosters the growth of the individual learner by encouraging the development of skills that are crucial to sustainable and
autonomous learning. There have been numerous appeals for the reform of assessment practices in higher education from experts in the field (Boud and Associates
2010) and the Commonwealth has called for reform in the higher education sector
(Bradley et al. 2008). The AASL and ASPAL models proposed in this article are an answer to
those calls.
Sustainable assessment
The literature is clear that assessment is a driving force of learning (Boud 1990; Lamprianou and Athanasou 2009): there is simply nothing else in the learning continuum that garners as much student attention as what the student will be assessed on (Lamprianou and Athanasou 2009). Recognition of the importance of
assessment for learning (formative assessment) and assessment of learning (summative assessment) has been central in research concerning recommendations for
reform in higher education in recent years (Boud and Associates 2010; Elwood and
Klenowski 2002; James, McInnis, and Devlin 2002; Lamprianou and Athanasou
2009).
According to Bloxham and Boyd (2007), there are four purposes of assessment:
certification, student learning, quality assurance and lifelong learning capacity.
Certification and quality assurance can be seen as summative assessment practices,
whereas student learning and lifelong learning capacity are formative. Sustainable
assessment recognises the need for both formative and summative practices, as
assessment is now considered to be an integral part of the teaching and learning
cycle (Boud and Associates 2010; Elwood and Klenowski 2002; Graff 2009). In
their research, Lamprianou and Athanasou (2009) contend that, according to student diaries, less than 10% of students’ time is spent on non-assessable
activities. Curriculum, therefore, should be designed to maximise students’ potential
on assessments that encourage the skills necessary for success at university and in
their lives.
As far back as 1999, the literature was promoting the development of professional skills such as problem solving, critical thinking, creativity, autonomy in
learning and authenticity in learning through innovative forms of assessment
(Dochy, Segers, and Sluijsmans 1999). If this ‘new era’ started over a decade ago,
we should be thoroughly entrenched in practices that promote authentic and sustainable learning and appropriate methods to assess that learning. Instead, according to
Klenowski (2006), there is overall dissatisfaction with educational attainment.
One way to address the quality of education is to propose new methods of
assessment that are authentic in nature and address the future needs of students.
Boud argues that sustainable assessment draws attention to the ‘knowledge, skills
and predispositions that underpin lifelong learning activities’ (2000, 151). The idea
of authentic, sustainable assessment is one that not only can meet the needs and
skills required for success in the twenty-first century, but also has the ability to
engage interest and enhance student learning (Boud 2000; Vu and Dall’Alba 2008).
Self-assessment
In traditional forms of assessment, control rests with the lecturer; self- and peer-assessment instead focus upon the learning experience. Boud cautions us about this, explaining that summative assessment, in its current condition, ‘provides a
mechanism of control exercised by those who are guardians of particular kinds of
knowledge over those who are controlled by assessment’ (2000, 155). The notion
of controlling learning or managing learning is antithetical to the ways that learning
is understood (Siemens 2004). Control needs to be relinquished by the ‘guardians
of knowledge’ to allow authentic learning to occur. A situation whereby knowledge is attained through collaboration and a mutual understanding of expectations and outcomes is a more desirable goal, and it can be accomplished in many ways, two of which are self- and peer-assessment (Shepherd 2000).
Self-assessment is considered to be a valuable learning activity (Andrade and
Du 2007; Falchikov and Boud 1989; Hanrahan and Isaacs 2001) that encourages a
deep approach to learning (McDonald and Boud 2003; Ozogul and Sullivan 2007;
Rivers 2001). Boud defines self-assessment as involving students in ‘identifying
standards and/or criteria to apply to their work and making judgements about the
extent to which they met these criteria and standards’ (2000, 5). Andrade and Valtcheva (2009), on the other hand, differentiate self-assessment and self-evaluation,
the former as a formative process involving reflection and revision, and the latter as students grading their own work. While Andrade favours self-assessment
over self-evaluation because of the tendency for students to inflate grades, other
studies have found that high correlations exist between self- and teacher-ratings
(Falchikov and Boud 1989; Hafner and Hafner 2003).
As there is no standard definition of self-assessment in the literature, this article and the conceptual development of the proposed assessment models encompass both self-assessment and self-evaluation, and refer to both simply as self-assessment. Self-assessment, as utilised in this model, is criterion referenced
(Stiggins 2001; Wiggins 1998) and those criteria are co-defined by the lecturer and
students (Dochy and McDowell 1997; Stallings and Tascione 1996).
A model that is both summative and formative can, in theory,
assist students in establishing a base for lifelong learning and in developing the
potential to engage as intrinsically motivated learners in reflective practice. It also
encourages autonomy and critical thinking by developing capacity, influence and
metacognition (Rivers 2001; Tait-McCutcheon and Sherley 2006). Self-assessment has the
capacity to promote autonomous learning, which is why it is a valuable aspect of
sustainable learning.
Peer-assessment
Peer-learning and peer-assessment have also taken a central position in the literature
regarding assessment and assessment reform (Boud, Cohen, and Sampson 1999;
Thomas, Martin, and Pleasants 2011). Peer-assessment is defined by Topping
(2007) as, ‘an arrangement in which individuals consider the amount, level, value,
worth, quality, or success of the products or outcomes of learning of peers of similar status’ (250). While Boud, Cohen, and Sampson (1999) caution against students
making judgements about each other’s performance in working groups, they also
acknowledge that ‘the input of peers into assessment decisions is valuable and ways
of using data of this kind must be found’ (421).
In seminal research conducted by Falchikov (1986), students who participated in
peer-assessment felt that it was challenging, but also helped to develop their critical
thinking skills. This is further supported by research conducted by Bloxham and
West (2004), who found that peer-assessment not only supported students’ own
learning but also improved their understanding of the process of assessment. The
pitfalls of peer-assessment with regard to concerns about efficacy, accuracy and size
of classes have also been canvassed (see Boud, Cohen, and Sampson 1999; Ng and
Earl 2008; Taylor 2008); however, in their study, Dochy, Segers, and Sluijsmans
found ‘that the use of the self-, peer- and co-assessments is effective’, and their
results with regard to accuracy indicated that, ‘self- and peer-assessment can be
used for summative purposes as a part of the co-assessment’ (1999, 344). Therefore,
the use of self- and peer-assessment as a summative aspect of a formative process
may have the capacity to benefit students’ learning without sacrificing the academic
integrity of the course. This is confirmed by Topping (1998), who found that a
majority of studies suggest that peer-assessment is of adequate reliability and validity, and only a minority found the reliability and validity of peer-assessment unacceptably low.
The conceptual foundation of the AASL and ASPAL models of assessment
incorporates the positive aspects of both self- and peer-assessment while adjusting
and accounting for the concerns expressed in the literature with regard to peer-assessment. The ASPAL model provides a means for using self-assessment to
enhance metacognition (Ozogul and Sullivan 2007; Rivers 2001) and peer-assessment as a valuable tool for pre-service teachers to learn how to assess and judge
their own work against that of other students (Topping 1998). Raban and Litchfield
(2007) assert that the use of self- and peer-assessment ‘creates a formative, diagnostic and summative assessment environment in which the students can learn the
skills of peer-assessing their fellow students using quantitative rating and qualitative
comments’ (35). The contention of Raban and Litchfield is particularly relevant in
the context of pre-service teachers, where the capacity to assess accurately is an
important professional skill needed in their career.
The conceptual development of the AASL and ASPAL models
In the development of these assessment models, inspiration was drawn from the Australian Learning and Teaching Council (ALTC) paper Assessment
2020: Seven propositions for assessment reform in higher education (Boud and
Associates 2010), and the ideas of situated learning, legitimate peripheral participation (Lave and Wenger 1991) and communities of practice (Wenger 1998). Through
the literature review, informal surveys of undergraduate education students at the
University of Notre Dame Australia and the perception of limited levels of engagement among students, a model of assessment was sought that could have the potential to shift the ways in which students regard assessment and transform the manner
in which assessment occurs.
The initial informal surveys of approximately 200 undergraduate education
students were conducted by their lecturer during class, and it was found that:
• More than 80% of students were not reading their own assessments before
submitting them.
• More than 85% of students had not seen another student’s assessment task.
• Almost 100% of students were working in isolation from their peers.
• More than 90% of students had not been given the opportunity to be actively
involved in creating assessments or the marking criteria for those assessments.
These were discouraging responses, which led to brainstorming sessions with
students regarding better forms of assessment. These sessions led to trialling new
and creative ways of engaging students through various forms of assessment. These
trials resulted in the following contentions, which are also supported by the literature:
• Assessment should reflect what is learnt and considered important throughout
the course (Donald and Denison 2001).
• Students want feedback as soon as possible after an assignment is submitted.
If too long a period of time passes without feedback, the feedback is not
valued (Black and Wiliam 1998).
• Anonymity in peer-assessment is essential (Zariski 1996).
• Students do not like relying on peers for a mark – they want their marks to
be moderated by an expert (Boud, Cohen, and Sampson 1999).
• Students need practice in assessing and in the language of assessment before
they take part in the process of peer- and/or self-assessment (De Grez, Valcke,
and Roozen 2012).
• Students want to be part of the process of creating assessments, developing
criteria and marking tasks, as long as they have been trained to do so (De
Grez, Valcke, and Roozen 2012; Topping 1998).
A significant attribute of the ASPAL model is its integration of both formative
and summative assessment features. The purpose of assessment is multifaceted;
therefore, assessment models that can encompass the various aspects and purposes
of assessment may be useful in reforming the current practice. The use of self- and
peer-assessment as a summative judgement is the culminating aspect of the ASPAL
model; however, the procedure of assessment is an ongoing process that occurs over
weeks, which allows for the entire process to be formative. As a result, a conceptual framework for sustainable assessment was sought to establish justification for
the use of the two models proposed. The models incorporate the best aspects of
both formative and summative assessment by informing student learning throughout
the assessment process and counting towards their final mark.
Authentic assessment for sustainable learning (AASL)
In the AASL model, self-assessment, peer-assessment and lecturer assessment are
combined to produce a summative grade for the student. In this model (see Figure 1):
• Lecturer assessment accounts for 40% of the overall mark allowing the
lecturer’s mark to act as a moderator for the self- and peer-assessment marks.
• Two peers collaboratively mark another student’s anonymous task. While the
peers must collaborate during the process, with verbal communication being
an integral aspect, they do not have to agree on the mark given. Each peer’s
mark will account for 15%, making the total peer-assessment account for 30%
of the overall mark.
• Lastly, the student will mark their own task against the criteria, which they
had a part in creating, with the perspective of having seen two of their peers’
assignments. The self-assessment accounts for the last 30% of the overall
mark, thus giving students a significant influence on their own marks. This should empower students to critically reflect on their work in relation to their peers’. (A worked example of this weighting follows this list.)
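To make the weighting above concrete, here is a minimal sketch, not drawn from the article itself, of how an AASL summative mark could be combined from its components. The function name and the 0–100 mark scale are assumptions for illustration only.

```python
def aasl_final_mark(lecturer: float, peer_1: float, peer_2: float, self_mark: float) -> float:
    """Combine AASL component marks (assumed 0-100 scale) into a summative grade.

    Weights follow the AASL model described above: lecturer 40%,
    each peer 15% (30% in total) and self-assessment 30%.
    """
    return 0.40 * lecturer + 0.15 * peer_1 + 0.15 * peer_2 + 0.30 * self_mark

# Example: lecturer 72, peers 80 and 75, self-assessment 85 -> 77.55 overall
print(round(aasl_final_mark(72, 80, 75, 85), 2))
```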
Figure 1. Authentic assessment for sustainable learning (AASL).
Authentic self- and peer-assessment for learning (ASPAL)
The AASL is the model for assessment and ASPAL constitutes the delivery method for the AASL model (see Figure 2).
Figure 2. Authentic self- and peer-assessment for learning (ASPAL).
In our research, the process was compartmentalised into various stages and differentiated for individual assessment tasks and group assessment tasks: the ASPAL individual assessment model is illustrated in Figure 3 and the group assessment model in Figure 4. The premise of the model is
similar to AASL but further delineates the process. For the purposes of the research
regarding the effective implementation of the models, students were surveyed at the
outset of their unit1 concerning their levels of engagement with their courses, their
levels of satisfaction with regard to learning and assessment and their preliminary
feelings about participating in the ASPAL process. The ASPAL process happened
over the course of approximately five weeks, which allowed the process to be formative, while still culminating in a summative grade. From the time the marking
criteria were created and the pilot marking occurred, students were able to engage
with each other and seek feedback on their works in progress. Given that the task is authentic, this was an imperative aspect of the process.
The next stage after the survey was to collaboratively develop marking criteria
that addressed the predetermined outcomes of the unit in line with the backward
mapping principles of Understanding by Design (Wiggins and McTighe 2005). The
next stage in the ASPAL process is pilot marking. Pilot marking is the practice of
marking sample pieces of work, similar to those which the students are required to
complete, to understand how to mark and to gain consensus amongst all students
and the lecturer with regards to the level of work expected and required to attain the different grades.2
Figure 3. ASPAL individual assessment model procedures.
Figure 4. ASPAL group assessment model procedures.
By practising the marking process ahead of time, it was expected that the
students would be able to grade with more confidence than if it were their first
time. After the pilot marking session, the students are ready to participate in peer
and then self-assessment, which would occur two weeks after the pilot marking. In
the weeks in between, students were encouraged to conduct self- and peer-assessment, informally, prior to the official marking taking place. Directly following the
peer- and self-assessment, papers are returned, students receive their feedback from their peers and the lecturer, and are then debriefed by means of a reflection session
to discuss the outcomes of the process. Students were then surveyed again at the
end of the semester where they were asked to reflect on the positive and negative
aspects of the AASL and ASPAL processes.
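As a compact summary of the sequence just described, the sketch below (an illustration only, not the authors’ materials) lists the ASPAL delivery stages as an ordered structure; the grouping and wording of the stages are inferred from the description above, and the timing is approximate.

```python
from dataclasses import dataclass

@dataclass
class AspalStage:
    order: int
    name: str
    notes: str

# Ordered ASPAL delivery stages as described in the text; timing is approximate.
ASPAL_STAGES = [
    AspalStage(1, "initial survey", "engagement, satisfaction and preliminary feelings about ASPAL"),
    AspalStage(2, "collaborative criteria development", "criteria mapped to the unit's predetermined outcomes"),
    AspalStage(3, "pilot marking", "mark sample work to build consensus on expected standards"),
    AspalStage(4, "informal self- and peer-assessment", "feedback on works in progress before submission"),
    AspalStage(5, "peer- then self-assessment", "official marking, about two weeks after pilot marking"),
    AspalStage(6, "return of papers, feedback and debrief", "reflection session on the outcomes of the process"),
    AspalStage(7, "end-of-semester survey", "reflect on positive and negative aspects of AASL/ASPAL"),
]

for stage in ASPAL_STAGES:
    print(f"{stage.order}. {stage.name}: {stage.notes}")
```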
ASPAL individual assessment model
The rationale for this method of assessment is that the student becomes a vital part
of the assessment process. Not only are they included in creating the marking criteria for their work, but they are actively involved in the marking processes and providing feedback to their peers. The process is as follows: students are randomly
paired and given two anonymous assignments. They are given 20 min to complete
the process, during which they are each to read the assignment and give it a tentative mark of between 1 and 5 (low to high). They then discuss and defend their
mark with the other member of their pair, and award their final mark. While collaboration is necessary, agreement is not. The same process is repeated for the second
assignment. At the end of the peer-marking session, the students spend 20 min
marking their own assignment, alone, against the marking criteria. This reflective
self-assessment of their own work against the criteria is imperative for the process,
since students are evaluating themselves against the criteria but also in relation to
their peers’ work, which they have just assessed. The pilot marking conducted earlier in the process, the collaboratively developed marking criteria, and the reading and marking of their peers’ work together provide a comprehensive basis of comparison for their own performance.
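The article does not specify the mechanics of distributing anonymous scripts, so the following is a minimal sketch, under the assumptions that submissions are de-identified and that no student receives their own work, of one way the random pairing and allocation described above could be organised. The function name and data structures are illustrative only.

```python
import random

def allocate_peer_marking(student_ids, seed=None):
    """Randomly pair students and allocate two anonymous scripts to each pair.

    Assumes one submission per student, identified only by its author's id,
    and ensures that neither member of a pair receives their own work.
    Note: this sketch does not guarantee that every script is marked exactly
    twice; a full allocation would also need to balance coverage.
    """
    rng = random.Random(seed)
    ids = list(student_ids)
    rng.shuffle(ids)
    pairs = [(ids[i], ids[i + 1]) for i in range(0, len(ids) - 1, 2)]

    allocation = {}
    for pair in pairs:
        eligible = [s for s in student_ids if s not in pair]
        allocation[pair] = rng.sample(eligible, 2)  # two scripts for the pair to co-mark
    return allocation

# Example: eight students, fixed seed so the allocation is reproducible.
print(allocate_peer_marking([f"s{i}" for i in range(1, 9)], seed=42))
```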
The ASPAL model requires the assessment process to occur as soon as possible
after students’ work has been submitted. The literature is clear about the necessity
for such expeditious feedback (Hattie 2003; James, McInnis, and Devlin 2002;
Zariski 1996), and this was confirmed by student views. Ideally, this process would
take place within one week after the assignments were handed in, or during the next
available lecture or tutorial. The lecturer’s mark, worth 10% more than the self-assessment and the peer-assessment individually, works as a moderator for students’
marks. However, as it is less than the cumulative total of the self- and peer-marks,
control of the overall mark is in the hands of the students, not the lecturer. This
level of engagement with the critical success factors of the unit and the course is
the basis of the implementation of this model. Collaborative processes both among
students and between the students and the lecturer are deemed vital for sustainable
learning practices.
ASPAL group assessment model
The rationale of this model is the same as that of the individual model, with one
exception: the addition of the assessment of individuals’ performances within the
group accounts for 15% of the individual’s mark. In this model, the peer-assessment
mark and the self-assessment mark account for 25% each, rather than the 30% in
the individual model, and the lecturer’s mark accounts for 35%, rather than 40%.
Group- and self-assessment of individual performance is a necessary factor in ensuring that groups work collaboratively and that all members fully participate in the
group’s final product.
In the ASPAL group model, each group peer-marks other groups’ work; the
rationale for this is twofold: first, teamwork and collaboration are essential criteria
for success in a group task; secondly, marking as a group against the criteria should
give the group a more accurate basis for awarding their own self-assessed group
mark during the next phase of the process. Although the group is awarded one
mark for their task, each group member may receive a slightly different mark based
on their individual contribution to the group work. This is completed through a
group and self-assessment of individual performance, which is unique to the group
model, and is a twofold process. Firstly, each group member prepares a short report
detailing his or her contribution to the final assignment. This is known as an Individual contribution report (ICR) (Clark, Davies, and Skeers 2005). Each group
member reads the ICR before marking that individual on his or her performance.
Individual performance is marked on a simple 1–5 scale (low to high). In addition
to the numeric mark, each member of the group also indicates their agreement or
disagreement with each group member’s ICR and makes a qualitative comment
supporting the score awarded.
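As a rough illustration of the group-model weighting and the ICR ratings described above, here is a minimal sketch, not taken from the article, that combines the group-level marks (lecturer 35%, peer group 25%, group self-assessment 25%) with the 15% individual-contribution component. The mapping of the 1–5 ICR scale onto a percentage mark, and the function name, are assumptions made for illustration.

```python
def aspal_group_member_mark(lecturer, peer_group, self_group, icr_ratings):
    """Combine ASPAL group-model components into one member's mark (0-100 scale assumed).

    Group-level weights: lecturer 35%, peer group 25%, group self-assessment 25%.
    The remaining 15% is the member's individual contribution, taken here as the
    mean of the 1-5 ICR ratings rescaled to 0-100 (an assumed rescaling).
    """
    group_component = 0.35 * lecturer + 0.25 * peer_group + 0.25 * self_group
    individual = (sum(icr_ratings) / len(icr_ratings)) / 5 * 100
    return group_component + 0.15 * individual

# Example: group task marked 70 by the lecturer, 75 by a peer group and 80 by the
# group itself; this member's contribution rated 4, 5 and 4 by the other members.
print(round(aspal_group_member_mark(70, 75, 80, [4, 5, 4]), 2))  # 76.25
```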
The rationale for assigning group work and awarding a group-assessment mark
should be based on those authentic skills that students will need in life outside the
classroom: communication, cooperation, collaboration, sacrifice and teamwork. As
Sternberg remarks, ‘we should assess what students need to become: active and
engaged citizens of the world in which they will live – in a sense, what it takes to
be “expert” citizens’ (2008, 20). It is vital that the rationale for assigning any
assessment task be authentic and valid: if a group task is assigned, it should be to
benefit the students, not to satisfy institutional needs. In this model, students are
assigned, or choose, their groups according to the particular assessment requirements. All members of the group will get the same mark for the assignment, albeit
moderated by the individual group mark given by the members of the group.
Discussion
The evolution from pre-service teacher to integrated member of the professional
community of teachers can be better accomplished by preparing those pre-service
teachers through the authentic undertaking of the skills and proficiencies they will
need when they enter the profession. Through the introduction of these models of
assessment, it is anticipated that the learning community in which these pre-service
teachers participate as undergraduates will help to facilitate the implementation
of learning communities in their classrooms. The premise of developing universities
as learning communities is well established in the literature (see Carroll 2005; Fulton, Yoon, and Lee 2005; O’Malley 2010); consequently, a model of authentic learning that
is focused on helping undergraduate students become full members of the community for which they are trained seems consistent with the goals of pre-service teacher education. Legitimate peripheral participation in the professional activities of
teachers through a situated learning process that culminates in a community of practice (Lave and Wenger 1991) is the foundation of the AASL and ASPAL models.
The development, implementation and evaluation of pedagogical practices which
engage students is a never-ending process that must be revisited in order to ensure
that students are receiving a useful, authentic and sustainable education that has
lasting value. The focus in the development of these models has been on engaging students, grounded in the fundamental belief that education is about building relationships
with students. Engagement, however, is an often overused and misunderstood term
because it can mean many different things depending on the context in which it is
used. In this article, engagement is used in line with the motivational perspective put forth by Skinner, Kinderman, and Furrer, who suggest that ‘engagement refers to the quality of a student’s connection or involvement with the
endeavour of schooling and hence with the people, activities, goals, values and
place that compose it’ (2009, 494). At the heart of this conceptualisation of engagement is the idea that it encourages and produces a dedication to deeper learning and understanding. The ASPAL model attempts to engage students in this deeper learning and
understanding through the development of learning communities in which pre-service teachers are invited into the world in which we, as their instructors, operate,
and in which they will work upon completion of their degree. As educators, we are
inducting our students into an understanding of the professional skills and attributes
they will require in their careers, essentially a ‘community of practice’ (Wenger
1998).
The conceptual development of these two models occurred over the length of a
semester and a trial of these models was undertaken with approximately 300 undergraduate primary education students in their second year of a four-year degree.
There are considerable implications in the implementation of these models that
could make them unsuitable for many university courses. The size of courses and the
absence of tutorials in certain courses may not suit these models of assessment.
Although developed for pre-service teacher education, it is believed that on a
broader level, these models of assessment can be adapted to any suitable course that seeks to engage students through assessment; however, the bias towards education students and the development of professional skills required for the teaching profession is acknowledged. A fundamental belief
that engagement is the key to reform in the tertiary sector, not only to ameliorate
attrition rates but also to enhance the development of the critical skills necessary
for students to prosper in a technology-driven, global world, underpins the conceptual framework of these models of assessment. Skills such as creativity, innovation, critical thinking and autonomous learning are recognised as
universal graduate attributes needed in an unpredictable global market (McGraw-Hill 2008; Singh and Terry 2008; Tait-McCutcheon and Sherley 2006).
The disappointing level of engagement of students in higher education is a problem that may be overcome by abandoning the traditional practices that
ignore student needs. Students need to be engaged through a means in which they
have an investment, one that they understand and respect; and assessment has that
capacity (Lamprianou and Athanasou 2009). However, when the value of assessment is being discussed as the mechanism that drives student learning, it must be
realised that only summative assessment has that capacity. Summative assessment
can drive learning because students have a vested interest in passing their course
and, therefore, value the importance of this kind of assessment; that is why this
model culminates in a summative mark, rather than being purely formative. Formative
assessment has its place and is embedded in the model; however, without a grade
or a mark attached to the process, one would have to question whether it would
have any real impact or power, or have the capacity to shift the control mechanisms
that are entrenched in traditional assessment practices (Boud 2000; Siemens 2004).
Students value summative assessment, because summative assessment affects the
outcomes of their studies. Students who value learning and are engaged in their
studies would see the benefit of purely formative assessment; however, these models seek to engage all students in the process of assessment by ensuring that they
are affected by the results of their engagement, thereby increasing their learning
potential.
The value students place on assessment, if harnessed creatively, can positively
affect learning; however, many forms of assessment which the students are exposed
to are disengaging them from the critical success factors of their courses. By continuing to perpetuate inauthenticity in learning and assessment through engrained, traditional pedagogical methods we may be actively disengaging students in their
studies.
Conclusions
The propositions put forth in this article are in line with current empirical research
that contends that traditional assessment practices in the tertiary sector are not meeting the needs of twenty-first century learners (Falchikov and Thomson 2008; James,
McInnis, and Devlin 2002; Lamprianou and Athanasou 2009). The attempts and
trials to implement new innovative forms of assessment into a pre-service teacher
education course support these contentions and echo the broader conclusions made by Boud and Associates, who argued:
Universities face substantial change in a rapidly evolving global context. The
challenges of meeting new expectations about academic standards in the next decade
and beyond mean that assessment will need to be rethought and renewed. (2010, 1)
In specific relation to pre-service teacher education, it is imperative that we cultivate
the educational and pedagogical environments that our students are accustomed to
and will be working within when they enter the profession.
In reconceptualising assessment practices, it became apparent that in the pursuit
of the objectives of authentic and sustainable assessment, the paramount focus must
be on enhancing students’ capacity for learning and engagement with the curriculum. By facilitating a learning community where there is a shared understanding of
the assessment process and the criteria for success, students were invited to be a
part of the process rather than an observer on the periphery seemingly subject to
arbitrary verdicts. Observations made throughout the implementation of this process
led to the conclusion that when students become part of the assessment process, not
only is autonomous learning encouraged, but the realm of the educative experience
becomes apparent to the students and then educative transformation can occur.
The most valuable outcome observed during this process was that students were
engaging in discourse, both with their peers and with their lecturers, with regard to
course outcomes. Anecdotal feedback in the form of student comments after the
process, student emails and discussion board postings following the ASPAL process
illustrated that students were not only reading their own work before handing it in
but were also checking each other’s work and providing formative feedback to their
peers. Students also reported being able to better gauge the quality of their work
before submitting it, because they had seen others’ work and understood the assessment criteria. Based on my observations, students were fully engaged in the processes of creating criteria, and marking and providing feedback for the task, which
seemed to enhance their ability to learn autonomously. The most significant observation came from the anecdotal feedback, specifically in discussion board postings,
that students were able to successfully reflect on the value and quality of the learning that took place, an indicator of the skills necessary for sustainable learning.
In returning to the driving questions for this research: why do we assess? For
whose benefit do we assess? And are these aims achieved through current practices? The
following answers, based on the implementation of AASL and ASPAL, are proposed:
• We assess firstly to inform student learning and secondly to evaluate that
learning against a set of standards or outcomes. In ensuring that student learning is paramount and evaluation secondary, the traditional focal point of
assessment is changed from assessment of learning (summative) to assessment
for learning (formative).
• It is contended that assessment, and indeed all aspects of the teaching and
learning continuum, should be directed towards engaging students in the professional discourse of the course they are undertaking. Therefore, we assess
for the benefit of the student and ourselves, not exclusively in an evaluative
way, although this is one component of assessment, but rather to engage students in the authentic manifestation of their course with regards to the world
outside of the classroom.
• Whether or not these outcomes are being achieved by current practice is open
to conjecture. However, what is clear, both in experience and in the literature,
is that while innovative forms of assessment have been gaining traction in
recent years, there is still a significant sector of the tertiary community that
has not relinquished its hold on traditional assessment practices that have the
potential to disengage students and have a negative impact on their learning
(James, McInnis, and Devlin 2002).
Implementing AASL through ASPAL requires what Singh and Terry (2008) call
‘profound shifts’ in the conceptualisation of assessment at tertiary level (402). It is
the contention of the author that those ‘conceptual shifts’ are imperative for the tertiary sector to thrive in the twenty-first century.
Acknowledgement
Timothy Perkins, a lecturer in primary education at the University of Notre Dame Australia,
was the co-developer of the AASL and ASPAL models and was instrumental in the
development and application of the AASL and ASPAL processes.
Notes
1. Unit refers to the specific class they are enrolled in, in this case ED1724 Key Learning
Area Mathematics 1.
2. The University of Notre Dame Australia uses a five-tier grading rubric ranging from
High Distinction to Fail.
Notes on contributor
Sean Kearney is a lecturer in education at the University of Notre Dame Australia, Sydney.
He has recently been awarded his PhD on induction programmes for beginning teachers in
independent schools in NSW and has an MEd in Educational Leadership from the University of Wollongong. He has published and presented papers at international conferences and in
journals.
References
Andrade, H., and Y. Du. 2007. Student responses to criteria-referenced self-assessment.
Assessment & Evaluation in Higher Education 32, no. 2: 159–81.
Andrade, H., and A. Valtcheva. 2009. Promoting learning and achievement through self-assessment. Theory into Practice 48, no. 1: 12–9.
Australian Government. 2009. Transforming Australia’s higher education system. Commonwealth of Australia. http://www.deewr.gov.au/HigherEducation/Documents/PDF/Additional%20Report%20-%20Transforming%20Aus%20Higher%20ED_webaw.pdf.
Black, P., and D. Wiliam. 1998. Inside the black box: Raising standards through classroom
assessment. Phi Delta Kappan 80: 139–48.
Bloxham, S., and P. Boyd. 2007. Developing effective assessment in higher education: A
practical guide. Maidenhead: Open University Press.
Bloxham, S., and A. West. 2004. Understanding the rules of the game: Marking peer-assessment as a medium for developing students’ conceptions of assessment. Assessment &
Evaluation in Higher Education 29, no. 6: 721–33.
Boud, D. 1990. Assessment and the promotion of academic values. Studies in Higher
Education 15, no. 1: 1–13.
Boud, D. 2000. Sustainable assessment: Rethinking assessment for the learning society.
Studies in Continuing Education 22, no. 2: 151–67.
Boud, D., and Associates. 2010. Assessment 2020: Seven propositions for assessment reform in higher education. Sydney: Australian Learning and Teaching Council.
Boud, D., R. Cohen, and J. Sampson. 1999. Peer learning and assessment. Assessment &
Evaluation in Higher Education 24, no. 4: 413–26.
Boud, D., and N. Falchikov. 2005. Redesigning assessment for learning beyond higher
education. In Higher Education in a Changing World. Proceedings of the 28th HERDSA
Annual Conference, July 3–6, in Sydney, Australia. 34.
Bradley, D., P. Noonan, H. Nugent and B. Scales. 2008. Review of Australian Higher Education: Final Report. Commonwealth of Australia. http://www.deewr.gov.au/HigherEducation/Review/Documents/PDF/Higher%20Education%20Review_one%20document_02.
pdf.
Brint, S., A.M. Cantwell, and R.A. Hanneman. 2008. The two cultures of undergraduate
academic engagement. Research in Higher Education 49: 383–402.
Carroll, T. 2005. Induction of teachers into 21st century learning communities: Creating the
next generation of educational practice. The New Educator 1, no. 3: 199–204.
Clark, N., P. Davies and R. Skeers. 2005. Self- and peer-assessment in software engineering
project. Proceedings of the Seventh Australasian Conference on Computing Education,
in Newcastle, Australia.
De Grez, L., M. Valcke, and I. Roozen. 2012. How effective are self- and peer-assessment
of oral presentation skills compared with teachers assessment? Active Learning in Higher
Education 13, no. 2: 129–42.
Dochy, F., and L. McDowell. 1997. Introduction: Assessment as a tool for learning. Studies
in Educational Evaluation 23, no. 4: 279–98.
Dochy, F., M. Segers, and D. Sluijsmans. 1999. The use of self-, peer- and co-assessment in
higher education: A review. Studies in Higher Education 24: 331–50.
Donald, J.G., and D.B. Denison. 2001. Quality assessment of university students: Student
perceptions of quality criteria. The Journal of Higher Education 72, no. 4: 478–502.
Downloaded by [Deakin University Library] at 19:43 08 January 2013
16
S. Kearney
Elwood, J., and V. Klenowski. 2002. Creating communities of shared practice: The
challenges of assessment use in learning and teaching. Assessment & Evaluation in
Higher Education 27, no. 3: 243–56.
Falchikov, N. 1986. Product comparisons and process benefits of collaborative peer group
and self-assessment. Assessment & Evaluation in Higher Education 11: 146–66.
Falchikov, N., and D. Boud. 1989. Student self-assessment in higher education: A meta-analysis. Review of Educational Research 59, no. 4: 395–430.
Falchikov, N., and K. Thomson. 2008. Assessment: What drives innovation? Journal of
University Teaching and Learning Practice 5, no. 1: 49–60.
Fulton, K., I. Yoon and C. Lee. 2005. Induction into learning communities. Washington,
DC: National Commission on Teaching and America’s Future.
Graff, G. 2009. Why assessment? Pedagogy: Critical Approaches to Teaching Literature,
Language, Composition, and Culture 10, no. 1: 153–65.
Hafner, J., and P. Hafner. 2003. Quantitative analysis of the rubric as an assessment tool: An
empirical study of student peer-group rating. International Journal of Science Education
25, no. 12: 1509–28.
Hanrahan, S.J., and G. Isaacs. 2001. Assessing self- and peer-assessment: The students’
views. Higher Educational Research and Development 20, no. 1: 53–70.
Hattie, J. 2003. Teachers make a difference: What is the research evidence? Paper presented
at the Australian Council for Educational Research Annual Conference on Building
Teacher Quality, in Melbourne, Australia.
James, R., C. McInnis, and M. Devlin. 2002. Assessing learning in Australian Universities.
The University of Melbourne: Centre for the Study of Higher Education.
Klenowski, V. 2006. Editorial: Learning oriented assessment in the Asia Pacific region.
Assessment in Education 13, no. 2: 131–4.
Lamprianou, I., and J. Athanasou. 2009. A teacher’s guide to educational assessment.
Revised edition. Rotterdam: Sense.
Lave, J., and E. Wenger. 1991. Situated learning: Legitimate peripheral participation.
Cambridge: Cambridge University Press.
McDonald, B., and D. Boud. 2003. The impact of self-assessment on achievement: The
effects of self-assessment training on performance in external examinations. Assessment
in Education: Principles, Policy & Practice 10, no. 2: 209–20.
McGraw-Hill. 2008. 21st century approach: Preparing students for the global workplace,
CTB McGraw-Hill. http://www.ctb.com/ctb.com/control/researchArticleMainAction?topicId=429&articleId=469&p=ctbResearch.
Ng, R.J., and J.K. Earl. 2008. Accuracy in self-assessment: The role of ability, feedback,
self-efficacy and goal orientation. Australian Journal of Career Development 17, no. 3:
39–50.
O’Malley, G. 2010. Designing induction as professional learning community. The
Educational Forum, 74, no. 4: 318–327.
Ozogul, G., and H. Sullivan. 2007. Student performance and attitudes under formative evaluation by teacher, self- and peer-evaluators. Education Technology Research and Development. doi: 10.1007/s11423-007-9052-7.
Raban, R., and A. Litchfield. 2007. Supporting peer-assessment of individual contributions
in groupwork. Australasian Journal of Educational Technology 23, no. 1: 34–47.
Rivers, W. 2001. Autonomy at all cost: An ethnography of metacognitive self-assessment
and self-management among experienced language learners. The Modern Language
Journal 85, no. 2: 279–90.
Segers, M., F. Dochy, and E. Cascallar. 2003. The era of assessment engineering: Changing perspectives on teaching and learning and the role of new modes of assessment. In Optimising new modes of assessment: In search of qualities and standards, ed. M. Segers, F. Dochy, and E. Cascallar, 1–12. Dordrecht: Kluwer Academic.
Siemens, G. 2004. Learning management systems: The wrong place to start learning. http://
www.elearnspace.org/Articles/lms.htm.
Singh, K., and J. Terry. 2008. Fostering students’ self-assessment skills for sustainable
learning. In Sustainability in higher education: Directions for change, Edu-Com 2008
International Conference, November 19–21, in Khon Kaen, Thailand.
Downloaded by [Deakin University Library] at 19:43 08 January 2013
Assessment & Evaluation in Higher Education
17
Skinner, E.A., T.A. Kinderman, and C.J. Furrer. 2009. A motivational perspective on
engagement and disaffection: Conceptualization and assessment of children’s behavioural
and emotional participation in academic activities in the classroom. Educational and
Psychological Measurement 69, no. 3: 493–525.
Shepherd, L.A. 2000. The role of assessment in a learning culture. Educational Researcher 29, no. 7: 4–14.
Stallings, V., and C. Tascione. 1996. Student self-assessment and self-evaluation.
Mathematics Teacher 89: 548–55.
Sternberg, R.J. 2008. Assessing what matters. The Best of Educational Leadership 2007–2008: 20–26.
Stiggins, R.J. 2001. Student-involved classroom assessment. 3rd ed. New Jersey: Merrill/Prentice Hall.
Tait-McCutcheon, S., and B. Sherley. 2006. In the hands of the learner: The impact of self-assessment on teacher education. MERGA 2006 Conference Proceedings, 353–59, Canberra.
Taylor, J.A. 2008. Assessment in first year university: A model to manage transition. Journal
of University Teaching and Learning Practice 5, no. 1: 19–33.
Thomas, G., D. Martin, and K. Pleasants. 2011. Using self- and peer-assessment to enhance
students’ future learning in higher education. Journal of University Teaching and
Learning Practice 8, no. 1: 1–17.
Topping, K. 1998. Peer-assessment between students in colleges and universities. Review of
Educational Research 68, no. 3: 249–76.
Topping, K.J. 2007. Trends in peer learning. Educational Psychology: An International
Journal of Experimental Psychology 25, no. 6: 631–45.
Vu, T.T., and G. Dall’Alba. 2008. Exploring an authentic approach to assessment to enhance
student learning. In Re-imagining higher education pedagogies. Proceedings of the
AARE Conference, Brisbane, Australia.
Wenger, E. 1998. Communities of practice, learning, meaning and identity. Cambridge:
Cambridge University Press.
Wiggins, G. 1998. Educative assessment: Designing assessments to inform and improve student performance. Portland: Calendar Islands.
Wiggins, G., and J. McTighe. 2005. Understanding by design. Alexandria, VA: Association
for Supervision and Curriculum Development.
Zariski, A. 1996. Student peer-assessment in tertiary education: Promise, perils and practice.
In Teaching and learning within and across disciplines, ed. Abbott and Willcoxson,
189–200, Perth: Murdoch University.