Evaluation and Assessment Conference
Paper Title:
Application of ICT and Rubrics to the Assessment process where
professional judgement is involved: The development of an e-marking tool is described.
Alistair Campbell
Edith Cowan University
a.campbell@ecu.edu.au
Abstract:
Until recently the application of technology to the assessment process has focused on the student side of
assessment. This paper focuses on the application of technology to the lecturing side of the assessment process,
specifically where professional judgement is involved.
A prototype of an e-marking tool that supports the marking/moderation process is explained and described. This
tool moves the marking/recording sheet off the desk and onto the desktop (computer screen). The aims of the e-marking tool are to reduce the unproductive work of adding up and recording comments and marks, to allow many different views of the information to be printed, and to increase the time spent on feedback, reflection and moderation. The tool combines features of the word processor, spreadsheet and database applications, and
the paper-based marking sheet. The final version would be accessible via the web or laptop and allow anytime
anyplace recording and comparison of marks. A number of working examples of the tool are described and
discussed.
In the moderation process of the marking key the tool could be used to develop and refine a rubric-based scale
based on the assessment criteria. The tool could also record feedback about the quality of the marking key
during and after marking. In the moderation of marking itself, the tool could be used to instantly compare marks with those of the co-ordinator. The e-marking tool could be used in all stages of assessment
from developing the marking key to marking and evaluating the results and printing out results and student
feedback.
Keywords: assessment, rubric, moderation, ICT, and quality assurance
Introduction
Applications of Information and Communication Technologies (ICT) have tended to focus on the student side of assessment, with little regard for the use of technology in assessment by lecturing staff. Marking electronically (e-marking) is no longer a future technological possibility but a present reality. The e-marking tool addresses this imbalance by focusing on the lecturer
rather than the learner in the assessment process. This paper describes and explains the
features of an e-marking tool and provides a number of sample assessment applications. The
tool augments the marking/moderation aspect of the assessment process and is designed to
reduce the busy work of marking, allow more time for quality feedback and allow
collaborative marking and moderation to more easily occur. The tool is under development
and combines features of a spreadsheet, a word processor and a database, and uses a detailed
rubric-based marking key. The marking aspect of the tool gives access to an on-screen
marking sheet and other views of the information, while the moderation side of the tool gives
access to electronic exemplars of work and assistance, allowing markers to instantly compare their own marks with those of the co-ordinator.
Assessment
In a comprehensive review of the literature in the area of assessment that covered 681
publications, Black and Wiliam (1998) found that the term ‘assessment’ was not tightly defined, with no single widely accepted meaning. While “some educationists do not distinguish between assessment and evaluation” (Miller, Cox, & Imrie, 1998, p. 3), Miller et al. go on to define
assessment as any “means by which students’ progress and achievement are measured,
recorded and communicated to students and relevant university authorities” (p. 4).
Furthermore, research into assessment (Knight, 2002) and management of assessment in
Higher Education (Yorke, 1998) has found that these areas have been neglected in terms of
both research and funding.
This neglect is coming to an end with the increase in accountability and workload of lecturers.
The subsequent growing demand for reliability and validity in the allocation of grades and
levels places a heavy demand on the assessment process in order to satisfy learners and
university and other official bodies (Freeman & Lewis, 1998), and on the coordinator and
tutors to achieve these outcomes. The following reasons are typically given to justify why an
assessment needs to be accurate and reliable (Fry, Ketteridge, & Marshall, 1999; Preston &
Shackelford, 1999):
1. It is useful and fair to students;
2. It serves internal and external quality assurance purposes; and
3. It constitutes a protection against the increasingly likely legal challenges from
disaffected students.
Unreliability in the assessment process can be due to inconsistency of individual assessors
(poor intra-marker reliability) or inconsistencies across assessors (poor inter-marker
reliability). Thus the fewer the assessors the easier it is to control the reliability factor.
However, even with one marker, strategies need to be developed to ensure reliability.
Although the literature discusses a number of strategies to improve reliability, such as double-marking (Brown & Knight, 1994) or using a list of criteria (Miller et al., 1998), these are often costly, and are poorly implemented or not implemented at all in higher education (Boud, 1995, 1995a; Fry et al.,
1999), and until recently it seems most of the assessment of essays has been by “academic
instinct” (Fry et al., 1999, p. 63).
The application of e-marking and rubrics to these unreliability issues, cost factors and other
issues such as management and efficiency in the assessment process is explored in the
following description of the e-marking tool, which also includes examples of assessments
developed so far based on a rubrics design.
E-marking
ICT is changing the traditional learning paradigm and the assessment process. The boundaries
between teaching, learning and assessment are blurring. Educators, trainers and administrators
are slowly turning their attention to e-assessment or Computer-Assisted Assessment (CAA) as
computers, laptops and web access become part of the educational environment (Bull &
Sharp, 2000; Preston & Shackelford, 1999). The first area of assessment that took advantage
of CAA was objective-type assessment. There are now many products on the market offering
this type of assessment. For example, Question Mark Computing (http://www.qmark.com/)
and WebMCQTM (http://www.webmcq.com/), are at the forefront of the move towards
computerisation of tests and assessments, and provide services enabling the creation and
presentation of interactive questions via the Internet. They are at present ideal for objective-based assessment, such as revision quizzes, formative exams, training packages and
questionnaires.
The e-marking tool goes beyond this limited view of CAA and focuses on the lecturer using
CAA technology to aid in the assessment process rather than the learner being assessed by
using CAA technology.
This tool has initially been designed for high stakes assessments where:
• professional judgement is required (a high level of subjectivity),
• a formal marking key is available, and
• more than one marker, a large group of students, or both are involved.
However, as prototyping and development continue, it will become possible to apply the tool to both high and low stakes assessments. Also, the time needed to convert a paper-based marking sheet to an e-marking one will be significantly reduced.
Unlike most CAA, which requires no professional judgement on the part of the marker,
the e-marking tool has been designed for assessments where professional judgement is a
major component. The tool takes the existing marking methods where professional judgement
is involved and by applying technology to the process of marking, augments the assessment
process. This augmentation results in an improvement in the fairness, consistency, reliability,
transparency and quality assurance of the assessment process.
Application of Rubric to e-Marking
In the early stages of the development of the e-marking tool a review of the literature on
assessment marking found that very little work had been done in the area and that the main focus
was on the development and application of rubrics. Some have argued that the use of the term
needs to be more rigorously defined to allow for a more intelligent discussion of “what a
rubric is as well as its purpose in the realm of assessment” (Wenzlaff, Fager, & Coleman,
1999). Rubrics can take many forms and levels of complexity; however, they tend to have a
number of criteria that measure performance, behaviour or quality, and these criteria contain a
range of indicators, described in detail, showing the different levels of achievement that must be reached to obtain each level or grade (Coffin, 2002; McCollister, 2002;
Montgomery, 2000, 2002). An example of a part of a rubric developed for the e-marking tool
and used to assess a tutorial paper is shown in Figure 1. All the developed rubrics were given to the students beforehand so that they knew in detail what was required to obtain the different grades in each criterion.
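
To make this structure concrete, the sketch below shows one plausible way to represent such a rubric in code. It is an illustration only; the criterion names, grade labels and indicator descriptions are hypothetical rather than taken from the actual rubric in Figure 1.

    # A minimal sketch of a rubric as a data structure: each criterion maps
    # grades to detailed indicator descriptions. All names and descriptions
    # here are hypothetical illustrations, not the paper's actual rubric.
    rubric = {
        "Argument": {
            "Pass": "States a position, but support is thin or uneven.",
            "Credit": "Clear position supported by mostly relevant evidence.",
            "Distinction": "Well-structured argument with strong evidence.",
            "Higher Distinction": "Compelling, original argument, fully evidenced.",
        },
        "Referencing": {
            "Pass": "Some sources cited; style is inconsistent.",
            "Credit": "Adequate sources; minor style errors.",
            "Distinction": "Good range of sources, correctly cited.",
            "Higher Distinction": "Extensive, accurate, well-integrated citations.",
        },
    }

    # Marking then amounts to selecting one grade indicator per criterion.
    selection = {"Argument": "Credit", "Referencing": "Distinction"}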
The e-marking tool has presently been applied to several types of assessment: oral presentation, poster, essay, tutorial paper, tutorial presentation, concept map and exams. All the rubric
assessment samples have gone through a number of review and development stages with
some having both student and tutor involvement. In the early stages the criteria headings were
developed and later the indicators were developed based on the four grades of Pass (50 to 60%), Credit (60 to 70%), Distinction (70 to 80%) and Higher Distinction (>80%). In later
versions of the rubric only the indicator grades are shown, with the total mark and final grade
shown at the top. The indicator grade was the mid-point of the grade band, e.g. a Pass grade would be
55%.
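
This mid-point convention is simple to express in code. The sketch below assumes the grade bands given above; treating Higher Distinction as 80 to 100% (mid-point 90%) is an assumption, since the text only specifies ">80%".

    # Sketch of the mid-point convention: each grade band's indicator mark
    # is the middle of the band. The Higher Distinction upper bound of 100%
    # is an assumption; the paper gives only ">80%".
    GRADE_BANDS = {
        "Pass": (50, 60),
        "Credit": (60, 70),
        "Distinction": (70, 80),
        "Higher Distinction": (80, 100),  # assumed upper bound
    }

    def indicator_mark(grade):
        """Return the mid-point mark for a grade band, e.g. Pass -> 55.0."""
        low, high = GRADE_BANDS[grade]
        return (low + high) / 2

    assert indicator_mark("Pass") == 55.0  # matches the 55% example above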
Figure 1: Part of an essay rubric showing some criteria and grade indicators.
e-Marking Tool Explained
At present the application of ICT to the assessment process is limited and varies widely.
Spreadsheet applications are often used for collating of marks, while word processors are used
for recording comments and marks and reporting back to students or for just producing the
blank marking key. At a higher level, word processors are used to record comments and to annotate electronic copies of assignments, or portable document format (PDF) documents are annotated directly. Unfortunately, these applications are rarely integrated: they are time-consuming, high maintenance and inefficient for large groups.
The e-marking tool moves the marking/recording sheet off the desk and onto the computer
screen by combining and integrating the features of spreadsheet, word processor and database
applications. The information from the on-screen marking/recording sheet is recorded into a
database. Fields are designed to hold the marker’s name, student’s details, indicators/marks
and other related information. Marks or indicators and even feedback comments can be
inserted by selecting and clicking on them with the mouse, without the need to use
the keyboard. The tool adds up the marks/indicators and, if required, expresses the total as a
percentage and/or grade. The information stored can be easily managed, manipulated and
displayed in different formats that previously would have required time-consuming data entry and complex integration of spreadsheet and word processor applications. These management, manipulation and display features are a key component of the e-marking tool, streamlining or eliminating these processes in assessment.
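
A minimal sketch of how such a marking record might be stored and totalled is given below. The field names and methods are assumptions for illustration, as the paper does not specify the tool's actual database schema.

    from dataclasses import dataclass, field

    # Hypothetical record structure for one marked assignment. The actual
    # schema of the e-marking tool's database is not given in the paper.
    @dataclass
    class MarkingRecord:
        marker: str
        student_id: str
        marks: dict = field(default_factory=dict)     # criterion -> mark
        comments: dict = field(default_factory=dict)  # criterion -> feedback

        def total(self):
            """Add up the criterion marks, as the tool does automatically."""
            return sum(self.marks.values())

        def percentage(self, max_marks):
            return 100 * self.total() / max_marks

    record = MarkingRecord(marker="Tutor A", student_id="s1234")
    record.marks["Argument"] = 6.5       # selected by clicking a grade indicator
    record.marks["Referencing"] = 7.5
    print(record.total(), record.percentage(max_marks=20))  # 14.0 70.0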
Often in the marking process the same generic comment needs to be added to a number of
assignments. The tool allows for generic comments to be easily added and selected when
required. Full editing features are available within the tool and all feedback and comment
fields can be updated, spell-checked and reviewed before printing, eliminating the rewriting
of marking/recording sheets, and the possibility of any being misplaced. Finally, a student
version can be printed showing as much or as little detail as required, and these can be sorted
by tutorial group and student’s last name or any order that is required before printing.
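
One plausible way to support such generic comments is a simple comment bank from which markers select by clicking; the sketch below is an assumption about the mechanism, as the paper does not describe the tool's internals.

    # Hypothetical comment bank: generic comments are written once and then
    # selected per assignment rather than retyped. The comments themselves
    # are invented examples.
    COMMENT_BANK = [
        "Good coverage of the literature, but the argument needs a clearer structure.",
        "Referencing style is inconsistent; see the unit style guide.",
        "Strong conclusion that draws the threads of the essay together.",
    ]

    def add_generic_comment(feedback, index):
        """Append a pre-written comment selected by its index in the bank."""
        feedback.append(COMMENT_BANK[index])

    feedback = []
    add_generic_comment(feedback, 1)  # one click instead of retyping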
In the moderation mode, a similar process is followed but exemplars are used that have
previously been marked by the co-ordinator. Two to three exemplars are marked; after each marking, the co-ordinator’s marks and comments are shown below those of the marker. After
the marker has reflected on the results, the next exemplar is marked. At the end of the process,
the marker will be within the coordinator’s agreed range of marks and will be ready to
commence marking or will know what aspects of the marking key need to be discussed and
clarified with the coordinator.
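
The comparison at the heart of this moderation loop can be sketched as follows; expressing the "agreed range" as a fixed tolerance around the co-ordinator's total is an assumption for illustration.

    # Sketch of the moderation check: after each exemplar, the marker's
    # total is compared with the co-ordinator's. The +/-5 mark tolerance is
    # a hypothetical stand-in for the "agreed range" mentioned in the text.
    def within_agreed_range(marker_total, coordinator_total, tolerance=5.0):
        return abs(marker_total - coordinator_total) <= tolerance

    # (marker total, co-ordinator total) for three invented exemplars
    exemplar_results = [(62.0, 65.0), (71.0, 70.0), (58.0, 64.0)]
    for marker, coordinator in exemplar_results:
        if not within_agreed_range(marker, coordinator):
            print(f"Outside agreed range: {marker} vs {coordinator}; "
                  "discuss the marking key with the co-ordinator")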
e-Marking Tool in Use
A number of screen images will be used to highlight the features of the e-marking tool: from
tutors recording marks, a spreadsheet view etc, to a sample of a student printout or a
comparison of the tutors’ range of marks. In Figure 1, when the tutor selects a grade, the mark
attached to that grade is added to the total. The spreadsheet view of the data is shown in
Figure 2. The three totals are, from left to right, the final mark, the rounded mark and the original mark
(Note: the difference between final mark and the rounded mark is due to some students
handing assignments in late). Figure 3 shows the tutor’s marking screen, with the peer-group-assessed grades/mark recorded via the web (as shown in Figure 4) and the tutor’s grade above them. Figure 5 shows both tutor and peer group marks and comments. Once the
tutor has entered the grades, the grades are automatically converted to a mark and collated.
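
The conversion and collation step can be sketched as below, reusing the mid-point convention described earlier. The equal criterion weights and the late-penalty rule are hypothetical, since the paper only notes that late submissions explain the difference between the rounded and final marks.

    # Sketch of grade entry -> mark conversion -> collation, as shown in the
    # spreadsheet view. Mid-point marks follow the convention in the text;
    # the 10%-per-day late penalty is an invented example.
    MIDPOINT = {"Pass": 55, "Credit": 65, "Distinction": 75, "Higher Distinction": 90}

    def collate(grades, weights, days_late=0):
        """Return (final, rounded, original) percentage marks for one student."""
        total_weight = sum(weights.values())
        original = sum(weights[c] * MIDPOINT[g] for c, g in grades.items()) / total_weight
        rounded = round(original)
        final = rounded * max(0.0, 1 - 0.1 * days_late)  # assumed penalty rule
        return final, rounded, original

    print(collate({"Argument": "Credit", "Referencing": "Distinction"},
                  {"Argument": 1.0, "Referencing": 1.0}, days_late=1))
    # -> (63.0, 70, 70.0)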
Figure 2: Spreadsheet display showing only tutors’ marks.
Figure 3: Tutor recording screen, showing peer group mark.
Figure 4: Web recording screen, showing peer group grades, mark and comment.
Figure 5: Another view of the information, showing the tutor’s grades, marks, total and comments compared to those of the peer groups.
Concluding Comments
The e-marking tool is still being developed and refined, as are the assignment rubrics. Both
tutors and students have enthusiastically embraced the development and use of the rubrics,
and their feedback has been very positive. The tutors have found that the tool allows them to
work more efficiently and removes much of the unproductive aspect of marking. The tutors
feel that they mark more consistently, reliably and efficiently using the combination of rubrics
and the e-marking tool, while the students receive better and more consistent feedback.
The combination and integration of the rubric marking system and e-marking tool is blurring
the boundaries between teaching, learning and assessment. The application of e-marking to the assessment process opens up possibilities that were not available when everything was done by hand, such as anytime, anyplace marking, synchronous and asynchronous moderation,
and self and peer assessment.
References
Black, P., & Wiliam, D. (1998). Assessment and Classroom Learning. Assessment in
Education, 5(1), 7-74.
Boud, D. J. (1995). Enhancing learning through self assessment. London ; Philadelphia:
Kogan Page.
Boud, D. J. (1995a). Assessment and learning: contradictory or complementary? In P. Knight
(Ed.), Assessment for Learning in Higher Education (pp. 35-48). London: Kogan
Page.
Brown, S., & Knight, P. (1994). Assessing learners in higher education. London ;
Philadelphia: Kogan Page.
Bull, J., & Sharp, D. (2000). Developments in Computer-Assisted Assessment in UK Higher
Education. Paper presented at the Learning to Choose: Choosing to learn, QLD
Australia.
Coffin, D. G. (2002). Saving time with a rubric buffet. Strategies, 16(Sep/Oct), 5.
Freeman, R., & Lewis, R. (1998). Planning and implementing assessment. London: Kogan
Page.
Fry, H., Ketteridge, S., & Marshall, S. (1999). A handbook for teaching and learning in
higher education : enhancing academic practice. London: Kogan Page.
Knight, P. T. (2002). The Value of a Programme-wide Approach to assessment. Assessment
& Evaluation in Higher Education, 25(3), 237-251.
McCollister, S. (2002). Developing criteria rubrics in the art classroom. Art Education,
55(Jul), 46.
Miller, A. H., Cox, K., & Imrie, B. W. (1998). Student assessment in higher education : a
handbook for assessing performance. London: Kogan Page.
Montgomery, K. (2000). Classroom rubrics: systematizing what teachers do naturally. The
Clearing House, 73(Jul/Aug), 324.
Montgomery, K. (2002). Authentic tasks and rubrics: going beyond traditional assessments in
college teaching. College Teaching, 50(Winter), 34.
Preston, J., & Shackelford, R. (1999). Improving On-Line Assessment: an Investigation of
Existing Marking Methodologies. Paper presented at ITiCSE.
Wenzlaff, T. L., Fager, J. J., & Coleman, M. J. (1999). What is a rubric? Do practitioners and
the literature agree? Contemporary Education, 70(Summer), 41.
Yorke, M. (1998). The Management of Assessment in Higher Education. Assessment &
Evaluation in Higher Education, 23(2), 101-116.