
Sally Jordan COLMSCT Interim report November 2007
Executive summary
Investigating the use of short free-text questions in online assessment
The project is investigating the use of computer-aided assessment for checking and providing
instantaneous feedback on questions requiring short free-text answers, typically a sentence in
length. A linguistically based authoring tool is being used, provided by Intelligent Assessment
Technologies Ltd (IAT). This enables answers such as ‘dog bites man’ to be recognised as
distinct from ‘man bites dog’.
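The distinction matters because the simple ‘bag of words’ system we use as a comparison baseline cannot make it. As a minimal illustrative sketch (not the IAT matcher), a bag-of-words comparison discards word order entirely and so treats the two sentences as identical:

```python
# Illustrative only (not the IAT matcher): a bag-of-words comparison treats
# 'dog bites man' and 'man bites dog' as identical, since order is discarded.
from collections import Counter

def bag_of_words(text: str) -> Counter:
    """Count word occurrences, ignoring order and case."""
    return Counter(text.lower().split())

print(bag_of_words("dog bites man") == bag_of_words("man bites dog"))  # True
```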
The project aims to write and deliver questions to Open University students, investigating in
the process any difficulties in using the IAT authoring tool; to compare the computer’s marking
against that of human markers and a simple ‘bag of words’ computer system; and to evaluate
students’ reactions to this question type, and their use of the feedback provided. The project is
joint work with Barbara Brockbank and is supported by Phil Butcher.
In the first year of the project, to November 2007, 65 questions of this type have been written
and released to students on our level 1 Science Faculty course S103: Discovering Science.
The questions have been released in seven separate formative interactive computer-marked
assignments (iCMAs), in some cases accompanied by more conventional forms of interactive
online questions. Responses from students are being used to improve the questions and
answer matching, and similar questions will be used summatively in the new course S104:
Exploring Science, from February 2008.
Our existing questions, at various stages of development, are available for viewing at:
https://students.open.ac.uk/openmark/s103-07b.block3world/
https://students.open.ac.uk/openmark/s103-07b.blocks45world/
https://students.open.ac.uk/openmark/s103-07b.block7v1aworld/
https://students.open.ac.uk/openmark/s103-07b.block8world/
https://students.open.ac.uk/openmark/s103-07b.block9world/
https://students.open.ac.uk/openmark/s103-07b.block10world/
https://students.open.ac.uk/openmark/s103-07b.block11world/
We are now reasonably confident in the use of the template-based authoring tool and are
beginning to show colleagues how they could use it to write questions for themselves, though
it is not yet clear whether it is best for academics to use the tool themselves or to
work in close collaboration with an expert user. The most time-consuming aspect of question
development is amending the answer matching so that the developmental
responses gathered from students are accurately marked, though sometimes this can be
done by simply adding an additional synonym or word replacement.
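To illustrate the kind of amendment involved, the sketch below (the synonym entries are invented, not taken from our actual answer-matching tables) normalises each word of a response to a canonical form before matching, so that previously unmatched wordings are accepted:

```python
# Hypothetical sketch of a synonym amendment; the entries are invented,
# not taken from our actual answer-matching tables.
SYNONYMS = {"identical": "equal", "opposing": "opposite"}

def normalise(answer: str) -> list[str]:
    """Replace each word with its canonical synonym before matching."""
    return [SYNONYMS.get(word, word) for word in answer.lower().split()]

print(normalise("the forces are identical and opposing"))
# ['the', 'forces', 'are', 'equal', 'and', 'opposite']
```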
One of the most impressive features of the authoring tool is its ability to recognise negatives
and double negatives. So for a particular question, responses of the type ‘the forces are
balanced’ and ‘there are no unbalanced forces acting’ will both be accurately marked as
correct whilst ‘the forces are not balanced’ and ‘the forces are unbalanced’ will both be
accurately marked as incorrect. The system also copes well with poor grammar and spelling.
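The sketch below shows why this is not trivial (it is a hypothetical illustration, not IAT's algorithm): a bare keyword test wrongly accepts the negated answers, whereas even minimal counting of negating elements handles all four responses above correctly.

```python
# Hypothetical sketch, not IAT's algorithm: a bare keyword test wrongly
# accepts negated answers, while minimal negation counting does not.
import re

def keyword_mark(answer: str) -> bool:
    """Naive marking: accept any answer mentioning 'balanced'."""
    return "balanced" in answer.lower()

def negation_aware_mark(answer: str) -> bool:
    """Count negating elements ('no', 'not', the 'un-' in 'unbalanced');
    an even count leaves 'balanced' asserted, an odd count negates it."""
    words = re.findall(r"[a-z]+", answer.lower())
    negations = words.count("no") + words.count("not") + words.count("unbalanced")
    mentions_balance = "balanced" in words or "unbalanced" in words
    return mentions_balance and negations % 2 == 0

for ans in ("the forces are balanced",
            "there are no unbalanced forces acting",  # double negative: correct
            "the forces are not balanced",            # single negative: incorrect
            "the forces are unbalanced"):             # single negative: incorrect
    print(f"{ans!r}: keyword={keyword_mark(ans)}, "
          f"negation-aware={negation_aware_mark(ans)}")
```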
We are able to provide targeted feedback in response to incorrect and partially correct
student answers. So if a student gives the answer ‘The forces are equal’ in response to a
question where one of the correct answers is ‘The forces are equal and opposite’, they will be
told that they are on the right lines, and given a reference to the relevant section of the course
material. Targeted feedback on incomplete answers was not initially possible, because of
interference between ‘accept’ and ‘do not accept’ model answers, but we are grateful to Dr
Tom Mitchell of IAT for adapting the software to accommodate our needs.
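The idea can be illustrated in miniature as below (the rule format, mark labels and feedback wording are hypothetical, not IAT's model-answer syntax): rules are checked in order, so a complete answer is matched before the incomplete ‘equal but not opposite’ case.

```python
# Hypothetical sketch of targeted feedback; the rule format and feedback
# wording are invented, not IAT's model-answer syntax.
FEEDBACK_RULES = [
    # (required words, mark, feedback shown to the student)
    (("equal", "opposite"), "correct", "Well done."),
    (("equal",), "partial",
     "You are on the right lines, but your answer is incomplete. "
     "See the relevant section of the course material."),
]

def mark_with_feedback(answer: str) -> tuple[str, str]:
    """Return the mark and feedback of the first rule the answer satisfies."""
    words = answer.lower().split()
    for required, mark, feedback in FEEDBACK_RULES:
        if all(word in words for word in required):
            return mark, feedback
    return "incorrect", "Please revise the section on forces."

print(mark_with_feedback("The forces are equal"))               # partial
print(mark_with_feedback("The forces are equal and opposite"))  # correct
```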
Students who use the questions report finding them useful and enjoyable, with comments
such as the following being common:
I just took the test and I must say I was very impressed. Everything went well; I had no access
problems and the feedback was very useful. I was amazed at how it recognized my answers
to those questions that required a descriptive response (e.g. "How would you recognize a
layered rock as sedimentary?"). There must be so many different ways of wording a correct
answer to such a question. More please!
However, student uptake of the iCMAs has been disappointing, a finding common for
formative-only use of assessment of this type, especially when the assessment is not properly
embedded within the course. This is unfortunate, since student responses are essential in
order to develop questions and answer matching of sufficient robustness for summative use.
Responses provided by colleagues and friends do not display the same characteristics as
those from real students on the course. Encouraging student use in the final
presentation of S103 (October 2007 – June 2008) is therefore a high priority.
We have observed and videoed six S103 students attempting our questions and early
analysis points towards interesting conclusions about the students’ perceptions of the
questions (most seemed to assume that we were using a simple keyword system, but the way
they reacted to that in framing their answers varied considerably from student to student).
There will also be conclusions to be drawn relating to student use of the feedback provided.
For the seven most fully developed questions, the computer’s marking of student responses
has been compared with that of six course tutors. Early analysis is very encouraging, with the
computer’s marking of most questions being indistinguishable from that of the human
markers.
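As a sketch of how such a comparison can be summarised (the marks below are invented; the real analysis uses actual student responses), agreement may be reported as the fraction of responses that two markers score identically:

```python
# Illustrative sketch with invented marks; the report's real data are not
# reproduced here.
def percent_agreement(marks_a: list[int], marks_b: list[int]) -> float:
    """Fraction of responses that two markers scored identically."""
    assert len(marks_a) == len(marks_b)
    return sum(a == b for a, b in zip(marks_a, marks_b)) / len(marks_a)

computer = [1, 0, 1, 1, 0, 1]  # hypothetical marks for six responses
tutor    = [1, 0, 1, 1, 1, 1]
print(f"Agreement: {percent_agreement(computer, tutor):.0%}")  # Agreement: 83%
```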
Sally Jordan
Centre for Open Learning of Mathematics, Science, Computing and Technology (COLMSCT)
Teaching Fellow
12th November 2007