On-line and Objective Testing: Frequently Asked Questions and Concerns
Hugh Davis and the e3an Project Team

What are the advantages?

• Lecturers can assess students more frequently
• Students can monitor their own progress
• Students can get detailed and timely feedback
• Speed of testing can ensure wider syllabus coverage
• Tests can incorporate a wide variety of media
• Tests can be taken at a distance
• Using randomised tests, tests can be taken at different times
• Not prone to human error – objective
• Saves staff time (in the long run)
• Diagnostic reports and analysis
• Links to student management systems?

What types of question can we use?

• Multiple Choice
• Multiple Response
• Text Match
• Fill in Blanks
• Numeric
• Selection (for each statement select yes/no, true/false, etc.)
• Hotspot (click in a graphic)
• And many others…

Doesn’t it take too long to construct useful objective questions?

• Yes, if you try to do it all yourself
• That is the advantage of a Testbank – you can start with a collection of hopefully appropriate questions
• You can alter the questions as needed and add your own questions

Who made the questions in the database and how can I be sure of their validity?

• The questions were made by practising academics
• The topics to be tested within any particular theme were chosen by the theme working group
• The questions have all been peer reviewed
• The questions will be tested and updated as necessary
• You can always add your own questions

Who owns the copyright of questions on the database?

• The person who wrote them retains copyright.
• By allowing the questions into the database, the copyright owner allows other academics to use the questions freely for the purpose of student testing
• Questions may not be re-published or sold.

How will the questions be distributed to me?

• The database will be available on-line – initially hosted at Southampton, and accessible by registered users
• The database will also be distributed to registered academics on CD with suitable run-time software.
• Questions may be imported into your favourite system, printed, or viewed directly in HTML.

What will happen to the database after the project finishes?

• The worst case is that you will have a copy on CD, which you may continue to grow with your own questions.
• e3an are actively researching models for continuation:
  – IEE
  – Publisher
  – Subscription
  – Continuation funding

Will I be able to interchange my questions between systems – or will all my work be lost when the current software goes out of date?

• We will be keeping the questions in an XML format (IMS QTI)
• QTI is a generic way of representing questions (a small sketch of the kind of markup involved follows this list)
• Many packages are implementing QTI import functionality
• New packages are likely to use QTI as their data format.

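As a sketch only, here is how such an item might be read with standard Python tooling. The fragment of markup is hypothetical and far simpler than a real export from any delivery package; the element names (questestinterop, item, presentation, response_lid, response_label) follow the QTI 1.2 convention, but nothing here should be taken as the project’s own export format.

    # Illustrative sketch: parse a minimal, hypothetical QTI 1.2-style multiple
    # choice item and print its stem and options. Not a validator, not a full item.
    import xml.etree.ElementTree as ET

    SAMPLE_ITEM = """<questestinterop>
      <item ident="demo-001" title="Example multiple choice item">
        <presentation>
          <material><mattext>Which option completes the statement?</mattext></material>
          <response_lid ident="MC01" rcardinality="Single">
            <render_choice>
              <response_label ident="A"><material><mattext>Choice A</mattext></material></response_label>
              <response_label ident="B"><material><mattext>Choice B</mattext></material></response_label>
              <response_label ident="C"><material><mattext>Choice C</mattext></material></response_label>
            </render_choice>
          </response_lid>
        </presentation>
      </item>
    </questestinterop>"""

    root = ET.fromstring(SAMPLE_ITEM)
    for item in root.iter("item"):
        stem = item.find("./presentation/material/mattext").text
        print(item.get("title"), "-", stem)
        for label in item.iter("response_label"):
            print(" ", label.get("ident"), label.find("./material/mattext").text)
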
My university QA will not allow use of objective tests for examinations

• Start by using them for courseworks / labs / example classes
• Use them for formative assessments
• Move to a more realistic university ;-)

What happens if students get hold of the whole database?

• They might learn how to answer all the questions you wished to ask. Would this be so bad?
• Realistically there are too many questions for this to be an issue – it would be more of a problem if they got hold of the exact test you were about to set.
• Most packages allow you to randomise the numeric data in a question or to randomise the questions themselves (order of answers – or selection of questions); a small sketch of this idea follows the list.

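A hedged sketch of that randomisation idea, not taken from any particular package: the numeric values in a question template and the order of the options are both drawn at random, so two students are unlikely to see identical papers. The question, values and distractor recipes below are invented for illustration.

    # Illustrative sketch: generate a randomised variant of a simple numeric
    # question (Ohm's law) and shuffle the answer order. All values are invented.
    import random

    def make_variant(seed=None):
        rng = random.Random(seed)
        r = rng.choice([10, 22, 47, 100])             # resistance in ohms
        v = rng.choice([5, 9, 12])                    # supply voltage in volts
        correct = round(v / r, 3)                     # I = V / R
        distractors = [round(r / v, 3), round(v * r, 3), round(v / (2 * r), 3)]
        options = [correct] + distractors
        rng.shuffle(options)                          # randomise the order of answers
        stem = f"A {r} ohm resistor is connected across {v} V. What current flows, in amps?"
        return stem, options, correct

    stem, options, correct = make_variant(seed=42)
    print(stem)
    for label, value in zip("ABCD", options):
        print(f"  {label}) {value}")
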
The trouble with Objective Questions is that they tend to address factual content, encouraging surface learning rather than underlying concepts, and it is difficult to test deeper understanding (synthesis, analysis, evaluation).

• Maybe true (although there is much in the literature attempting to demonstrate otherwise)
• But how much of your current assessment deals with higher order skills?
• Objective tests may only be part of the total assessment strategy

Why are you including traditional exam style and short answer questions in the database? Surely we can’t deliver these on-line?

• Not everything is easily tested using objective tests
• Students may benefit from practice on exams
• Staff may benefit by looking at other people’s exams
• Although we cannot mark these questions automatically, we can use them to set example sheets – and release the example answers at a suitable time, all on-line if we wish.

Students may get the right answer for the wrong reason, reinforcing misconceptions

• Possibly true
• Maybe this happens in other assessments too?
• Hopefully the larger number of questions will lead to the same learning matter being tested in more than one way.

What happens if students guess – a "Multiple Guess Test"?

• There is considerable research to demonstrate that marks gained from such tests correlate very well with marks gained from other assessment methods
• If the test is well designed, guessers will end up getting 1 in n questions right – where n is the number of choices available
• It may be useful to scale the final marks in some way to remove the 20–25% that might be available by guessing – but there is not much evidence to support this (a sketch of one such rescaling follows the list)
• Negative marking of wrong answers is generally discredited.

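One way to do the rescaling mentioned above, shown only as a sketch (and, as the list notes, the evidence that it is worth doing is thin): subtract the expected score from pure guessing, 1 in n per question, and renormalise to a percentage. The function name and the clamping choice are illustrative, not a recommendation from the project.

    # Sketch of scaling out the guessing baseline. With n = 4 or 5 choices the
    # baseline is the 20-25% quoted above.
    def scale_out_guessing(raw_correct, num_questions, choices_per_question):
        baseline = num_questions / choices_per_question       # expected marks from guessing alone
        scaled = (raw_correct - baseline) / (num_questions - baseline)
        return max(0.0, scaled) * 100                         # clamp below-chance scores at zero

    # 30 correct out of 40, with four choices per question, rescales to about 66.7%.
    print(scale_out_guessing(raw_correct=30, num_questions=40, choices_per_question=4))
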
Aren’t the students already over-assessed?

• Probably they get too much summative assessment (things that are formally marked)
• But how much formative assessment (where they can check their own progress and get feedback) do they get?
• Objective tests are excellent for allowing students (and staff) to check their progress at timely moments during a course – in time to allow action to be taken.

There is too high a reliance on the stability of technology! What happens if the computers fail during an exam?

• Doing formal examinations at computers may be problematic for all sorts of reasons – although there are institutions that are brave enough to do this.
• Multiple Choice questions on paper, along with an optical mark reading package to scan the results, introduce some extra effort, but this is probably worthwhile for formal examinations.
• For less formal testing situations, most delivery systems allow for a test to be restarted where technology has failed.
• The gains of using technology generally outweigh the problems

Do students have the required IT skills?

• To drive a browser?
• If they don’t, then do something about it quickly!
• Some familiarity with the delivery environment will generally help the students feel comfortable and thus optimise their performance

Do staff have the required IT skills?

• Often not.
• Designing questions and setting up tests requires IT skills (in addition to assessment skills), so staff need to be IT literate
• In places where large-scale CAA is successfully used there is generally some in-house technical assistance available specifically for this purpose
• Staff are intelligent. They will learn if they think it will benefit them.

Does my institution have enough computers?

• Maybe not if you are trying to sit the whole first year down to formal exams at the same time (do you have enough invigilators either?)
• Is it essential that the whole cohort do your test at the same time? – do they do their courseworks at the same time?

It takes too long to (pre-)organise for the use of on-line testing.

• This is one of the biggest problems.
• In this area, careful pre-organisation pays back substantially when it comes to assessment, while also improving feedback to the student
• But you have to do the work up-front.

When on-line, can’t the students cheat by looking things up on the web or by conferencing with other people?

• This situation is no different to most courseworks, labs and project work we set. Perhaps it is even desirable sometimes!
• Secure browsers that prevent access to other sites and other software are available where proof of individual effort is required
• But can you be sure that they do not have someone sitting beside them when they do the test – especially distance learners?