Additional Guidelines for Questionnaire Design and Use

This document offers some ideas on:
 How to use questionnaires
 What questionnaire results may or may not do for you
 What questionnaires may or may not do for students
1. Why use questionnaires?
There are a number of reasons, not all obvious, for using questionnaires, and a number of
different bodies that might want to use them.
Those who might want questionnaires to be used include:
 you - administered by you
 your School - administered by you or on your behalf
 the student body
 someone else – for example, the Quality Assurance Agency
Questionnaires might be used to obtain information and views, or to attempt to
justify/quantify impressions. Reasons for wanting such information include:
 statistical information, perhaps to meet an external requirement, or to inform e.g.
admissions policy
 'research' - finding out what students think or do, possibly for academic
publication, but not intending change at present
 'feedback' - to help you change the way you do things
 information to help you argue for change in your School or elsewhere
A further purpose is to aim to change the students' perceptions by:
 making yourself more approachable
 making the student more aware and critical of the teaching/learning process
Questionnaires have the advantages of:
 providing a useful method of obtaining information in a structured format
 respecting privacy of individual views and
 not requiring the direct intervention of an interviewer
However, they have the disadvantage of being a substitute for genuine dialogue, with a
limited scope in both questioning and responding, although this is overcome somewhat by
the use of open questions (of course, genuine dialogue may still take place as well). If
questionnaires are designed and administered carefully, most people would accept that they
provide reasonably objective and reliable information in a cost-effective way for large student
groups. They are popular because no alternative technique satisfies these requirements and
can be implemented practically within the University context.
2. Online versus paper-based questionnaires
Why has the University decided to support only web-based student questionnaires?
There are several reasons:
 The cost of producing, administering and processing paper feedback forms is
high
 The optical scanner system which has been used to process forms hitherto is
now old, and will soon become obsolete
 The University has invested heavily in computer clusters, so access to online
surveys is good
 Online feedback is secure, more easily adapted to particular requirements and
does not use up teaching time
 Online feedback produces higher quality qualitative responses
There are, however, two disadvantages of online feedback:
 It is vulnerable to technical problems
 It produces lower response rates than paper forms filled in during lecture time
(35-50% is considered acceptable for online surveys)
In making the decision to support online surveys only, the University is privileging the quality
of qualitative responses over high response rates. If a system could guarantee both high
response rates and good quality qualitative responses, it would have been adopted!
There is a danger of over-analysis of the quantitative data produced from student feedback
forms, and this danger is heightened if response rates are quite low. However, the
quantitative information can still be used profitably in module assessments, even given the
caveats above. (See section 7, Interpreting your results, below.)
Lecturers are free to use paper questionnaire forms if they so wish, and it will be possible to
prepare a questionnaire using Blackboard, but then to print this out as a paper-based
questionnaire. Please note that, other than providing a pool of questions and this guidance
document, there will be no support from the University for paper-based questionnaires.
3. What effects does questionnaire use have on students?
An experienced interviewer knows how to begin and conduct an interview to obtain the
information desired. A questionnaire is a form of interview, and therefore the perception of
the student when answering the questions may be significant in determining their responses,
and may also influence their attitude to future surveys.
You may believe that simply inviting comment on your 'performance' is a mistake, as it
implies that you are responsible for any failure in the students' achievement. Some
lecturers complain that they'd like to relate the students’ views to attendance, ability or
results, which is difficult in view of the usual confidentiality.
However, if the first few questions asked address the student's own input and attitude (e.g.
what proportion of classes did you attend?, how many hours per week ...?, how much did
you use textbooks?, etc.) then, apart from the information you gain, which may be correlated
with their other responses (e.g. how difficult?, how interesting?), you may influence their
attitude in commenting on your presentation or on the content of the module. The widely-used standard questionnaire (see Blackboard’s “General Purpose Questionnaire”) uses this
approach.
It is possible that the regular use of questionnaires also has a significant effect on the
respondents' attitudes and perception in other contexts, such as in lectures and tutorials.
Apart from the influence on questionnaire responses themselves (see previous paragraph),
we could be unwittingly reinforcing the belief that 'if we were perfect teachers, they'd be
perfect learners'. If we wish to avoid this possibility, then how we present questionnaires, as
well as the type and order of questions asked, may need to be considered.
It is possible that in some cases students can become more aware and critical of the whole
learning and teaching process; this would be the ideal outcome!
4. What can be changed in response to questionnaire feedback?
You may be obtaining information to inform yourself or others, or to reinforce an argument,
etc.
For undergraduate teaching - the main use of questionnaires - you might consider:
 what can be changed
 what can't be changed
 how much you can change things
 how much difference it's likely to make
 what implications there are for changes, in resources or curriculum
 whether you are willing to change
 whether you are going to monitor the effect(s) of the changes
Teaching delivery
Examples of things you can change (in principle) in your own style and delivery include:
 introductory, concluding and linking material can be added or extended
 the order of topics can be changed
 the choice of examples can be considered
 audibility and legibility aren't easy to change, but must be considered
 use of handouts - more handouts can be useful sometimes, but if not integrated
with lecture notes, they can be less clear, and can also be an excuse to cover too
much material
 quantity and type of examples, book lists, etc.
However, it is very difficult to change your individual style and manner, and it may not even
be desirable, unless that style offends or distracts. There is no one style that is 'best'. Only a
few generalisations are possible about style, apart from the obvious ones of audibility and
legibility. Your own enthusiasm for the subject is always a positive point. Conversely, evident
boredom, or even worse, contempt for the student, will almost certainly have a negative
effect on attitudes and future attainment if not on the results for that module. Naturally, if
you do have any irritating personal habits, they may be worth changing! Overall, it is better
for students to see a range of styles, largely natural, provided that communication of content,
approach and motivation is effective.
Curriculum
You may or may not be able to change the content of a module. The standard of that module
may need to be maintained, though the standard attained is not always the standard
intended. There may be knock-on effects on other modules; omitting half the material may
be a route to short-term popularity only!
The order of the modules within a programme may be significant, and of topics within a
module if these depend on or are required for material in other modules. Links to other
material may need to be made explicit, and pre-requisite material may need at least to be
explicitly identified, and perhaps briefly revised (on a handout?).
Assessment may be worth considering. A slight change of wording can make an exam
question (answered under pressure) very much more difficult or easier. A change in
assessment pattern can cause havoc - expectations on both sides must be clear. Coursework
assignments may cause confusion, partly because they are used for several purposes: to
encourage regular work, to help students pass exams, to assess skills or material not covered
in exams. The organisational skills required of students to manage several overlapping
assignments, and to balance them against formal teaching and other personal and perhaps
caring responsibilities, etc. may in some cases detract from the intellectual effort they should
devote to their studies.
5. Designing your own questionnaires – hints and some factors to
consider
Designing a new questionnaire from scratch can be a complex and time-consuming business.
The web-based University Questionnaire Service provides a number of short cuts to the
process through the provision of some complete standard questionnaires. It also makes
available a pool of questions divided into categories that can be selected and modified, if
necessary, in devising your own questionnaire.
However, even if you use a standard questionnaire or the pool of questions you do need to
be aware of some principles of questionnaire design:
 Be clear about the purpose of your questionnaire - to whom will the information
be accessible (e.g. the lecturer, the students or an external body) and how will it
be used (e.g. to improve teaching or to assure quality)? Think carefully about
whether you wish to include questions about elements of the module that you
have no control over or you may not be able or willing to change (e.g. your
teaching style)
 How large is the group you are investigating? How well do you know them?
 Consider what type of group it is (e.g. Stage 1 or Stage 3, mostly mature or not,
option or compulsory subject) and decide what level and kind of response you
can expect from this group (maturity, critical faculty, motivation). You should
then use questions which are relevant to the module, programme or student
group in question (e.g. an issue that was brought up in a previous year's staff-student committee meeting) in order to provide evidence to justify the current
provision or to make recommendations or changes.
 Do you want specific information (e.g. topics found difficult, books used) or
general (presentation)?
 Do you want a high response, or is detailed comment more important? For the
latter a web-based questionnaire is better. If you want high response rates, then
designing a one-off paper-based questionnaire with just a few questions – using
the Blackboard ‘pool’ as your starting point – and delivering this questionnaire
during a lecture, can be a more straightforward solution.
 Is this a formative survey (part-way through the module) or summative (at the
end, for information or future change)?
 Will this be a one-off or a repeated survey?
 Ask students questions about their own effort (e.g. percentage attendance of
lectures).
 How will you inform the respondents of the results and any outcomes?
 How much are you prepared to do in response? If you are trying to improve your
teaching, do not be afraid to use questions which you expect may yield a low or
widely spread rating as these will provide you with the most information. It is
good to be aware of any problems concerning the external measurable aspects of
your teaching (such as organisation), but the motivation for real teaching
improvement comes when you have feedback on the deeper issues which
determine its effectiveness (e.g. confidence or engagement).
 Open questions can be complementary to the structured dialogue of the 5-value
response questions. Use open questions which relate to the 5-value response
questions that you are most interested in.
Above all, your questionnaire should be short. Research shows that long questionnaires are
less likely to be completed. It is far better to prioritise a few areas than try to cover
everything.
6. What is a 'good' question?
Questions
Questions may be specific or general, aimed at obtaining information or feedback as
described above. However, they may also be used to orient the respondent to the next
question, or to the whole remainder (see 3. above).
Using two questions that are likely to be closely correlated (e.g. 'Pace of lectures' and
'Difficulty' or 'Amount of material') may be a waste of a question. However, it is also possible
that asking several overlapping questions may encourage the student to think harder about
the issue, so that the second answer is more reliable than it would have been. Some
psychometric questionnaires ask many questions, but ignore some responses in forming their
final assessment, presumably for this reason.
Clearly a good question is unambiguous and easy to interpret (unless, conceivably, it is
designed to make the student think - see previous paragraph). (For example, don't ask: 'Did
the module emphasise thought and discussion, or recall of facts? Yes / No'). There are also
questions that the student finds rather difficult to answer accurately (e.g. 'how many hours
per week do you spend on this module?' which may be very variable, and will certainly be
unrecorded!).
A good question is also one that elicits a range of responses. Two or three (realistic) options
may be appropriate, but five will usually produce a more interesting result, especially as
many students avoid the extremes. It's best to label the extreme responses in a 'mild' way for
this reason. Use 'poor' rather than 'bad' and 'very good' rather than 'excellent'.
Otherwise, there is no simple answer. The pool of questions available through the University
Questionnaire Service offers a large number of questions that more than a few lecturers have
found useful, though all can be modified as you wish. However, there are questions that
almost always produce the same answer (e.g. 'would you like more handouts?' and 'how
much did you use the recommended textbooks?' get predictable answers from Stage 1 and 2
Engineers!) and so may be not very useful for your module.
Open-ended questions can be very illuminating, but may be best asked after some set-response questions which firstly deal with predictable, routine comments, and secondly may
clarify for the student what they wish to say. To reduce the time and effort for the student,
and produce constructive feedback, questions like 'Suggest one feature of this module that
could be improved', and 'Which topic did you find most difficult?' can be useful.
Scales
Most of the questions in the question pool provided are on a five-point scale. While research
differs on this and there is not one ‘right’ methodology, some people think that a four-point
or six-point scale is preferable, as it forces the respondent to express a preference (though
they can leave it blank, of course).
Some questions will have a Yes/No answer, or just three preferences. Consider whether this
is sufficiently discriminating.
It is doubtful whether having more than six points is really useful unless you are sure the
respondent has thought deeply about the subject!
7. Interpreting your results
Most people look at the mean (the average rating for each question) first. It is worth bearing
in mind that in many circumstances, students respond positively, i.e. the average rating on
the 5 value scale for questions where 5 is the 'best' rating is often higher than 3. You can
also compare with your own results in previous years. Note:
 For most questions, '5' represents a 'good' rating, for others, '1' or '3' is good,
and for some, no single answer is 'good' (e.g. 'which school do you come from')
 There is usually a high correlation between ratings on different questions
 You may wish to investigate individual responses and consider removing those
that you feel to have been filled in with spurious values. Obviously, there are
dangers in taking this approach to managing your data and you need to think
carefully before discarding data.
You could try predicting the mean response values for each question. This will highlight any
unexpected results.
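As a minimal sketch of this prediction exercise (the questions, responses and predicted values below are all invented for illustration), the comparison of actual against predicted means might look like this:

```python
from statistics import mean

# Hypothetical 5-point responses per question (1 = poor, 5 = very good).
responses = {
    "Pace of lectures": [4, 5, 3, 4, 4, 5, 3],
    "Clarity of handouts": [2, 3, 2, 4, 3, 2, 3],
}

# Predicted means, written down before looking at the data,
# so that any surprises stand out.
predicted = {"Pace of lectures": 4.0, "Clarity of handouts": 3.5}

for question, values in responses.items():
    m = mean(values)
    gap = m - predicted[question]
    flag = "  <-- unexpected" if abs(gap) >= 0.5 else ""
    print(f"{question}: mean {m:.2f} (predicted {predicted[question]:.1f}){flag}")
```

Here a gap of half a scale point or more is treated as "unexpected"; that threshold is arbitrary and is only a starting point for reflection, not a statistical test.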
When interpreting the evidence of student feedback, some scepticism is called for. It is
unwise to act on the basis of a few extreme responses from dissatisfied students. You will
probably require more evidence than a single questionnaire before making substantial
changes unless a significant problem is uncovered. It is best to integrate responses to the
same questionnaire over a number of years. It is not good practice to make significant
changes to a module or programme only to discover the following year students calling for
those changes to be reversed.
Negative feedback may result from some students who do not take the exercise seriously or
who have a grievance which they choose to vent indirectly. It should be made clear that
complaints against lecturers should be made in person to them or to their head of school.
Unsubstantiated accusations should not be accepted anonymously. It is also important not to
break the confidences relating to information received about individual staff by discussing
student opinion of those staff in open committee. Comments about individual staff must be
anonymised before the analysis of student opinion is discussed in any forum other than one-to-one with the member/s of staff concerned.
Apart from means, you can look at the shape and standard deviation of the distribution:
 A flatter than average distribution may indicate that students did not understand
the question properly or were not in a position to answer, or it may indicate a
genuine diversity of opinion
 A bimodal (bath tub) distribution may indicate two separate groups of students in
terms of this opinion
In either case, it would be worth analysing answers to this question against, say, student
background.
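The standard deviation and a crude check for the bath-tub shape described above can be sketched as follows; the responses here are invented for illustration, and the "two ends higher than the middle" rule is only a rough heuristic:

```python
from collections import Counter
from statistics import pstdev

# Hypothetical responses to one question on a 5-point scale.
responses = [1, 1, 2, 1, 5, 4, 5, 5, 2, 5, 1, 4]

print(f"standard deviation: {pstdev(responses):.2f}")

# Tally each scale point, then check whether both extremes are
# peaks while the middle is a trough (a bath-tub / bimodal shape).
counts = Counter(responses)
tallies = [counts.get(point, 0) for point in range(1, 6)]
print("distribution 1..5:", tallies)

if tallies[0] > tallies[2] and tallies[4] > tallies[2]:
    print("bath-tub shape: possibly two distinct groups of students")
```

A genuinely flat or bimodal tally is the cue to cross-analyse that question against student background, as suggested above.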
Simple statistical techniques can be applied to your results. A good place to start is to test
the significance of your average response for a particular question against a compiled
average (such as the school average); refer to any standard text on basic statistics. This
gives a measure of how unusual your result is. Most other statistical techniques
will require the raw data and should only be used with experience or if there is a definite
objective in mind.
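A hedged sketch of such a significance check is given below, using a normal approximation to the one-sample t-test (adequate for largish groups; for small numbers a t-distribution should be used instead). The ratings and the school average are invented for illustration:

```python
from math import sqrt
from statistics import NormalDist, mean, stdev

# Hypothetical ratings for one question on your module, and the
# compiled school average for the same question.
ratings = [4, 3, 4, 5, 4, 3, 4, 4, 5, 3, 4, 4, 3, 4, 5, 4, 3, 4, 4, 5]
school_average = 3.5

n = len(ratings)
m = mean(ratings)
se = stdev(ratings) / sqrt(n)           # standard error of the mean

z = (m - school_average) / se           # normal approximation to the t-test
p = 2 * (1 - NormalDist().cdf(abs(z)))  # two-sided p-value

print(f"module mean {m:.2f} vs school average {school_average}")
print(f"z = {z:.2f}, p = {p:.4f}")
```

A small p-value (conventionally below 0.05) suggests the module's average genuinely differs from the school average; a large one means the difference could easily be noise, especially with a low response rate.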
Where a problem seems to be indicated by a certain question, light may be thrown on it by
comments or answers to any relevant open questions.
8. Further guidance and ideas
As you might expect, there has been a plethora of research into the design, use,
interpretation and effectiveness of student questionnaires. However, rather than provide an
extensive bibliography we suggest that a good starting place, should you wish to delve more
deeply, is a report produced for the Learning and Teaching Support Network in 2004. This
comprehensive guide also provides routes into the literature. The full reference is given
below.
However, do not overlook local sources of support. Within your School you will certainly have
experienced users to turn to for advice, and school or faculty teaching and learning
committees may provide further sources of advice. QuILT has developed some expertise in
this area and may be able to provide help (222 5565).
Finally, it seems appropriate to conclude with some reservations concerning questionnaire
use that are important to keep in mind when interpreting the results they provide. One
criticism often applied to evaluation questionnaires is that they are usually retrospective and
summative; this ‘autopsy’ model of quality is not much help in making changes to a module
while it is in progress and it serves to highlight the limitations of questionnaires in generating
an ongoing dialogue with students. The second issue relates to how much the students
should be regarded as the ultimate arbiters of the quality of a programme. It can be very
difficult to evaluate a module at the point of delivery and there can be a tendency to focus
upon its immediate value from the point of view of presentation rather than its ultimate
utility. Both these reservations highlight the tendency of questionnaires to generate from
students criticism rather than a more healthy critique of the modules they are taking. In this
sense it could be argued that there is sometimes a disjunction between our pedagogy which
is seeking to encourage critical reflection upon the subject matter that is taught and the use
of questionnaires that frequently focus too much on criticism. One way to overcome this is to
gather better qualitative data and it is this thinking that underpins, in part, this University’s
move towards web-based questionnaires.
Questionnaires should not be regarded as providing the definitive statement about module
quality. Frequently, the reason for poor ratings will need to be explored in more detail in
other fora – including staff-student committees or formally constituted focus groups.
A useful preamble on module questionnaires, particularly if you are creating your own paper-based questionnaire, is:
“In reflecting upon your learning, it would be helpful to have your views on the module you
just completed. The information will be considered by the module leader and will be
discussed at the Board of Studies and reported to the Staff-Student Committee. Action plans
will be made available to all students. Your comments should be constructive and truthful and
will remain anonymous.”
If you are using Blackboard, you should look at the instructions on the QuILT website at
http://www.ncl.ac.uk/quilt/modevaluation/
Brennan, John and Williams, Ruth (2004). Collecting and using student feedback – a guide to
good practice. A report produced for the Higher Education Academy (formerly the Learning
and Teaching Support Network) and downloadable at
http://www.heacademy.ac.uk/resources.asp?process=full_record&section=generic&id=352
Dr Steve McHanwell, Dr Robin Humphrey and Dr John C Appleby
NOVEMBER 2006