
Evaluating and reporting at a distance: quality experiences with cost
effective web supported evaluations
Dr Susan Shannon
Spencer Gulf Rural Health School
The University of Adelaide and The University of South Australia
susan.shannon@adelaide.edu.au
Abstract: In 2002 and 2003, as the evaluator for two Commonwealth Department of Health and Ageing initiatives, I
conducted evaluations designed to elicit information about what creates and sustains a quality learning environment during
off-campus placements for students. The evaluations revealed that both for senior public health undergraduates from 4
Universities undertaking a semi-structured six week placement in private and public health settings around Australia to
conduct research projects, and for senior medical students undertaking structured 26 week rural clinical placements in the
Spencer Gulf Rural Health School, the quality of the supervision or preceptoring is paramount in creating and sustaining a
quality learning environment.
This paper considers the quantitative and qualitative evaluation processes engaged to evaluate the impact of these pilot
placement initiatives. The evaluation plans included conducting online surveys, online discussion boards, email evaluation,
paper-based surveys which were “read” and analysed using optical mark recognition software, as well as the more
traditional face-to-face interviews and focus groups. The evaluation results were reported back to various stakeholders
including students, preceptors, health services, Universities and the Commonwealth, using emailed reports and video
conferencing with associated “Smart Board” graphics. The findings were also disseminated publicly at the 34th Public
Health Conference of Australia (September 2002) and ANZAME (July 2003).
The focus of the paper is on the way in which the web in particular has enabled an evaluator in Adelaide to cost effectively
conduct and report evaluations around Australia using methodologically sound quantitative and qualitative evaluation
processes which withstand scrutiny for publication.
Keywords: evaluation; web-supported; quantitative and qualitative methodology; cost effective
Introduction
The introduction of web-supported learning: a major educational change in Australia
Whilst the global context of higher education is being shaped by the massification of higher education, the
globalisation of the knowledge economy, and the internationalisation of education, the most substantial local
change in education in 10 years has been brought about by the impact of information and communication
technologies, with all that implies for students, teachers, administrators, funders and perhaps lastly, but
importantly for this context, evaluators. Whilst educators are familiar with the concept of engaging the web to extend the
learning environment, there is perhaps less familiarity with using the web to extend educational evaluation in
all its roles (see, for example, Sheard, Miller, Ramakrishnan, and Hurst, 2002; Hagan, 1997; Lowder and
Hagan 1999). Evaluating the web-supported evaluation practices I have engaged in during the past year is the
topic of this paper.
Is there an inevitable tension between ‘web-supported’ and ‘evaluation’?
There is a perhaps inevitable tension between ‘web supported’ and ‘evaluation’ in considering using the web to
conduct educational evaluation within the University sector. This tension may derive in part from what may be
seen as a partial ‘deskilling’ of the formerly dedicated evaluation workforce through ‘upskilling’ other staff to
conduct evaluations (Shannon, 2003). Developing evaluation programmes, and thereafter evaluation instruments
and strategies, has traditionally been the preserve of the University’s professional evaluation staff (Sadler, 1999
in DEST 2002). With the advent of the web, opportunities to extend this formerly centralised process are more
than ever being dispersed to other academic stakeholders: to the academics who conduct courses or programs;
to external evaluators engaged to report independently on the efficacy of new courses and programs; and to
administrators, including the technical managers of the online learning environment, who seek information on
utility, often to resolve technical interface issues for users. By no means is the web essential to
dispersing evaluation roles, but the simple, inbuilt web-based survey tools which are universally available to
teachers and evaluators alike may encourage a desirable view of educational evaluation as formative, and as an
action-research, “in-classroom” activity, rather than primarily centralised, summative, and as a means of
accrediting teachers, or auditing courses and programmes.
Do staff evaluate educational interventions?
Do teachers evaluate new interventions in learning and teaching to ascertain their impact on students’ learning?
The evidence from a 2003 University of Adelaide research project which surveyed academic staff to ascertain
the factors impacting their decision to adopt or not adopt online teaching as a part of their academic activity
(Shannon and Doube 2003a) would suggest that many do, but a substantial number do not. Respondents who had
used web-based teaching were asked whether this use had benefited their students overall. Amongst staff who
had adopted web-supported teaching tools, 26.5% did not know whether the impact of that adoption was
beneficial to their students or not. And yet the tool they had adopted for their students included the ability to
conduct simple staff-directed surveys to assess that impact: through students’ logging data – their attendance
(presence on the website) and their use of self-directed assessment tools – and through staff-established,
student-completed evaluations of students’ opinions of the impact of web-based learning on their communication skills,
working in groups, time management, independent learning, IT skills, enjoyment whilst learning and discipline-area
knowledge base, amongst other key graduate skills.
However, the most prevalent concern of staff who had adopted web-based learning, and the one which most
restricted their development of more advanced uses of it, was their lack of time. This may suggest that
simple, web-based survey tools, which are analysed and reported virtually automatically, could be of benefit to
staff in determining where, and when to place their teaching effort, and to assist in fine-tuning their courses.
Conversely, this evidence also suggests that despite the presence of such tools, they are not being utilised for
evaluation.
Is evaluation of a complex learning environment that simple?
Probably not. The DEST Review 2002 suggests that “debate still rages about whether and how such [SELT]
evaluations should operate” saying that “The evaluation of teaching involves a lot more than judging the surface
features of a presentation to students, or simply polling the students for their reactions. Student evaluation is an
important indicator, but students are in some senses uncalibrated instruments. Comprehensive evaluation of
teaching involves getting to the heart of how and what students learn, and organising the circumstances and
resources so that effective learning takes place (Sadler, 1999, in DEST 2002 Point 120).”
This paper is premised upon a belief that web-supported evaluation techniques may be one means of developing
further uses of evaluation for establishing and maintaining effective teaching practices for already busy
academics. The paper describes three useful roles for educational evaluation, and showcases the web-supported
tools and processes which enabled those roles to be fulfilled in two recently conducted evaluations, cited here as
Case Studies – one from Public Health, and one from Medicine.
Useful roles for educational evaluation
Bob Smith (2003) describes three distinct roles for educational evaluation:
“1. Evaluation as the comparison of performance with standards
Evaluation is the process of (a) agreeing upon program standards, (b) determining whether a discrepancy exists
between some aspect, and (c) using discrepancy information to identify the weaknesses of the program (Provus,
1973)
2. Evaluation as information gathering for decision-making
Evaluation is the process of delineating, obtaining, and providing useful information for judging decision
alternatives (Stufflebeam, 1973, 2003)
3. Evaluation as participation in critical dialogue
Evaluation is the process of marshalling information and arguments which enable interested individuals and
groups to participate in the critical debate about a specific program (Kemmis, 1982)”.
In the two case studies from which the data for this paper have been selected, there were evaluation activities
which addressed all three of these purposes for conducting educational evaluation (see Tables 1-3).
Public Health Case Study 1
The first case study concerns an evaluation of the National Undergraduate Public Health Internship and
Scholarship Program, December 2001 to February 2002, a piece of contracted evaluation awarded to the
Evaluation Program of the Learning and Teaching Development Unit of the University of Adelaide. The author
conducted the evaluation between March 14 and May 31 2002 using quantitative and qualitative web-supported methods.
The National Undergraduate Public Health Internship and Scholarship Program (NUPHISS) Consortium is a
group of four universities who together received funding from the Commonwealth Department of Health and
Ageing to establish a national program of vacation scholarships and internships for senior undergraduate public
health students. The members of the Consortium are:
- Department of Public Health, University of Adelaide (lead University)
- School of Public Health, Curtin University
- Department of Public Health, University of Western Australia
- School of Public Health, Queensland University of Technology
The evaluation task was to evaluate to what extent, and how, the stated aims of the inaugural vacation
Scholarship program were met for the various stakeholders. The aims were to increase the responsiveness of
undergraduate programs in Public Health to the needs of employers and to build a collaborative approach to
undergraduate education in public health between the providers of public health education and health industry
employers. The stakeholders were identified as the Commonwealth funding body, the 4-University, 3-State
NUPHISS Consortium, academic supervisors, agencies accepting placement students, agency supervisors, and
students – both those participating in the 2002 vacation program and future participants (Braunack-Mayer, 2002).
As the dispersal of participants all over Australia largely precluded face-to-face interviews and focus groups,
online survey tools and a discussion board were utilised to conduct the evaluation. The results were analysed and
reported by participating stakeholder group: students, agency supervisors, and academic supervisors. They were
presented as “thick” data, rich with the voice of the participant, supported by quantitative data and relevant
statistical analyses. Recommendations arising from the evaluation were appended.
Table 1 Evaluation Methods for Case Study 1: Public Health students’ 6 week placements

Role 1: Evaluation as the comparison of performance with standards
- Measurable outcome: A student-completed project and project report as output from the 6 week placement, evaluated as a measure of a well functioning placement process.
- Transformative Action/Reflection: Evaluations described that no output, or inadequate, late or incomplete output, related largely to the qualities of the placement experience, the preceptoring, and whether there was a well defined project brief.
- Evaluation method used: Web-based survey of students, academic preceptors and clinical preceptors to ascertain how projects and preceptoring were structured by academic and clinical supervisors.

Role 2: Evaluation as information gathering for decision-making
- Measurable outcome: Successful placement site: detailed description of the qualities of placement sites, and the activities of supervisors and workplaces in managing placements.
- Transformative Action/Reflection: Recommendations about creating successful placement opportunities were presented, based on evaluations.
- Evaluation method used: (I) Web-based survey of students, academic preceptors and clinical preceptors to ascertain what the qualities of successful placement sites, as sites for professional learning, were; (II) students’ discussion board with the evaluator; (III) post-placement interviews with academic preceptors.

Role 3: Evaluation as participation in critical dialogue
- Measurable outcome: Participation in critical dialogue about the continuation of the placement program beyond the pilot – dialogue between the Commonwealth, the 4 Universities in the Consortium, academics, and placement providers (public and private institutions).
- Transformative Action/Reflection: Recommendations developed about ways to increase meaningful dialogue between stakeholders.
- Evaluation method used: Focus group with academic staff, virtual (asynchronous) focus group with students, and web-based survey of students and clinical preceptors.
Medicine Case Study 2A: 5th year Students
The second case study concerns evaluation of the learning experiences for the pilot group of ten students
participating in the Spencer Gulf Rural Health School 5th year long placement in 2003. The Commonwealth
Department of Health and Ageing has established ten Rural Clinical Schools throughout Australia in an effort,
longer term, to address rural medical workforce issues. Each Rural Clinical School is directly related to an
existing urban medical school – for example The University of Adelaide Medical School with the Spencer Gulf
Rural Health School (SGRHS) which itself is a joint University venture with The University of South Australia.
In the short term, the Rural Clinical Schools have each contracted to provide 50% of their University medical
program’s existing clinical training to 25% of eligible HECS funded (local) students by 2004. In the specific
University of Adelaide case, that translates into potentially 24 students spending some of their 4th, 5th and 6th year
of study in a Spencer Gulf Clinical Learning Centre. In the pilot year 2003 these learning centres were located in
Port Lincoln, Whyalla, Port Augusta, Port Pirie, and Booleroo Centre. Fifth year was the focus of the pilot long
rural placements for 2003. The ten students in the pilot cohort rotated between the above learning centres every 6
weeks for a total of 24 weeks plus two weeks orientation.
When I was first appointed the evaluator on January 1st, 2003, the aims for the evaluation of this pilot program
were not as clearly established as they had been in Case Study 1. The stakeholders were the Commonwealth; the
SGRHS; The University of Adelaide; The University of Adelaide Medical School, including the Curriculum
Committee and the Medical Education Unit (as the students were being accredited through the same assessment
processes, tasks and required learning outcomes); the Spencer Gulf Community Advisory Board established by
the Commonwealth in funding the SGRHS; the clinical preceptors who were doing the teaching in the Spencer
Gulf Clinical Learning Centres, their practices, and their patients; the academic staff of the SGRHS (many of whom
were also clinical preceptors); and the students themselves.
It was resolved quickly that evaluating how students learned in the Spencer Gulf Learning Centres was the
principal aim, because students would participate in the same assessment processes as the other (urban-educated)
students in their year level; as a measure of the short-term success of this alternative (rural) learning
environment, assessment outcomes sufficed. The task for the evaluation was to establish what created and
sustained an effective rural clinical learning environment for students (as this was the premise for establishment
of the SGRHS), and to report this to stakeholders. The formative goal was to shape the 2004 programme from the
evaluations of the 2003 programme, and to provide ongoing evaluation data to inform immediate decision-making.
The nature of my fractional appointment, and the students’ dispersal throughout the Spencer Gulf, with
substantial distances between learning sites, largely precluded face-to-face interviews and focus groups with
students or clinical preceptors. Reliance on web-supported evaluations suggested itself, as it aligned well with the
weekly delivery of the clinical common program to students via web-supported learning resources. Each Learning
Centre, and each accommodation unit or house (also provided by the SGRHS), was equipped with internet and
broadband-connected computers. In addition, the Learning Centres were all equipped with satellite-enabled video
conferencing with associated “Smart Board” graphics. I believed the potential was there to exploit the
opportunities this presented to conduct meaningful, personalised web-supported evaluations throughout 2003,
with occasional face-to-face meetings on site when required. I travelled to conduct an introductory focus group
during the students’ orientation weeks in Minlaton. I considered this essential as I had not previously met the
students: it allowed me to introduce myself and the goals for the evaluation, and to establish my independence
from their medicine program, my disinterest in their assessment outcomes, and my role solely as evaluator. I met
them face-to-face again in early August when they returned to Adelaide for assessment
purposes, at which time they joined a round-table to discuss their Spencer Gulf living and learning experiences
with interested prospective students currently in fourth year, and again in late October for a concluding Focus
Group. All other evaluations were conducted from my office in Adelaide.
Table 2 Evaluation Methods for Case Study 2A: Medicine students’ 26 week placements (5th year)

Role 1: Evaluation as the comparison of performance with standards
- Measurable outcome: Students sitting the same summative assessments as other students in the cohort; curriculum “gaps” identified through weekly medicine teleconferences with an academic clinician and “explained” through qualitative evaluations.
- Transformative Action/Reflection: Evaluations led to curriculum development, student support, and making available the same learning materials as urban students had access to, through teleconferencing, video conferencing and videos.
- Evaluation method used: Emailed evaluation questions (4 per month) arising from dialogue with medical curriculum development staff.

Role 2: Evaluation as information gathering for decision-making
- Measurable outcome: The qualities of effective clinical preceptors, and the input of staff and processes in clinical settings to creating and sustaining an effective learning environment, were elicited.
- Transformative Action/Reflection: Evaluations led to staff development and refocusing of goals for effective preceptoring.
- Evaluation method used: Emailed evaluation questions led to compilation of students’ reflections “thick with the voice of the participant” for all stakeholders.

Role 3: Evaluation as participation in critical dialogue
- Measurable outcome: Enabled critical dialogue between the SGRHS and the Commonwealth about the delivery of a quality learning programme.
- Transformative Action/Reflection: Evaluations were focused on the quality of the learning environment, not an audit of numbers; refocused the goals of the SGRHS.
- Evaluation method used: Emailed evaluation questions to students.
Medicine Case Study 2B: 1st and 2nd year Students
In 2003 as a recruitment strategy for the long rural clinical placement, a Rural Week programme was extended to
all first and second year students who participated for a week in either April, July or September. The 1st year
students were allocated to sites at Minlaton and Whyalla. Second years were placed (by self-selection) in
Minlaton, Kadina, Clare, and Whyalla, as well as Iga Warta, near Neppabunna, east of Leigh Creek. At the end
of the week students participated in focus group debriefing and completed site and program specific evaluations
for the learning program in which they had participated. The aims of the Rural Weeks were clearly established:
- Aim of First Year Rural Week: To introduce students to community life, medical and multi-disciplinary health service delivery, and future educational and career opportunities in rural areas. Students will also be introduced to indigenous culture and health care needs.
- Aim of Second Year Rural Week: To provide students with further consolidation and experience through direct participation in a learning program designed to enhance learning and skills development in rural clinical practice and health service delivery in acute and community rural and remote settings.
- Aim of Iga Warta Rural Week: To engender in students a respectful knowledge of issues relating to Aboriginal culture.
Table 3 Evaluation Methods for Case Study 2B: Medicine students’ 1 week rural placements (1st-2nd year)

Role 1: Evaluation as the comparison of performance with standards
- Measurable outcome: Pre- and post-attitude testing on the instrument showed some significant shifts in key attitudes to rural placements and readiness.
- Transformative Action/Reflection: Focus group conducted on site by the evaluator to tease out dimensions of gender in the attitude shift.
- Evaluation method used: Paper-based surveys emailed to site for local administration, analysed in SPSS on site in the Spencer Gulf by a statistician.

Role 2: Evaluation as information gathering for decision-making
- Measurable outcome: Post-visit Student Evaluation of Learning and Teaching revealed the impact of site-specific activities linked to Rural Week learning and teaching student satisfaction.
- Transformative Action/Reflection: Action-research model adopted for developing, trialling, evaluating and modifying activities for further trialling, for 3 discrete Rural Week programmes per annum, at multiple sites.
- Evaluation method used: Paper-based surveys emailed to site for local administration, analysed by OMR in Adelaide by the LTDU, with results (including open-ended responses) emailed back to sites as pdf files.

Role 3: Evaluation as participation in critical dialogue
- Measurable outcome: Rural Weeks conducted for 2003, planned for 2004.
- Transformative Action/Reflection: Planning for 2004 related to evaluations from 2003.
- Evaluation method used: Compilation of SELTs, students’ debriefs on site, and preceptor evaluations.
Tools and Techniques
Case Study 1: Public Health Students Scholarship Placements Scheme
vGallery was used to develop and administer the online surveys to students, academics, and clinical preceptors
involved in the NUPHISS pilot project. vGallery (virtual Gallery) is an online exhibition, assessment and
feedback system which can be configured in a confidential online evaluation (survey) mode. It was developed by
Woodbury, Roberts and Shannon (2000, 2000a). The survey tool is configurable by the curator of the vGallery.
vGallery surveys can support the quantitative instrument question types Likert preference scale, multiple choice
and ordering, and the qualitative types short answer and essay (Figure 1). Three separate evaluation vGalleries were
established with individual password-protected access: one each for students, academics and clinical
supervisors. The URL of the devoted vGallery Survey site was emailed individually to the students, academic and
clinical supervisors involved in the Survey (Figure 2). The interface is shown in Figure 3. Submission is easy
(Figure 4) and the survey results are emailed back to the Curator as a text-delimited file (Figure 5). The files can
be imported into Excel for analysis (Figure 6) and reporting (Figures 7, 8).
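As an indication of how light-weight this analysis step can be, the sketch below tallies 5-point Likert responses from a tab-delimited export of the kind the Curator receives by email, ready for charting in Excel or elsewhere. It is a minimal sketch rather than part of vGallery itself; the file name and the convention of one column per question are illustrative assumptions.

```python
# A minimal sketch, not vGallery itself: tally 5-point Likert responses from a
# tab-delimited survey export (one column per question, one row per respondent).
# The file name and column layout are illustrative assumptions.
import csv
from collections import Counter, defaultdict

LIKERT_LABELS = {1: "Strongly disagree", 2: "Disagree", 3: "Neutral",
                 4: "Agree", 5: "Strongly agree"}

def tally_likert(path):
    """Return {question: Counter of responses 1-5} from a tab-delimited export."""
    tallies = defaultdict(Counter)
    with open(path, newline="", encoding="utf-8") as export:
        for row in csv.DictReader(export, delimiter="\t"):
            for question, answer in row.items():
                if answer and answer.strip() in {"1", "2", "3", "4", "5"}:
                    tallies[question][int(answer)] += 1   # free-text columns are skipped
    return tallies

if __name__ == "__main__":
    for question, counts in tally_likert("nuphiss_students.txt").items():
        total = sum(counts.values())
        print(question)
        for value in range(1, 6):
            share = counts[value] / total if total else 0.0
            print(f"  {LIKERT_LABELS[value]:>17}: {counts[value]:3d} ({share:.0%})")
```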
The advantages of the vGallery Survey tool for conducting evaluations are:
- Professional appearance;
- Speed of feedback from respondents;
- Reporting to Excel automatically conducted;
- No data keying by evaluation personnel;
- Evaluation time can be spent on evaluation, i.e. understanding what data mean rather than gathering data;
- Good response rates (students 89%; academics 100%; work placement supervisors 68%).
The disadvantages are that:
- universal web access amongst respondents is required; and
- keyboard familiarity is assumed, or otherwise long-answer questions will not be as fully answered as in a face-to-face encounter. Only one respondent (a work placement supervisor) requested a paper-based survey instrument.
The results of the evaluation revealed that the most substantial impact on the success of the placement was
brought about by the personal and professional welcome of the supervisor; orientation to the workplace and to
the roles of other staff; clear problem definition for the allotted project; and realistic resources for completing
that project. Social and professional integration into the workplace was a highly valued attribute of the placement
for students. The evaluation results were published to the stakeholders and presented at the 34th Public Health
Association of Australia Annual Conference, Adelaide, 29 September – 2 October 2002, in a session “Early Outcomes
from National Public Health Vacation Scholarships” (Shannon and Braunack-Mayer, 2002).
Figure 1 (above) and Figure 2 (below) vGallery Survey interface with completion instructions
Figure 3 (above) and Figure 4 (below) The Survey interface and Survey submission page
Figure 5 Emailing of collated survey returns is ordered on demand by the curator
Figure 6 Text delimited files imported into Excel to form a database for analysis and reporting
Figure 7 Reporting 5 point Likert Scale answers and constructing Histogram from Excel
Case 2A: 5th Year Medicine Students
In the Spencer Gulf, the lack of stability of the “Smart Board” and video conferencing between multiple sites
prevented the envisaged monthly face-to-face synchronous focus groups between students in dispersed sites and
the evaluator in Adelaide. When operational, SmartBoard allowed integration of video conferencing between
sites and Adelaide, where it was available in a Lecture Theatre and a Tutorial Room at the Medical School
(Figure 8). Students had real-time access to sight and sound of each other at 5 sites, and of the instructor or
evaluator in Adelaide. At the same time a modified whiteboard – the so-called “SmartBoard” – allowed
transmission of digital material of any kind (presentations, documents or diagrams) to which all sites could
contribute. However, the ideal of conducting evaluations dynamically through this technology was not realistic,
as the technology did not perform adequately and reliably on more than a handful of occasions. Additionally, so
delicate were the set up and operation during the whole of 2003 that an IT operator familiar with SmartBoard
needed to be present at all times. Even supposing the technology did operate as envisaged, the presence of an
“outsider” to the evaluator–evaluatee duo may have modified the (otherwise) frank responses of the students.
Consequently, given the small number of students, reliance on email was preferred to the desirable use of
face-to-face video technology or web-delivered surveys. Each month 4 discrete questions were emailed to students
at their preferred email addresses by the evaluator (Figure 9). Responses were emailed by students (Figure 10),
collated and analysed by the evaluator, whereupon reaction by the curriculum and academic counselling team
followed. When specific evaluations were required, devoted additional questions were emailed and follow-up
responses sought.
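To illustrate the collation step, the following sketch gathers emailed replies that have been saved as plain-text files and groups the text following “Q1:” to “Q4:” markers by question, so that each month’s answers can be read side by side. It is an assumption about workflow rather than the actual process used in the study; the folder layout, file naming and question-marker convention are hypothetical.

```python
# An illustrative sketch only: collate emailed monthly evaluation replies saved as
# plain-text files (e.g. "2003-05_student03.txt"), grouping the text that follows
# each "Q1:".."Q4:" marker by question for qualitative analysis. The file naming
# and the question-marker convention are assumptions, not the study's format.
import re
from collections import defaultdict
from pathlib import Path

QUESTION_MARKER = re.compile(r"^Q([1-4]):\s*(.*)", re.IGNORECASE)

def collate_replies(folder):
    """Return {question number: [(reply file stem, answer text), ...]}."""
    collated = defaultdict(list)
    for reply in sorted(Path(folder).glob("*.txt")):
        current, fragments = None, []
        for line in reply.read_text(encoding="utf-8").splitlines():
            marker = QUESTION_MARKER.match(line.strip())
            if marker:                      # a new question starts: flush the previous answer
                if current is not None:
                    collated[current].append((reply.stem, " ".join(fragments).strip()))
                current, fragments = int(marker.group(1)), [marker.group(2)]
            elif current is not None:       # continuation of the current answer
                fragments.append(line.strip())
        if current is not None:             # flush the final answer in the file
            collated[current].append((reply.stem, " ".join(fragments).strip()))
    return collated

if __name__ == "__main__":
    for number, answers in sorted(collate_replies("replies/2003-05").items()):
        print(f"Question {number}: {len(answers)} responses collated")
```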
The reporting to various stakeholders took forms which acknowledged their needs. The Commonwealth Department
of Health and Ageing requested “thick data” in July about the students’ learning experiences; there were two
aspects to this feedback. First, the Commonwealth evaluator travelled to Whyalla and conducted face-to-face
interviews, and thereafter requested that additional data be collated and emailed. The University (Faculty
level) requested reporting of student satisfaction through its Medical Education Unit Seminar in September –
this evaluation feedback took the form of a seminar presentation. The academic staff required evaluation data to
support decision-making; frequently this evaluation data was emailed to staff at regional sites. For practices and
clinical preceptors, prepared evaluation documents were emailed to their practices. Dialogue about the
evaluation proceeded by email. The community will be evaluated in an envisaged program evaluation for 2004,
which will seek to measure the extent and depth of the impact of the SGRHS presence on the community. Students
were engaged at several levels – feedback about the changes enacted through responses to evaluation was
relayed during teleconferences and in person. Additionally, relevant evaluation reports were uploaded to the
Student Corner in the SGRHS website, and to the Rural and Indigenous Health Section of the Department of
General Practice site, for example “Living and Learning in the Gulf” is available to all intending students at
http://sgrhs.unisa.edu.au/studentcorner_frameset.htm.
The advantage of this predominantly email-based evaluation process is the ubiquity of email as an informal and
relevant communication protocol. Email was used by all participants and all stakeholders, and both evaluations
and reports of evaluations were quickly and cheaply disseminated through this means. Students’ response rates
to the monthly evaluations diminished during the year from 7/10 to 5/10, in contrast with their focus group
attendance, which was 10/10 at all three face-to-face groups.
One disadvantage of this evaluation protocol is that keyboard familiarity is a prerequisite, or otherwise long-answer
questions will not be as fully answered as in a face-to-face encounter. Indeed, it was during the
externally conducted evaluation of the evaluation process that students declared that they would have preferred a
mixture of emailed questions and telephoned questions. Another disadvantage is the patchy web access at
regional sites – students stated that the computers at Port Lincoln Hospital and their living accommodation were
simultaneously out of action, with a technician needing to travel there from Whyalla to install anti-virus
software and the like because they were networked University computers. As students have access to hospital
computers and the internet only through password protection, their access was interrupted for 4 weeks. Further,
attachments could not be uploaded at all sites, notably Port Augusta.
The results of the evaluation revealed that the most favourable aspects of the 26 week rural placement for
students were their increased clinical and procedural competence through “hands-on” learning, answering
questions and one-to-one tutorials; their realistic view of rural medical practice and lifestyles; and their greater
focus on the benefits and disadvantages of further rural educational and career goals. The evaluation results from
practices and preceptors revealed that students adapted to a variety of effective modes of clinical teaching, and
that students valued opportunities to interact professionally with doctors, other practice and hospital staff, and
consultants. The evaluation results to June 2003 were presented at the ANZAME (The Association for Health
Professional Education) Annual Conference 2003, Melbourne, Australia, 4-7 July 2003, in a presentation titled
“What is the key to Quality Learning in Rural Placements for undergraduates?” (Newbury, Shannon, Jones,
Wilson, Lawless, 2003).
Figure 8 Video Conferencing and Smart Board Technology operating over 5 sites
Figure 9 Four discrete evaluation questions emailed per month
Figure 10 Students responded to evaluation questions by interleaving answers into the question text
Case 2B: 1st and 2nd Year Medicine Students
The 1st and 2nd year Rural Weeks were conducted during three different weeks of the academic year, at multiple
sites on each occasion. One Adelaide-based evaluator could neither travel to all sites to conduct face-to-face
evaluations nor administer paper-based instruments simultaneously on site. Site-based non-evaluation staff were
inducted to assist with the evaluation process. As these staff were typically also clinical or academic staff
participating in the Rural Weeks programmes, self-selected student volunteers administered the paper-based
evaluation instruments and returned them in a sealed envelope to their academic supervisor. Academic clinical
staff conducted face-to-face debriefing sessions with pre-selected questions at each site.
A generic and a site-specific instrument were administered, as a pair, to each student. Site-specific instruments
were paper-based surveys comprising 7-point Likert responses and free-text answers. Surveys were imaged using
a Canon DR3060 scanner, and the images were then analysed using ReadSoft Forms optical mark recognition
software. Selecting the questions is the role of the evaluator (Figure 11). Preparing the survey form, analysing
and reporting were conducted by the evaluation program of the Learning and Teaching Development Unit
(Figures 12, 13). The resultant quantitative data was reported in summary form, with the qualitative (free-answer)
responses returned to the requestor as a series of images of answer spaces (Figure 14). These pdf files, comprising
analyses of Likert responses and open-ended responses, were then emailed to the various learning centres in the
Spencer Gulf along with a discussion paper. The reporting to other stakeholders took various forms which
acknowledged their needs – typically summarising evaluation results through coding and analysis of open-ended
responses, and focus groups, to detail the reasons behind the Likert scale analyses. These responses,
to academic clinicians, hosting practices and indigenous cultural awareness providers, were also emailed.
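The quantitative summary reported back to sites could be produced with very little tooling. The sketch below is a hedged illustration of that summary step only: it does not reproduce ReadSoft Forms or its output formats, and the CSV layout, file name and 1–7 coding are assumptions made for the example.

```python
# A hedged sketch of the quantitative summary step, not ReadSoft Forms itself:
# given a CSV export of 7-point Likert responses (one column per survey item,
# blank cells for no response), report the response count and mean per item.
# The column layout and file name are assumptions for illustration.
import csv
from statistics import mean

def summarise_likert7(path):
    """Return {item: (number of responses, mean score)} from a CSV export."""
    scores = {}
    with open(path, newline="", encoding="utf-8") as export:
        for row in csv.DictReader(export):
            for item, cell in row.items():
                if cell is None:
                    continue
                cell = cell.strip()
                if cell.isdigit() and 1 <= int(cell) <= 7:
                    scores.setdefault(item, []).append(int(cell))
    return {item: (len(values), mean(values)) for item, values in scores.items()}

if __name__ == "__main__":
    for item, (count, average) in summarise_likert7("rural_week_site_survey.csv").items():
        print(f"{item}: n={count}, mean={average:.2f} on a 1-7 scale")
```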
The advantages of using the ReadSoft Forms optical mark recognition software were the potential speed and
accuracy of the automatic processing, and the ability to share open-ended answers with widely distant colleagues
at multiple sites without copying and transporting paper surveys to sites. Student response rates remained high
(up to 100%) throughout the year; the novelty, and the minimal effort involved in completing cross-the-box
surveys, may be factors here.
Disadvantages of this evaluation protocol included its inflexibility regarding the forms of questions which could
be accommodated in the ReadSoft Forms optical mark recognition software by the Evaluation Program of the
Learning and Teaching Development Unit, and the necessity to involve the Unit in preparing the survey forms.
During the 2002 pilot of the ReadSoft Forms optical mark recognition software, any academic staff member
could prepare surveys using the supplied ReadSoft Forms templates. During the 2003 roll-out to the University
this flexibility was removed. The process was therefore time consuming: the evaluator (or requestor) prepared
a set of questions for an intended student survey and relayed the questions to the LTDU, who prepared a draft
survey form for final approval by the requestor. When that approval was received, a final survey form was
prepared and emailed to the requestor (or the evaluator) by the LTDU. The only question types which could be
accommodated were 7-point Likert scale questions on a “strongly agree” to “strongly disagree” anchor scale, and
open-ended questions.
The results of the evaluations revealed that students in both 1st and 2nd year who participated in the Rural Weeks
felt comfortable with the experience of shadowing a rural health professional, and that the health professional
accommodated their presence. First year students particularly welcomed the ambulance sessions which gave
them the opportunity to understand the role of community ambulance volunteers, and the equipment used on
ambulances. Second year students who selected the Cultural Awareness rural week, visiting the Iga Warta
indigenous community at Neppabunna, reported that they learned about Aboriginal culture, and that they felt
comfortable and welcome in a community where there were many opportunities to ask questions and interact
with community members.
The evaluation results to September 2003 were presented internally at the Medical Education Research Seminar
(24 September 2003) and the Spencer Gulf Rural Health School Strategic Operational Review Meeting (20
October 2003); they will be presented at the Gender Conference, 5-6 December (Monash-Adelaide-Flinders), and
will be prepared for external publication in the Australian rural health or rural medicine press after the analysis
of all 2003 results is completed.
Figure 11 Likert Questions are selected by the evaluator
Figure 12 (above) Likert Scale questions as Surveys and Figure 13 (below) analysed and reported
Figure 14 Open-ended questions reported as text images
Summary
There are multiple ways in which the ubiquity of the web can facilitate cost-effective quantitative and qualitative
evaluations. This paper has showcased several tools and techniques, and provided a commentary on their
advantages and disadvantages. A summary is provided in Table 5.
Table 5 Comparison of web-supported evaluation tools and processes

vGallery web-based Survey tool
- Advantages: Self-prepared; professional appearance; speed of feedback from respondents; reports automatically to Excel; no data keying by evaluation personnel; evaluation time can be spent productively; good response rates; no paper – nothing to lose.
- Disadvantages: Universal web access required; keyboard familiarity is assumed; survey form must be carefully prepared to avoid glitches; all respondents must be enrolled as vGallery users on the University of Adelaide PeopleSoft database to access the Survey (provided with a username and password).

Web-based Discussion Board
- Advantages: Supports any type of question; interactive; responsive; group focused.
- Disadvantages: Asynchronous; universal web access required; keyboard familiarity is assumed; can be talking to an empty “room”.

Email Evaluation Questions
- Advantages: Ubiquity of email as a relevant and informal communication protocol; cheap; requires no particular platform – hotmail and private ISP providers utilised by students as preferred email addresses as well as official University student email; no paper – nothing to lose.
- Disadvantages: Email access problems at some sites; attachments would not upload at some remote learning centres (bandwidth issues); response rates diminished during the year; no automatic accumulation of responses to a database.

Optical Mark Recognition Software Surveys
- Advantages: Response rates remained high; OMR software collates, analyses and reports digitally; speed (potentially) and accuracy ensured in reporting; reports emailed; reports include scanned open-ended questions – no loss of data richness; University-supported system.
- Disadvantages: Paper based – document storage issues with originals; current software set-up only accommodates some types of questions; need to involve the LTDU in the process of preparing surveys, analysing etc.
In the Spencer Gulf Rural Health School the two programmes will continue: the “Early Years Program”,
providing all students in 1st and 2nd year with Rural Weeks in Whyalla, Minlaton, Clare, Kadina or Iga Warta,
and the “Clinical Years Program”, providing 940 weeks of teaching in 2004 to 4th, 5th and 6th years at dispersed
sites in the Spencer Gulf at Port Lincoln, Whyalla, Port Augusta, Port Pirie, Kadina and Clare. The availability of
web-based evaluation protocols which are reliable, valid and yet personal is vital to an Adelaide-based evaluator
conducting an educational evaluation for the three outlined purposes:
“1. Evaluation as the comparison of performance with standards
2. Evaluation as information gathering for decision-making
3. Evaluation as participation in critical dialogue” (Smith 2003).
Conclusion
In 2004 the rural medicine evaluation continues for an increased cohort, as the 4th and 5th year cohort grows from
14 students in 2003 to 56 students in 2004. As a result of trialling various forms of web-supported evaluation
tools and techniques, a decision has been made to continue paper-based surveys utilising optical mark
recognition software for the first and second year Rural Weeks. For evaluating the longer placements of 4th and
5th year, a combination of devoted web-delivered surveys and teleconferences or telephone calls (as proxy focus
groups) will be conducted, possibly supported by online evaluation discussion boards.
Copyright
© Susan Shannon
This paper was first presented at the Evaluations and Assessment Conference: A Commitment to Quality,
Adelaide, November 24-25 2003
Acknowledgements
Bob Smith has discussed concepts of educational evaluation with me during 2003 as I undertook the Rural
Medicine evaluations.
References
Braunack-Mayer, Annette (2002) Brief to Evaluate, NUPHISS Consortium, The University of Adelaide,
February 2002, p.1
Dept of Education, Science and Training, (June 2002), Striving for Quality: Learning, Teaching and Scholarship
Report, DEST, Canberra. Accessed at
http://www.backingaustraliasfuture.gov.au/publications/striving_for_quality/pdf/quality.pdf
Hagan, Dianne (1997) Student feedback via the World Wide Web in ultiBASE Online Journal, June 1997
Accessed 06-11-03 at http://ultibase.rmit.edu.au/Articles/june97/hagan1.htm
Kemmis, Stephen (1982) Seven Principles for Program Evaluation in Curriculum Development and Innovation
Journal of Curriculum Studies 14 (3) 221-240
Kemmis, S (1993) Foucault, Habermas and Evaluation Curriculum Studies 1(1) 35-54
Lowder, Jason, & Hagan, Dianne (1999) "Web-based student feedback to improve learning" in
Proceedings of the 4th annual SIGCSE/SIGCUE ITiCSE conference on Innovation and Technology in
Computer Science Education, Cracow, Poland June 27 - July, 1999 Accessed 06-11-03 at
http://portal.acm.org/citation.cfm?id=305902&jmp=cit&coll=GUIDE&dl=ACM&CFID=11111111&C
FTOKEN=2222222#CIT
Provus, Malcolm (1973) “Evaluation of ongoing programs in the public school system” in Worthen, B., and
Sanders, J. (eds) Educational Evaluation: Theory and Practice Ohio: Charles A. Jones pp 170-217
Sadler, Royce (1999) “Preaching teaching”, The Australian, Wednesday 7th July, p.37 in Dept of Education,
Science and Training, (2002a), Striving for Quality: Learning, Teaching and Scholarship Higher Education
Review Process: Backing Australia’s Future Publications, Dept of Education, Science and Training, Canberra
(21 June 2002) Accessed 30-09-03 at http://www.backingaustraliasfuture.gov.au/pubs.htm#2 Point 120
Shannon, Susan (2003) “Programme evaluation in CME” The Lancet Vol 362 September 27 2003 p 1084
Shannon, Susan and Braunack-Mayer, Annette (2002) Early Outcomes from National Undergraduate Public
Health Vacation Scholarships Paper presented at the 34th Public Health Association of Australia Annual
Conference, Adelaide Festival Centre, Adelaide 29 Sept – 2 Oct 2002
Shannon, Susan and Doube, Loene (2003a) “Factors influencing the adoption and use of web-supported teaching
by academic staff at the University of Adelaide”. Report prepared for the Deputy Vice Chancellor (Education) &
Provost, supported by a University of Adelaide Learning and Teaching Development Grant the University of
Adelaide, June 2003. The University of Adelaide Accessed 31-10-03 at
http://www.adelaide.edu.au/hr/development/academic/Final%20Report.pdf
Sheard, Judy, Miller, Jan, Ramakrishnan, Sita and Hurst, John (2002) Student Satisfaction with a Web-based
Anonymous Feedback System Paper presented at AUSWEB 2002: The Eighth Australian World Wide Web
Conference, Twin Waters Resort, Sunshine Coast Queensland 6 –10 July 2002 Accessed 06-11-03 at
http://ausweb.scu.edu.au/aw02/papers/refereed/sheard3/index.html
Smith, Bob (1999) “It doesn’t count because it’s subjective!” (Re)conceptualising the qualitative researcher role
as ‘validity’ embraces subjectivity Paper presented in the Advance Paper section of the AARE Annual
Conference, Adelaide, 1998
Smith, Bob (2003) Personal email communication, 28 April 2003
Stufflebeam, Daniel (1973) “An Introduction to the PDK book Educational Evaluation and Decision-Making”
in Worthen, B., and Sanders, J. (eds) Educational Evaluation: Theory and Practice Ohio: Charles A. Jones pp 128-142
Stufflebeam, Daniel (2003) “The CIPP Model of Evaluation” pp 31-62 in Kellaghan, T., Stufflebeam, D., and
Wingate, L., (eds) International Handbook of Educational Evaluation London: Kluwer Academic Publishers
Woodbury, Rob, Roberts, Ian and Shannon, Susan (2000), vGallery, Adelaide University Online 2000
http://online.adelaide.edu.au/vGallery
Woodbury, Rob, Roberts, Ian and Shannon, Susan (2000a) vGallery: Web Spaces for Collaboration and
Assessment, Australian University Teaching Committee, 2000 National Teaching Forum, Assessment in Higher
Education, Canberra 6-7 Dec 2000, on the WWW at http://www.autc.gov.au/forum/papers.htm