Tracey Winning
Elaine Lim*
Grant Townsend
Dental School, The University of Adelaide
*Currently a private practitioner, Melbourne
tracey.winning@adelaide.edu.au
Abstract:
Dental students in third (n=35) and fifth years (n=50) at Adelaide and Trinity College Dental Schools were surveyed about their experiences of assessment and their perceptions of the importance of particular aspects of assessment.
Students (1) reported on their assessment experience within their programmes by describing a critical assessment incident and their response to it, and (2) rated assessment purposes and features using a 5-point Likert scale of relative importance. The students described a range of assessment methods, including group assignments, vivas, laboratory assessment, and problem-based learning tutorials, but written examinations/tests and clinical assessment were discussed most frequently. Negative assessment experiences were commonly noted. The two most frequently raised issues were lack of congruence between student and staff perceptions of performance, and not receiving adequate feedback. There were no significant differences between years or schools in students' ratings of the importance of assessment purposes. Overall, the students rated the provision of feedback on learning and motivation for learning as the most important purposes (>80%). Patient-based scenarios were rated as the most important method for judging students’ learning (>90%), whilst group-work tasks were rated of little importance. Overall, students rated clear requirements and feedback as the most important characteristics for positive assessment outcomes. Students' ratings of assessment purposes, characteristics and methods closely matched the features of good assessment practice found in the literature. However, their assessment experiences indicate the need for review of assessment in both schools, particularly related to student-staff expectations of performance levels, and the provision of feedback.
Key words: assessment; dental education; student perceptions; problem-based learning.
Introduction

Problem-based learning (PBL) has been implemented in several Australian and overseas dental schools over the past decade (Mullins et al., 2001), and concomitantly some traditional assessment methods used in dental schools have been replaced with a broader range of methods (Mullins et al., 2001; Davenport et al., 2001). The new methods are designed to assess achievement of the goals of PBL, including integration of basic and clinical sciences, development of clinical reasoning skills, and development of learning and group skills (Nendaz and Tekian, 1999). As in many other health professional programmes, a major component of assessment for dental students involves regular assessment of students' management of patients in the clinic. Consistent with a PBL approach, clinic assessment emphasises the role of the student, requiring students to assess their own performance and learning needs (Wetherell et al., 1999). The new assessment methods have been developed to match the PBL processes and outcomes that students experience in weekly PBL classes and formative assessments. However, students continue to express concern about 'how they are going to be assessed or demonstrate their learning'.
Assessment has been shown to drive students' views of curriculum processes and content (Boud, 1990; Ramsden, 1992), and it has a major impact on the form and quality of students' learning. Regarding the effect of assessment on learning, or 'backwash', Biggs (1996, p8, italics in original) noted that "it is the student's perception of the test, and of the demands that it is seen to make, that generate the effects of backwash". Students' perceptions of what different test formats require influence their approaches to learning, which in turn shape their levels of understanding (Ramsden, 1992; Biggs, 1996). Furthermore, students' perceptions of assessment demands may differ from the intended aims and objectives of the curriculum (Biggs, 1996). Therefore, "it is not what teachers believe assessment to be testing which governs students' behaviour, but their own perceptions" (Boud, 1990, p107).
However, the literature on assessment in dental schools has been generated predominantly from an 'educator's' perspective (eg, Davenport et al., 1998; Manogue et al., 2001). The student's perspective is usually considered to a lesser extent, as reflected by the sparse literature on dental students' perceptions of assessment and of innovative assessment developments. Nevertheless, several recent articles have explored students' responses to various methods of assessment (Sambell and McDowell, 1998; Lindblom-Ylänne and Lonka, 2001), students' perceptions of the purposes and fairness of assessment approaches (Duffield and Spencer, 2002), and the value of different assessment tasks (Rees et al., 2002). These studies indicated that although students' understanding of assessment aims may align with staff aims, students responded to the same tasks in different and complex ways in terms of their study methods and motivations (Sambell and McDowell, 1998; Lindblom-Ylänne and Lonka, 2001). Also, while the perceived purposes and fairness of specific assessment methods differed with students' year level, all students wanted more feedback to guide future learning (Duffield and Spencer, 2002). Furthermore, students criticised performance-based assessments, expressing different opinions about who should be involved in assessing their performance, but they still valued this format more than assessment that only addressed their theoretical knowledge (Rees et al., 2002).
To understand the effects of assessment on students and their learning, we need to establish open dialogue with students about their perceptions of different assessment methods, particularly those used within PBL programmes, the demands these methods place on students, and how effective students perceive them to be in judging their performance (Boud, 1990; Davenport et al., 1998; Sambell and McDowell, 1998; Seale et al., 2000; Duffield and Spencer, 2002). Therefore, as part of monitoring assessment processes (Boud, 1990), this study explored students' perspectives on their experiences of assessment within two PBL dental programmes. The specific aims of the study were:
1. To identify how students responded to specific assessment experiences that they selected, and what they gained/learned from these experiences.
2. To identify which purposes, features and methods of assessment students rated as important.
Methods

The students who participated in this study were from two dental schools with PBL curricula that had been running for at least five years: The University of Adelaide, Australia (Adelaide), and the University of Dublin, Trinity College, Ireland (Trinity). Both schools introduced new curricula in response to the increasing focus on student-centred learning in higher education and the constantly changing context of the dental profession (Mullins et al., 2001; Kelly et al., 1999).

The characteristics of both programmes are summarised in Table 1. The two programmes are similar in length and major objectives. The Adelaide Dental School programme is considered a 'hybrid' PBL course, retaining some formal lecture time throughout the course (Mullins et al., 2001). Trinity, on the other hand, has altered its approach so that the majority of learning occurs in small-group tutorial sessions, laboratories and clinics, supported by occasional lectures linked to the various themes of the programme.
There are similarities in the methods of assessment used in both schools, with a combination of practical and written examinations, as well as continuous monitoring of clinical progress (Table 1). At Trinity, the clinical assessment is supplemented with objective structured clinical examinations (OSCEs).
Table 1. Summary of programme characteristics at The University of Adelaide and Trinity College Dental Schools.

| Information | Adelaide Dental School | Trinity College Dental School |
|---|---|---|
| Course duration | 5-year undergraduate; upon completion eligible for registration and subsequent employment | 5-year undergraduate; upon completion eligible for registration and subsequent employment |
| Entrance requirements | School leavers/tertiary-transfer students (approx 25%): combination of academic results, Undergraduate Medicine and Health Sciences Admission Test (UMAT) score and structured interview | School leavers: application through national system, selected on the basis of academic results. Mature age: interview (no more than 2 students/year) |
| Number of students | Approx 50 students per year | Approx 40 students per year |
| Assessment methods (as outlined in the Dental Schools' programme outlines) | Written examinations (multiple choice, short answer, structured problems and essays); tutorial participation and exercises, including PBL tutorials; practical/laboratory assessments; group project reports and presentations; Dental Clinical Practice Assessment Portfolio (includes self-assessment and tutor assessment); annual interview (viva) | Written examinations (multiple choice, short answer, structured problems and essays); PBL tutorial participation; practical/laboratory assessments; group projects/reports; clinical competence tests and clinical credit hours system; objective structured clinical examinations (OSCEs); final dental examination where students review seen and unseen patients (viva) |
The study population comprised students enrolled in the dental programmes at Adelaide and Trinity Dental Schools. The sampling rationale was to recruit students who could compare the extent of alignment of their assessment experiences with their professional practice learning. Therefore, the sample comprised students from the third- and fifth-year levels of both programmes. Third-year students have completed approximately half of their programme and so have experienced a range of assessment methods within each programme, plus at least one year of clinical work with patients. Fifth-year students have experienced the majority of assessment methods in their programmes and, in their final year, spend most of their time in clinic with patients in preparation for independent practice on graduation. All participants were volunteers. The numbers of participants in each group were: third-year Adelaide, n = 17 (46%); fifth-year Adelaide, n = 25 (41%); third-year Trinity, n = 18 (45%); and fifth-year Trinity, n = 25 (62%), giving a total of 85 students.
Students completed a two-part survey. Part 1 was an open-ended question and Part 2 was a series of scale-response items. To reduce the possibility of identifying any students, demographic data (eg, age, gender, or previous academic experience) were not collected. Part 1 collected students' experiences of assessment using a critical incident approach. This approach has been used in educational research to focus participants on a particular issue or experience that they believe is important (Brookfield, 1990; Metcalfe and Matharu, 1995). It allows participants to express their ideas as freely as possible, with minimal direction, in order to establish their own perspective. Students were asked to identify an assessment experience, either positive or negative, and to describe the context of that experience, what they were thinking, and how they felt during and after it.
Part 2 of the survey asked students to rate assessment purposes, features and methods by relative importance using a 5-point Likert scale: very important (1), important (2), of some importance (3), of little importance (4), not important (5). This section included statements derived from the literature (Rowntree, 1987; Boud, 1990; Ramsden, 1992) that addressed:
- the purposes of assessment (eg, feedback to students to focus learning, motivation to learn, feedback about teaching, maintenance of standards); and
- the features of assessment necessary for positive outcomes.
The final question asked students to rate the range of assessment methods common to both programmes in terms of relevance to their learning.
The survey was piloted with a group of six fourth-year students at Adelaide to check for ambiguities in wording and for other difficulties students might have in following the instructions and/or completing the questions. This trial indicated that the instructions were clear and that students had no difficulty completing the survey. It was estimated that students would take approximately 10-15 minutes to complete the survey.
Prior to or immediately after a lecture, third- and fifth-year students were briefed about the survey and its purpose. Students were informed that the surveys were anonymous and confidential, and no attempt was made to record who did or did not participate. The staff involved in this project (TW, GT) had no current teaching or assessment responsibilities for the student groups sampled. The Adelaide students were asked to complete the survey in early November 2000, prior to a major examination period, whilst the Trinity students were asked in early December 2000, also prior to a major examination period. Students were advised of the suggested time to spend completing the survey; however, there were no time restrictions. At both sites, the majority of students took approximately 20 minutes to complete the survey.
Part 1 - Critical assessment incidents: The survey responses were analysed by one author (EL), using a multi-step process (Minichiello et al., 1990) to derive codes and themes, and the analysis was checked by the co-authors. Initially, the responses were coded according to the assessment method(s) described by the student and then categorised into experiences described by students as 'negative' or 'positive'. The next stage involved identifying the issues noted by students, deriving descriptive codes directly from the responses, noting similarities of experiences among students, and then refining these descriptive codes. The codes were then sorted into key themes or issues. To check the credibility of the interpretation, the original responses were reviewed and the thematic analysis was discussed with the other authors. Comments that were vague or difficult to interpret were reviewed by all authors or retained unchanged.
Part 2 - Importance of assessment features: The data for each item in this section were analysed by determining frequency distributions, percentages, and rank order based on the percentage of students who considered each item to be 'very important'. Where a student's intention was unclear (for example, when more than one number was circled or no response was circled), the scores were not counted. The Kruskal-Wallis test (corrected for ties; Ott, 1988) was used to compare responses between groups, ie, schools and years, with the significance level set at p<0.01 because of the large number of comparisons.
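To make this analysis concrete, the sketch below (using invented ratings, not the study's data) shows how the percentage of 'very important' responses and a tie-corrected Kruskal-Wallis comparison across the four groups could be computed; scipy's kruskal function, which applies the tie correction by default, stands in here for the procedure described by Ott (1988).

```python
# Illustrative sketch only (invented data): analysing one Likert item
# (1 = very important ... 5 = not important) as described in the text.
from scipy.stats import kruskal

# One list of ratings per group (hypothetical values, not the study's data).
groups = {
    "Adelaide 3rd year": [1, 2, 1, 2, 3, 1, 2, 1],
    "Adelaide 5th year": [2, 1, 2, 2, 1, 3, 2, 2],
    "Trinity 3rd year": [1, 1, 2, 3, 2, 1, 2, 1],
    "Trinity 5th year": [2, 2, 1, 1, 2, 2, 3, 1],
}

# Percentage of students rating the item 'very important' (code 1),
# the basis used for rank-ordering items in the study.
for name, ratings in groups.items():
    pct = 100 * ratings.count(1) / len(ratings)
    print(f"{name}: {pct:.0f}% 'very important'")

# Kruskal-Wallis comparison across the four groups;
# scipy.stats.kruskal corrects for ties automatically.
h_stat, p_value = kruskal(*groups.values())

# The study set alpha at 0.01 because of the large number of comparisons.
print(f"H = {h_stat:.2f}, p = {p_value:.3f}, significant at 0.01: {p_value < 0.01}")
```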
Results

Part 1 - Critical assessment incidents

Of the volunteers who returned surveys, the analysis rate for this section of the survey was: 93% of third-year Adelaide, 100% of third-year Trinity, and 84% of fifth-year Adelaide and Trinity students. Some students wrote about more than one incident, and the level of detail provided in responses varied. Several students provided extensive detail about their assessment experiences, describing the issues they had encountered and how they had felt; others provided only very brief information.
Assessment Methods
The assessment methods identified by students were grouped into written examinations/tests, clinic assessment (including OSCEs), group assignments, vivas, PBL tutorials, laboratory exercises, or other/general aspects of assessment. Overall, incidents describing written examinations/tests and clinic assessment featured prominently in the responses from both Dental Schools (Table 2). Both groups of third-year students commented on a broader range of topics than the fifth-year students, who mostly noted incidents related to written or clinical assessments (Table 2). Overall, approximately a third of the comments were positive and the remainder were negative (Table 2).
Table 2. Positive or negative experiences of assessment methods as reported in critical incidents.

| Assessment method (+/-) | Adelaide 3rd years (n = 16) | Adelaide 5th years (n = 21) | Trinity 3rd years (n = 18) | Trinity 5th years (n = 21) |
|---|---|---|---|---|
| Written exams/tests | 10 (56%) (3/7) | 3 (14%) (0/3) | 7 (37%) (3/4) | 8 (38%) (4/4) |
| Clinic assessment/OSCEs | 1 (6%) (0/1) | 18 (86%) (5/13) | 9 (47%) (5/4) | 12 (57%)* (5/7) |
| Group assignments | 3 (17%) (0/3) | 0 | 0 | 1 (5%) (0/1) |
| Vivas | 2 (11%) (0/2) | 0 | 1 (5%) (0/1) | 0 |
| Problem-based learning tutorials | 0 | 0 | 2 (11%) (0/2) | 0 |
| Laboratory exercises | 1 (6%) (0/1) | 0 | 0 | 0 |
| Other/general assessment aspects | 1 (6%) (0/1) | 0 | 0 | 0 |

n = number of students who completed this section of the survey; (+/-) = positive/negative experiences. Numbers (percent) represent the number of comments noted, as a percentage of all comments in that group; some students noted more than one incident in this section. * Comments also related to an OSCE for clinical skills.
The key themes or issues derived from the students' comments related to grading and mark allocation, the nature and influence of tutor interactions, the nature of feedback, assessment stress, and assessment organisation and expectations.
Grading
The most common assessment issue raised by students was receiving grades that were unexpected, generally lower than anticipated. This reflected a lack of congruence between students and tutors in their perceptions of the student's level of achievement, with students believing that they had been awarded lower marks than expected or deserved, eg, "when the tutor gave me 45% at the end I got a shock! I have never failed any subject B4 (sic)". This was an issue particularly for Adelaide third- and fifth-year students and Trinity third-year students, and it was noted for both clinic and written assessments. A few students were surprised but pleased to have received a higher mark than they expected for some assessments. Another related issue for students from both Dental Schools was a perceived lack of standardisation between assessors, especially in relation to what was considered an acceptable clinical procedure or performance, eg, "I don't know why I got 50 and someone else with the same assessments got 60 at the end".
Some students expressed concern at the weighting of certain assessment tasks, particularly the fifth-year Trinity students, who indicated that too much weighting was given to written examinations. For those students who commented on group assignments, the award of a common grade to all group members was deemed annoying and de-motivating, as it was felt that non-contributors should not be rewarded equally with contributors, eg, "I felt let down and thought that this person should have received a lower mark".
Influence of tutor interactions
All students indicated that interactions with their tutor had a strong influence on their experience of assessment in clinical situations, vivas, or PBL tutorials. Both positive and negative tutor experiences were described. Positive interactions related to tutor characteristics such as being "encouraging", "supportive", challenging and fair. Negative interactions, on the other hand, were associated with tutors who were "intimidating", or who made discouraging, "indifferent" or offensive comments, occasionally in front of the student's patients. Some students suggested that tutor training and selection processes might be improved to help tutors develop skills for establishing a positive learning environment.
Feedback
The extent and type of feedback given to students, an issue related to student-tutor interactions, was a common factor in both positive and negative assessment experiences. Students who described positive feedback experiences, especially in terms of their learning, had been given timely ("immediate"), constructive, and detailed feedback, which included strategies for improvement and helped them identify "gaps in my own knowledge and the motivation to study on my own". Students who received poor feedback missed out on "valuable learning experience(s)" and found assessment disappointing and de-motivating. Students' negative experiences of assessment, particularly written and clinic assessment, were also associated with a lack of feedback.
Assessment Stress
Not surprisingly, feeling stressed during written or clinic assessments was a memorable feature for students, eg, "during the assessment I was very flustered as it appeared that I could not answer a lot of the questions very well which was worrying". Stressful situations were associated with both external and student-related factors that increased the pressure experienced by students. External stress factors included time (eg, time-restricted assessment tasks, lengthy assessments such as clinic sessions of 3-4 h, and needing to complete assessment near the end of term) and unexpected assessment of clinical performance. Student-related factors included perceptions of high-stakes or unfamiliar assessment tasks, difficult questions, topics/skills not studied for or not focused on in the current unit, eg, "exams never seemed to be focused on what you thought you had to learn", previous poor performance in similar tasks, or fear of failure.
Assessment organisation and expectations
'Assessment organisation' covered comments describing the time provided for examination preparation, format, and content, with most comments focused on written assessments. For both third-year groups, adequate warning of and preparation for written assessment was valued highly, and negative comments resulted when preparation time was inadequate. A related issue was the overlapping of assessment tasks, eg, "clashes in due dates". As noted previously, negative experiences were associated with a perceived lack of clarity in assessment expectations, particularly in relation to exam format, question style, topics covered, and marking methods. However, the OSCE format was considered a positive experience by some students, who perceived it to be "relevant to practical life as a dentist" compared with other styles of assessment. Other students noted satisfaction when their personal goals were achieved in clinic, or when they had performed better than expected. This contrasted with students who expressed concern and distress that they were not achieving at their desired level.
Part 2 - Importance of assessment features

There were no significant differences between the year and school groups' ratings of the importance of the different purposes of assessment (p>0.01; Fig 1). The majority of students rated the provision of feedback from staff to help focus students' learning, motivation to learn, feedback to the teacher, and maintenance of academic standards as 'very important/important' purposes of assessment (Fig 1). No student indicated that feedback to students or teachers, or motivation, was 'not important'. However, students gave lower ratings to 'preparation for life-long learning' and 'selection for future courses' as purposes of assessment (Fig 1).
Fig 1. Ranking of students' perceptions of the importance of assessment purposes (based on the highest percentage of students rating each purpose as 'very important'). No significant differences were found between the different groups of students.
There were no significant differences between the groups regarding the features of assessment perceived to be important for positive assessment outcomes (Fig 2). The features covered the relative importance of clarity about expectations and objectives, feedback, coordinated timing of assessment tasks, achievability of high grades, assessment matching learning, self-evaluation, opportunities for redemption, and student input into assessment methods and processes. Reflecting the focus of comments in the critical incident section of the survey, students rated the majority of these items as 'very important/important' for assessment, eg, clear expectations in terms of requirements and topics, feedback to students, clear objectives, coordinated timing of tasks, and opportunities to achieve high grades/standards. Fewer than 40% of students considered matching of assessment and learning, self-evaluation opportunities, and student input into how they are assessed to be 'very important'. However, the majority of students identified all of these features as 'important' or 'very important' in achieving positive assessment outcomes (Fig 2).
Fig 2. Ranking of students' perceptions of the importance of various features of assessment necessary for positive outcomes (based on the highest percentage of students rating each feature as 'very important'). No significant differences were found between the different groups of students.
Except for viva assessments, there were no significant differences between groups in their perceptions of the importance of different assessment methods for judging their learning (Fig 3, Table 3). For all groups, examinations requiring students to interpret patient-based scenarios were perceived as the most important for assessing student learning (Fig 3), with no students indicating that this assessment method was 'not important'. Group-work tasks and assignments were rated lowest, with similar percentages of students rating these as 'important/very important' and of 'little/no importance' (Fig 3).
Fig 3. Ranking of students’ perceptions of the importance of assessment methods in assessment of their learning (based on the highest percentage of students rating each method as ‘very important’). No significant differences were found between the different groups of students for these methods.
More than half of the students rated practical assessment as 'important/very important' for assessing their learning, and approximately half rated short answer/multiple choice questions as 'important/very important'. Fifth-year Trinity students' ratings of the importance of vivas were significantly higher than those of the other groups (p<0.01), with all other groups rating this form of assessment similarly (Table 3).
Table 3. Students' perceptions of the importance of vivas in judging their learning.

| Vivas | Adelaide 3rd years (n = 17) | Adelaide 5th years (n = 25) | Trinity 3rd years (n = 18) | Trinity 5th years* (n = 25) |
|---|---|---|---|---|
| 'Very important'/'important' | 65% | 44% | 66% | 92% |
| 'Of some importance' | 24% | 28% | 28% | 4% |
| 'Little importance'/'no importance' | 12% | 28% | 6% | 4% |

* p<0.01
Discussion

The students raised many positive and negative experiences of assessment in their PBL dental programmes. Reflecting the predominantly clinical structure of their year level, fifth-year students focused on clinic assessment, while third-year students addressed experiences across a broader range of assessment activities. The major issues raised in students' accounts of their assessment experiences were the amount and clarity of feedback, consistency of grading between students and tutors and between tutors, and clarity of expectations, which sometimes led to stressful experiences.
Students' critical incident accounts of the positive effects of helpful feedback and the negative impact of inadequate feedback were consistent with their ratings of the importance of this aspect of assessment. These views align with the literature indicating that feedback is an essential part of effective learning (Rowntree, 1987; Biggs, 1999) and with the views of medical students (Duffield and Spencer, 2002). However, the implication of the students' negative assessment experiences associated with poor feedback, as noted in our study, is that we need to improve our practices in providing consistent, learning-focused, and timely feedback (Crooks, 1994). This issue of inadequate feedback is not peculiar to Adelaide and Trinity; it was also noted as an area for improvement in a recent study of the assessment values and practices of clinical staff in UK dental schools (Manogue et al., 2001).
As with ambulatory settings in medicine (Bowen and Irby, 2002), dental clinic sessions are characterised by time-pressured communications between students, tutors and patients, with limited opportunities for regular observation of, and feedback to, students. Therefore, to improve this critical component of assessment and learning, we need to improve the management of the time available in clinic and the use of standardised and explicit criteria, and also support clinical tutors to develop a positive learning environment (Manogue et al., 2001). We are piloting a programme for tutor development at Adelaide next year that will focus on learning and assessment in the clinic environment. Although examples of assessment from the clinic environment are complex and more difficult to obtain, we are developing video scenarios of student/patient and student/tutor interactions, supplemented with examples of technical work, that will be used in activities involving discussion and application of the assessment criteria and standards (Rust et al., 2003).
In addition to improving staff feedback to students, our efforts must also focus on supporting students to develop their own skills in self- and peer-evaluation/assessment while they learn. Clear communication between students and tutors, and clarity of criteria and standards, are essential components of effective assessment, including self-assessment (Wetherell et al., 1999; Rust et al., 2003). As noted by Wetherell et al (1999), some students are in favour of self-assessment, but others fail to see the cyclical relationship between learning and assessment and the role of self-assessment in developing their learning processes and outcomes. These students are likely to benefit from activities involving application of criteria in conjunction with actual models representing a range of standards (Rust et al., 2003).
Students who are unable to critically appraise their own work by accurately applying criteria and standards may perceive that they have been unfairly marked, especially if this occurs in conjunction with inadequate tutor feedback. Grading discrepancies, a major issue raised by students in relation to the majority of assessment tasks, were associated with differing student and staff expectations of acceptable standards. When tutors' grades and expectations were incongruent with students' own perceptions of their work, this led to disappointment and frustration, emphasising the need to help students develop effective self-assessment capabilities (Rust et al., 2003). Frustration with grades may also be linked to the perfectionist tendencies of some dental students and their desire to achieve high marks (Sanders and Lushington, 1999). This is supported by the finding that the majority of Adelaide and Trinity students associated positive outcomes with opportunities for achieving high grades or standards.
Consistent with the critical incident issues raised by students around the organisation and expectations of assessment, the scale-response items showed that the majority of students valued clear requirements and objectives for positive assessment outcomes. Both programmes provide information on assessment requirements; for example, assessment times, formats, and weightings are provided at the beginning of each year, along with marking criteria and standards and trial assessments (Crooks, 1994; Rust et al., 2003). However, for some students, the presentation of this information has clearly not been effective. Actively working through this information in conjunction with assessment criteria, standards and models for the various assessment formats is advocated (Rust et al., 2003). Clarification of assessment concepts and their application would also support students in developing their self-evaluation skills, as indicated previously.
Students indicated how their experiences of poor feedback, unanticipated grades, and unclear expectations can interact to exacerbate negative assessment experiences and feelings of stress. This is consistent with findings that examinations and grades were reported as the most stressful curriculum activities at all year levels for Adelaide students, whilst clinic-based stress increases with the seniority of the student (Sanders and Lushington, 1999). Why some students have worse perceptions and experiences than others of the same assessment practice may be explained by the range of previous "experiences, motivations and perspectives" of assessment that students use to interpret their current experience (Sambell and McDowell, 1998, p391). However, the current study design did not enable exploration of these individual differences. A qualitative study using individual interviews may enable us to understand these relationships and better support students to reduce the negative aspects of assessment.
Despite differences in assessment methods, students from both Dental Schools agreed in their ratings of the importance of different assessment methods. Notably, the majority of students rated patient-scenario examinations, which are based on the PBL process, most highly for assessing their learning. This was supported by students' comments that they preferred assessment requiring demonstration of their understanding and application of learning to relevant situations, analogous to professional clinic practice, rather than presentation of facts in isolation. For all students, the next most important method of assessment was practical/laboratory examinations. This was consistent with their comments on the relevance of clinic assessments to their future work and the amount of time they spend in clinic working with patients, especially in fifth year. It was interesting to note the significant difference in perception between the fifth-year Trinity students and the other groups regarding vivas, ie, Trinity students rated vivas as more important than did the other student groups. As the vivas at Trinity involve working with patients, it is likely that students value the relevance of this method of assessment to their future work.
Assignments and group-work tasks were rated of lower importance by students from both Dental Schools. Probably the most contentious issue in assessing group work is deriving grades and assessing competence for individuals from group work (Habeshaw et al., 1993; Freeman, 1995), as noted by students in both schools. It has been suggested, therefore, that an individual's grade from group work should be moderated by assessment of group processes (Freeman, 1995), using peer- and/or self-assessment (Freeman, 1995; Lejk et al., 1996) to address the group's learning processes, based on clear guidelines for monitoring those processes (Williams and Williams, 1995).
A limitation of this study is that the number of participants in each group and the voluntary nature of participation mean that the results may not reflect the experience of the full year-level cohorts. However, there are sufficient numbers to provide a broad view of experiences and attitudes among these student groups. As demographic data were not collected, it is not possible to draw conclusions about the variation of perceptions with gender, age or students' origins (eg, country of birth, previous education level). Regarding the experience of students in other year levels of these programmes, separate work would be needed, eg, with first-year students, for whom the transition experience would be the focus.
The study does not necessarily afford generalisation to other sites. However, there was consistency in the issues highlighted by the students' experiences across the different year levels and programmes and between the two parts of the survey. It is also clear that many of the issues raised are found commonly in the literature on assessment, independent of PBL. For example, students need clear expectations (Seale et al., 2000), and staff and students need support to use standardised criteria and grades effectively (Rust et al., 2003; Manogue et al., 2001). Assessment can be a positive motivator for learning through its relevance to current and future work (Seale et al., 2000) or a negative motivator, depending on previous experience (Sambell and McDowell, 1998). There is also broad recognition by students of the value of feedback in supporting their learning (Sambell and McDowell, 1998; Seale et al., 2000; Duffield and Spencer, 2002).
Conclusions

Monitoring of assessment experiences from dental students' perspectives has demonstrated that some students have positive experiences and perceptions of assessment but, for the majority, negative outcomes prevail. This has occurred notwithstanding attempts to match the learning and assessment experiences within a PBL philosophy and to support students. Given the wide variety of student goals, previous experiences and values, no course can present an 'ideal' assessment scheme for each individual. However, this study has highlighted students' perspectives on the effectiveness of assessment processes and practices in supporting learning, and has identified areas where further improvements can be made. In summary, the main outcomes of this study are:
- Third- and fifth-year students at Trinity and Adelaide Dental Schools reported similar experiences and perceptions of assessment.
- Students were concerned about grades and the associated standards of performance. Students may therefore benefit from participating in activities with tutors that help clarify these standards and involve practice in applying assessment criteria and standards.
- Students valued tutor feedback on their learning highly, especially for written examinations and clinic assessments. Immediate and constructive feedback was perceived as most helpful, whereas students were most perturbed when given generic feedback, little explanation, or no feedback at all.
- The information currently provided to students about expectations was perceived to be ineffective. Students wanted more guidance on assessment, and perhaps this should involve interactive discussion throughout the year.
- Students perceived PBL scenario-based examinations to be the most important method for judging their learning. These scenarios should be used wherever possible in examining students, as they assess students' learning in a manner consistent with their future practice. In contrast, students rated group-work tasks lowly, indicating that improvements need to be made in the way they are assessed.
Acknowledgements

The Australian Dental Research Foundation (ADRF) and Dr Mary Kelly, Trinity College, Dublin, are gratefully acknowledged for their support. The ADRF awarded an Undergraduate Vacation Research Scholarship to Elaine Lim when she was a third-year undergraduate dental student at Adelaide. Dr Kelly enabled Elaine to visit Trinity College and provided help and assistance during her visit. Students at both Trinity College and Adelaide are gratefully acknowledged for their participation in this project.
References

Biggs J. (1996) Assessing learning quality: reconciling institutional, staff and educational demands. Assessment and Evaluation in Higher Education 21: 5-15.
Biggs J. (1999) Assessing for learning quality: I. Principles. In: Teaching for Quality Learning at University. Buckingham: The Society for Research into Higher Education and Open University Press. pp 141-144.
Boud D. (1990) Assessment and the promotion of academic values. Studies in Higher Education 15: 101-111.
Bowen JL, Irby DM. (2002) Assessing quality and costs of education in the ambulatory setting: a review of the literature. Academic Medicine 77: 621-680.
Brookfield S. (1990) The Skilful Teacher: On Technique, Trust and Responsiveness in the Classroom. San Francisco: Jossey-Bass.
Crooks TJ. (1994) Assessing Student Performance. Campbelltown: HERDSA. pp 438-470.
Davenport ES, Davis JEC, Cushing AM, Holsgrove GJ. (1998) An innovation in the assessment of future dentists. British Dental Journal 184: 192-195.
Duffield KE, Spencer JA. (2002) A survey of medical students' views about the purposes and fairness of assessment. Medical Education 36: 879-886.
Freeman M. (1995) Peer assessment by groups of group work. Assessment and Evaluation in Higher Education 20: 289-299.
Habeshaw S, Gibbs G, Habeshaw T. (1993) Assessing group project work. In: 53 Interesting Ways to Assess Your Students. Bristol: Technical and Educational Services Ltd. pp 93-97.
Kelly M, McCartan BE, Schmidt HG. (1999) Cognitive learning theory and its application in the dental curriculum. European Journal of Dental Education 3: 52-56.
Lejk M, Wyvill M, Farrow S. (1996) Group learning and group assessment on undergraduate computing courses in higher education in the UK: results of a survey. Assessment and Evaluation in Higher Education 22: 81-91.
Lindblom-Ylänne S, Lonka K. (2001) Students' perceptions of assessment practices in a traditional medical curriculum. Advances in Health Sciences Education 6: 121-140.
Manogue M, Brown G, Foster H. (2001) Clinical assessment of dental students: values and practices of teachers in restorative dentistry. Medical Education 35: 364-370.
Metcalfe DH, Matharu M. (1995) Students' perception of good and bad teaching: report of a critical incident study. Medical Education 29: 193-197.
Minichiello V, Aroni R, Timewell E, Alexander L. (1990) Analysing the data and writing it up. In: In-depth Interviewing: Researching People. Melbourne: Longman Cheshire. pp 284-307.
Mullins G, Wetherell J, Townsend G, Winning T, Greenwood F. (2001) Problem-based Learning in Dentistry: The Adelaide Experience. Adelaide: The University of Adelaide.
Nendaz M, Tekian A. (1999) Assessment in problem-based learning in medical schools: a literature review. Teaching and Learning in Medicine 11: 232-234.
Ott L. (1988) Introduction to the analysis of variance. In: An Introduction to Statistical Methods and Data Analysis. Boston: PWS-Kent Publishing. pp 422-426.
Ramsden P. (1992) Learning to Teach in Higher Education. London: Routledge. pp 62-85.
Rees C, Sheard C, McPherson A. (2002) Communication skills assessment: the perceptions of medical students at the University of Nottingham. Medical Education 36: 868-878.
Rowntree D. (1987) Assessing Students: How Shall We Know Them? London: Kogan Page. pp 15-33.
Rust C, Price M, O'Donovan B. (2003) Improving students' learning by developing their understanding of assessment criteria and processes. Assessment and Evaluation in Higher Education 28: 147-164.
Sambell K, McDowell L. (1998) The construction of the hidden curriculum: messages and meanings in the assessment of student learning. Assessment and Evaluation in Higher Education 23: 391-402.
Sanders AE, Lushington K. (1999) Sources of stress for Australian dental students. Journal of Dental Education 63: 688-696.
Seale JK, Chapman J, Davey C. (2000) The influence of assessments on students' motivation to learn in a therapy degree course. Medical Education 34: 614-621.
Wetherell J, Mullins G, Hirsch R. (1999) Self-assessment in a problem-based learning curriculum in dentistry. European Journal of Dental Education 3: 97-105.
Williams A, Williams PJ. (1995) Problem based learning in technology education. Research and Development in Problem Based Learning 3: 479-490.