ASSESSMENT REVIEW GROUP

MINUTES:
Meeting of Thursday 22 November 2007
Present:
Andy Sloggett (Chair), Craig Higgins, Quentin Bickle, Graham Clark, Sue Horrill, Rory
Donnelly
In attendance:
Stuart Anderson, Rhian Daniel (in attendance for items minuted under 71 and 72 below)
Apologies:
Andrew Hutchings, Chris McMahon, Karen Ormsby
Note: all indicated papers are held with the record copy of the minutes, available from the Quality Officer.
69 Introduction
69.1 NOTED: Welcome to SA, Associate Dean of Studies, who would be taking on the Chair of the Quality & Standards Committee, to which ARG would be reporting; and Rhian Daniel, presenting analysis of module grade distribution figures.
69.2 NOTED: Correction to the agenda – the date should have read November, not June.
70 Minutes and Matters Arising
70.1 AGREED: Minutes from the last meeting, 23 October 2007, were approved.
70.2 NOTED: Table of Matters Arising from the minutes, as updated 20 November 2007. Various items had been indicated for report to the meeting. Since this was expected to be the final formal meeting of ARG, remaining actions would be noted in the final ARG report (expecting to report some as complete, and list the remainder as due for follow-up).
71 Module grade distribution analysis – Paper J
71.1 NOTED: AS gave background on the development of this analysis, carried out by R. Daniel, who then gave an overview of her report and tabled a revised version produced following discussion with C. Frost earlier that day.
(i) Student ability modelling was a key feature of the analysis – attempting to identify students who performed well across all their modules, then adjusting for this in identifying which modules might be easier or more difficult.
(ii) Figure 1 plotted the distribution of module GPAs, adjusted for the ability of students taking each module. The relevant data was set out in Table 4.
(iii) There was clear evidence of heterogeneity in module marking, even after adjusting for different modules being likely to attract different cross-sections of students; the confidence intervals for quite a number of modules fell either above or below the overall mean.
(iv) While Table 4 also showed the percentage mix of students from different MScs on each module, CF had suggested removing data on which MScs’ students did best or worst on particular modules (in terms of mean GPA against the overall module GPA), as interaction phenomena meant this would not be helpful to consider. Likewise for MScs, it would not be helpful to look at which modules showed the best or worst MSc-specific mean GPAs compared with the overall mean.
(v) The analysis also examined results from modules which are assessed using groupwork; this proved to have a small but clearly apparent effect on overall module grade averages. Modules which used a groupwork component in assessment were clustered at the higher end of the GPA scale, with the overall average GPA for groupwork-based modules being moderately higher than for individually-assessed modules.
(vi) Figure 2 plotted the distribution of MSc GPAs, and this was felt to be reassuring – individual course averages were all very close to the overall average (of just above a “B”). Only two groups (Social Epidemiology students from UCL, and students from the MSc Vet Epi joint course) had average grades within a confidence interval that fell clearly short of this.
(vii) Small correction noted to Table 5 – “number of students” should actually read “number of grades”.
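The ability adjustment described in (i) and (ii) above can be sketched roughly as follows (a minimal illustration only – the actual analysis was statistical modelling by R. Daniel; the data and variable names here are hypothetical):

```python
from statistics import mean

# Hypothetical grades: grades[student][module] = GPA points scored.
grades = {
    "s1": {"STAT": 4.5, "EPI": 4.0},
    "s2": {"STAT": 3.5, "EPI": 3.0, "PARA": 3.5},
    "s3": {"EPI": 4.5, "PARA": 4.0},
}

# Step 1: estimate each student's "ability" as their mean GPA across modules.
ability = {s: mean(g.values()) for s, g in grades.items()}

# Step 2: a module's ability-adjusted score is the mean residual (grade minus
# student ability) over the students who took it; values above/below zero
# suggest a module marks higher/lower than expected given the students it
# attracts.
modules = {m for g in grades.values() for m in g}
adjusted = {
    m: mean(g[m] - ability[s] for s, g in grades.items() if m in g)
    for m in modules
}
```

In this toy data, "STAT" comes out above zero (marked generously relative to its intake) and "EPI" below, illustrating the kind of heterogeneity described in (iii).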
71.2 NOTED: Discussion on potential future adjustments to the analysis methodology:
(i) Including exam (and project?) grades in “student ability” adjustments could aid overall robustness.
(ii) R. Daniel was also considering further work to look at MSc interactions within modules.
(iii) It would be very helpful to run a retrospective analysis to allow a longitudinal view of results/trends over several years.
(iv) It would definitely be worth widening the analysis to cover DL modules – including a total-level comparison of face-to-face and DL modules, and a combined overview of individual module results from across both modes (which should hopefully be completely mixed, but it would be interesting to see any trends in the distribution).
(v) There could also be an opportunity to extend such analysis beyond just ‘assurance of assessment’ to cover wider quality indicators, e.g. looking at groups such as scholarship students, special case entrants, nationality/country of origin, etc.
71.3 AGREED: AS to follow up with R. Daniel and CF on making further updates to the analysis – including specifically for R. Daniel to include unadjusted (raw) figures, as well as those adjusted for student ability.
Action: Chair ARG, R. Daniel, CF
71.4 NOTED: Discussion on potential uses of such analysis. The D&H and RSHR exam boards had been asked whether they might wish to use such results in the review of borderline candidates, e.g. to make allowances if they had chosen particularly difficult modules (those with a CI range falling completely below the overall mean GPA); but those boards did not feel that this was a particularly good idea. There was general support from ARG for the principle that both module organisers and moderators should review longitudinal data about grade distributions annually; and that exam boards could have visibility of the same data (although with some guidance advising against over-precise use). Further discussion on interpretation, e.g. how to evaluate whether particular modules or MSc courses were becoming harder.
71.5 AGREED: To recommend that such an analysis be produced as standard each year, incorporating any further suggestions on presentation/methodology and inclusion of longitudinal data – for use by Module Organisers and Moderators (as a reference tool to inform assessment-setting, marking guidelines and moderation), and for visibility by exam boards. Appropriate resources / staff time might need to be earmarked for this. Recommendation to be fed into QSC 05 December 2007, which would be considering a further-revised version of the same analysis for this year, and noted in ARG report.
Action: Chair ARG
72 Project lengths or page limits – Paper K
72.1 NOTED: AS gave background on past discussions about this topic, why it needed resolving (a particular issue for MSc Med Stats), and reasons for and against changing from a word count model to a page length model for MSc projects. How to count or allow for tables was the main area of contention.
72.2 NOTED: Work by AS and the TSO had compared the set of Med Stats projects from 2005-06 to a range of projects from other MScs. As demonstrated by the graph in the paper, Med Stats projects did not show any significantly different trends. The graph suggested that the existing word count model (recommended 7,000 words and maximum 10,000 words) could map across fairly well to a page length model of recommended 35-40 pages, maximum 50 pages.
72.3 NOTED: Comments that the existing word count model was felt to work well for all MScs except Med Stats, and therefore it would not be appropriate to discard the system already in place in favour of an untried new model across the board. Rather, it would seem more sensible to make a page limit model an “optional alternative” that individual MSc exam boards might choose to adopt.
72.4 NOTED: Further discussion on the pros and cons of these models, including (a) whether allowing different project criteria could impact on the parity of academic standards across courses; or (b) whether it was necessary or right to make different courses with different learning objectives all follow the same model for research projects. QSC had recently set up a project review group, which would also be looking at many relevant issues.
72.5 AGREED: To recommend that further to the existing regulations on project word counts, exam boards should have the option to use an alternative page length model instead. This alternative should be put forward as modelled by AS: with a standard 35-40 pages, to a maximum of 50 pages; following a standard format with Times New Roman 12-point font, 1-inch margins all round, 1.5 line-spacing; and excluding preliminary pages (up to the start of the Introduction) as well as References and Appendices etc. as currently. Recommendation to be noted in ARG report.
Action: Chair ARG
72.6 AGREED: Additionally, to recommend that the regulations on the standard word count option be slightly updated, to add a statement along the lines that all substantive content and numeric data should be electronically submitted in a form recognisable as text (e.g. as paragraphs or tables), rather than appearing as a ‘picture’. This should also be spelt out in student handbooks. Recommendation to be noted in ARG report.
Action: Chair ARG
73 QAA Code of Practice on Assessment – Paper L
73.1 NOTED: RD introduced this paper; review against the QAA code showed that LSHTM’s assessment practices all broadly align with it and may be considered robust. However, a number of points suggested by the QAA code would be potentially beneficial to consider further.
73.2 AGREED: Points for consideration to be extracted and included as an Annex to ARG report, recommending further follow-up.
Action: QO
74 Assessment Scales at Other HEIs – Paper M
74.1 NOTED: This limited review of marking practice at other institutions suggested that percentage marking schemes remained by far the most common approach. Linking in to past discussions about half-point grades, LSHTM’s current limited grading scale can be seen as something of an exception.
74.2 AGREED: AS to follow up with the group by email to resolve previous discussions on half-point grades.
Action: Chair ARG
75 Distinction Threshold for Distance Learning Courses – Paper N
75.1 NOTED: AS gave background on the consistently low rate of distinctions being awarded for DL courses, seen as arising primarily from the fact that all modules are counted in the GPA calculation towards the distinction – e.g. there is no option to ‘ignore the lowest two module grades’ as there is for face-to-face courses.
75.2 NOTED: MSc Infectious Diseases had done some specific analysis of their low rate of distinctions awarded, and suggested four types of approach that might be taken:
(i) Lowering the distinction threshold
(ii) Only counting the top four (advanced?) modules taken by a student
(iii) Setting ‘better’ questions
(iv) Guiding markers to award higher marks.
75.3 NOTED: Discussion on alternative models for calculating distinction GPAs. It was suggested that the existing “everything counts” model should remain in place for standard pass/fail calculations. However, distinctions might be calculated differently, by looking at the GPA from a specific subset of work (e.g. best advanced module grades plus project grade) from among the students who passed.
75.4 NOTED: AS suggested that the heart of the debate would be to better define “what constitutes a distinction for work undertaken through distance learning”.
75.5 NOTED: RD introduced some additional modelling analysis on results for DL MSc Epidemiology students who passed in 2005-06. The models showed that a distinction GPA calculation looking at students’ four best module results plus project result would produce a slight rise in distinctions; looking at the best four module results only would produce a much more significant rise. Within these alternative models, dropping the distinction threshold to 4.30 would produce a slight rise in the number of distinctions awarded; dropping it much further to 4.05 would produce a slight rise beyond that; but dropping it just slightly further to 4.0 (B+) produced a much more significant jump. Extensive discussion ensued.
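The alternative models in 75.3 and 75.5 above can be illustrated with a short sketch (the “best four modules, optionally plus project” rule and the candidate thresholds are from the minutes; the grades and function name are hypothetical):

```python
def distinction_gpa(module_grades, project_grade=None, best_n=4):
    """GPA over a student's best_n module grades, optionally plus project -
    the alternative distinction calculations discussed, as opposed to the
    'everything counts' pass/fail GPA."""
    counted = sorted(module_grades, reverse=True)[:best_n]
    if project_grade is not None:
        counted.append(project_grade)
    return sum(counted) / len(counted)

# Hypothetical passing student: eight module grades plus a project grade.
mods = [4.5, 4.5, 4.0, 4.0, 3.5, 3.5, 3.0, 3.0]
project = 4.0

with_project = distinction_gpa(mods, project)  # best four + project -> 4.2
modules_only = distinction_gpa(mods)           # best four only      -> 4.25

# Against the candidate thresholds discussed (4.30, 4.05, 4.0), this student
# would gain a distinction under either lower threshold, but not under 4.30.
```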
75.6 NOTED: General support from the group that there should be no change to the existing pass/fail calculation for DL; but it would be appropriate to amend the distinction threshold and use a different GPA calculation for distinctions, especially to make DL distinction calculations more equivalent to in-house (e.g. allowing lowest-scoring modules to be excluded from the calculations). Co-moderation of DL and face-to-face modules would also point up the appropriateness of such a scheme.
75.7 AGREED: AS to co-ordinate follow-up on this matter, aiming to make a recommendation in ARG report if possible, or otherwise referring this matter on to QSC for them to resolve.
Action: Chair ARG
76 Audit of Recommendations from last ARG – Paper O
76.1 NOTED: The vast majority of recommendations from the last ARG (in 2002-03) had been very successfully implemented; but in some cases follow-up action/resolution would be helpful.
76.2 AGREED: AS and RD to extract key points for follow-up, resolve where possible, and include any outstanding points in ARG report (e.g. as an Annex).
Action: Chair ARG, QO
77 Date and time of next meeting
77.1 AGREED: This would be the last formal meeting of ARG. However, a specific meeting of QSC, to consider ARG’s report and any related assessment matters, would be arranged subsequently by email to take place early in the spring term. All ARG members would be invited.
Action: Chair ARG, QO
AS/RD
29-Nov-07