Joint funding bodies’ review of research assessment
Response from Department of Geography and Topographic Science,
University of Glasgow
We would like to take the opportunity to provide some feedback following the ‘Invitation to Contribute’ to the
review of the RAE. We have also fed back our thoughts to both the University of Glasgow and the Royal
Geographical Society with the Institute of British Geographers.
The ‘philosophical’ question
The review asks us initially to consider the basic ‘philosophical’ question of “what is meant by quality in
research?”
Our view, which shapes our further responses below, is that quality here should indeed be taken first and
foremost as ‘intellectual excellence’. It should entail highly informed judgements being made about the
conceptual and substantive ‘contribution’ made by the research relative to the current state of the relevant
field of inquiry. It should take seriously: the theoretical and methodological innovation; the sophistication of
the research questions asked and the reasoning pursued; the rigour of the analysis and interpretation of
(where appropriate) empirical data; the clarity of presentation; and also the potential applicability of the
claims and findings both to further research and (where appropriate) to 'real world' issues.
To our mind, such dimensions of quality cannot adequately be judged using supposedly objective indicators,
such as citation indices, the sizes of the research grants that fund the research, the numbers of RAs
employed to do a piece of research, etc. These indicators can provide some guidance, but at bottom there is
no substitute for highly informed assessors reading the published output with an eye to the kinds of quality
criteria listed above. To envisage any other process would be to reduce the exercise to a mechanical audit
that is wholly inappropriate to the evaluation of research quality, most likely leading to serious flaws and
injustices in the whole assessment process.
We therefore most definitely favour retaining a system of expert review for the RAE. The other systems
strike us as inappropriate, unworkable and/or prone to increasing rather than reducing the workload on already
stretched academics, as we elaborate below.
We realise that our view may reflect the specificity of the Geography experience, in that by and large the
discipline in the UK was happy with the way in which the Geography RAE Panel conducted its business and
arrived at its conclusions. Provided that the discipline can ‘trust’ its Panel, and that there is a working
relationship between the Panel and the Departments, then we would argue that this remains far and away
the best arrangement. What the Geography RAE Panel did – which maybe other panels did not – was to
make it crystal-clear to the discipline that the focus would indeed be on the quality of the submitted
publications, and that there would be a detailed reading of at least two submitted publications from every
academic geographer returned as ‘research active’. This massively bolstered the discipline’s confidence in
the exercise, and created a sense of trust because the feeling was that the right things were being assessed
through the right process. The equally clear message that other indicators would be used in a secondary
fashion also played well in this respect.
Systems of review
To reiterate, we favour expert review, and we now answer the specific questions asked about this system of
assessment:
a. The assessment should combine prospective and retrospective materials (statements of future
research strategy combined with accounts of past achievements and present structures, together
with the record of submitted publications).
b. Our main point is that the basic data on which the assessors should work must be the submitted
publications, in which case we accept that subjective assessments will be made. Any objective data
– to do with research grants, numbers of PGs, etc. – should be used in a secondary fashion, to
inform the assessment process and to confirm, if needed, the judgements arrived at on the basis of
reading the submitted publications. We would particularly underline the importance of not relying too
heavily on evidence of the volume of research grant income, which really cannot be taken as any
great indicator of research quality: many of the very best research projects are run on limited funds.
We would also caution against any simplistic use of citations and other reputational data, which may
be seriously flawed and contain many inherent weaknesses and 'biases'.
c. Our preference would be for the level of assessment to be the 'Department' or entity making a
submission. This being said, clearer indications in the feedback about the relative achievements of
different research groups in a Department would be helpful.
d. We do not feel there to be any sensible alternative to organising the assessment around subjects or
thematic areas. We suspect that any other mode of organisation would risk losing the support of the
academic research community, even if it might suit the managers of HEIs and funding councils.
e. The major strength of expert review is precisely that it can assess quality, whereas the other
proposed systems cannot (at least not fairly).
We consider that the use of an algorithm system would be wholly inappropriate and could not begin to make
proper judgements about quality. All of the metrics mentioned in the review document strike us as
problematic and open to being wrongly collected and falsely interpreted; and it seems strange (at least in
relation to Geography) that 'measures of financial sustainability' should be considered to have anything to
do with research quality.
We consider that there might be merits in self-assessment, but that the danger would be of Departments
becoming so fixated on this goal that workloads would expand exponentially. At the moment the RAE
provides clear parameters for what needs to be done: responding to these does take up a lot of staff time,
but trying to set in place a full internal process – given the inevitable ‘paranoia’ that Departments will feel
about needing to do this the very best that they can – would most likely lead to an unhelpful escalation of
internal work. At present, there is a measure of self-assessment, in what is claimed in RA5 and RA6, and
also in decisions about who to submit as 'research-active', and we reckon this to be a suitable level of self-assessment that should not be exceeded. Moreover, there would still need to be some external assessment
of our respective self-assessments, so it is unclear what would really be gained by going down this route.
We consider that historical ratings are also inappropriate, as such a measure risks rigidifying the country’s
research profile, giving little encouragement to HEIs and Departments currently doing less well, and probably
leading to precisely that complacency in the currently top-rated HEIs and Departments that the RAE was
initially designed (at least in part) to combat. We doubt that any subsidiary system of assessing ‘sharp’
alterations in the research quality of particular units would be able fully to counter the inertia of historical
ratings. Moreover, such a subsidiary system would demand there to be external assessment, probably a
measure of expert review, so, again, it is unclear what would be gained by going down this route.
Cross-cutting themes
Our broad view here is that we favour retaining the RAE in something like its present form. This is not to say
that we do not have reservations about the exercise even in its present form – about the inappropriateness
of importing simplistic notions of ‘competition’ into an environment that should be marked more by cooperative exchange; about the dangers of eroding older standards of scholarship, notably with respect to
longer-term serious projects which may not yield published outcomes on a five-year cycle; about the
negative effects on the mental health and morale of many academics caused by RAE pressures – but such
matters appear to be (perhaps regrettably) beyond the purview of the current review.
Taking each of the specific ‘cross-cutting’ questions in turn:
What should/could an assessment of the research base be used for?
a) Setting future funding priorities for national funding of a discipline (Geography) as a whole. This would
in part use the exercise to identify those Departments, etc. that have earned a 'reward' through their
high-quality research endeavours, but such an approach must be done in conjunction with:
b) Identifying priority areas for more resources: ie. identifying subject areas within each discipline that need
an influx of personnel and funds to improve in the future (ie. the exercise could be constructive in
assisting developments in certain key directions, although there would have to be great care shown in
not simply prioritising ‘fashionable’ subject areas over ongoing sound scholarship in all areas).
c) Ensuring that inappropriate uses of ratings are avoided (eg. explicit definitions must be made about how
and where ratings are used).
d) Ensuring that combined use of assessment data by Funding Councils and Research Councils avoids
any of the pitfalls that arise from the above (but noting that anything reducing the assessment burden
would also be welcomed).
How often should research be assessed? Should it be on a rolling basis?
a) Five year assessment periods are satisfactory. They are long enough to reflect progress and attainment
of goals in particular research programmes.
b) As some research programmes emerge and evolve within assessment periods, some element of rolling
assessment is to be welcomed.
What is excellence in research? (see also our initial response above)
Excellent research is a significant contribution to the advancement of knowledge in a particular subject
area of a discipline, or possibly for a discipline as a whole, and should normally, but not exclusively, be
published in either substantial peer-reviewed monographs or highly cited international peer-reviewed
journals. Occasionally such work will appear in edited volumes or symposium proceedings.
It would seem that at present research creativity and methodological excellence are more highly valued
than applicability, but there may be good reasons for suggesting that excellence in the applications of
traditional principles/techniques should receive an equal rating.
Should research assessment determine the proportion of the available funding directed towards each
subject?
If the implication here is that a subject’s (discipline’s) overall – pooled, averaged – RAE rating should
determine the total funding allocation to that subject, then we would be worried about the possibility of
Subject Panels inflating grades to ensure funding. Of all the proposed criteria by which a subject’s overall
funding could be determined, the historical distribution appears to be the only appropriate procedure
because it acknowledges established, and by and large probably accepted, differences in approach and
practice within the different subjects.
Should each institution be assessed in the same way?
a) This seems inappropriate when comparing traditional research-led institutions with those that emphasise
a vocational orientation.
b) The proposed “ladder of improvement” would probably be a good idea in this context.
Should each subject be assessed in the same way?
We think that the same basic assessment format should be utilised for every subject (discipline), but that
variations in the detailed practices of the different Subject Panels should be allowed: ie. we would be very
disappointed if, in any future exercise, the Geography RAE Panel was not permitted the flexibility that it
had in 2001 (which, as explained above, we valued in Glasgow and felt more broadly to have helped in
creating a relationship of trust between the Departments and the Panel).
How much discretion should institutions have in putting together their submissions?
We think that the same basic format for institutional submissions should be expected from all institutions,
albeit perhaps with some possibility for broader institutional statements to be made (perhaps to allow
certain institutions to make cases about the special, maybe more vocational rather than research-led,
aspects of their ‘mission’). Assuming that there is still to be a subject (discipline) element in an
institutional submission, at the Departmental level we would certainly want to have a substantial input to
what is claimed about us (ie. about Geography at Glasgow) and in terms of staff members who are and
are not returned as ‘research active’ (albeit appreciating the need to take appropriate advice within the
institution). We recognise that there can be benefits in institutions grouping cognate disciplines in
particular ways for RAE purposes, as already doubtless happens, but such groupings must not ‘water
down’ our research groupings and strategy statements to the extent that we are unable to mould our
research shape in accordance with current research directions in our own subject (discipline).
How can a research assessment process be designed to support equality of treatment for all groups of staff
in HE?
There are certainly risks in an RAE-type process of some systematic discrimination occurring, but we are
confident that in Geography / Glasgow no such discrimination has occurred along the lines of gender,
ethnicity, sexual orientation, age, etc. This being said, we are concerned about possibilities for
discrimination against individuals with physical and mental health problems who are unable to produce as
much research as others may be able to do. We also suspect that the 'disability' aspects of the RAE
have not really been looked at as yet.
Priorities: what are the most important features of an assessment process?
Here we will simply reiterate what we have claimed already: we want to emphasise again our view that
the RAE must fundamentally be based on the quality of research output, and must not revert to a
'bean-counting' exercise (ie. of research grant income) or a count of citations in only what are reckoned
to be the best journals. We welcome the way in which the Geography RAE Panel (2001) explicitly read and
assessed the quality of the published outputs, against clearly articulated criteria for each RAE grade.