Notes of the main points discussed during the afternoon

UCML Plenary Meeting Seminar
REF2014 and beyond
03/07/15, Europe House, London
1. UCML survey of members on experiences of the REF2014 for their UoAs
Naomi Segal (UCML Vice Chair, Research) reported on the outcomes of the REF survey (see PPT slides
posted on the website in advance of the meeting and also sent to the speakers in advance). Many thanks to
all those who took the time to complete the survey and provide useful information and feedback and to
Teresa McKinnon for technical help. Main points:
 QA.2 Comparison of REF2014 with RAE2008. Many felt this had been a more onerous process this
time around, especially regarding Impact, but also better as more professional (good to be in large units).
 QA.3 Concerns about funding – whether it will reach the dept/school, or will be cut anyway
(winner-takes-all arrangement).
 QB.1 Whether the new Sub-panel structure was beneficial – equal yes and no. Yes: compatible with
School structure. No: departmental (language-unit) identities invisible. Call to include East Asian
studies in UoA 28.
 QB.3 Subdivision – mostly done on the basis of internal incompatibility in Celtic studies and Linguistics.
 QC.1 Was there an HEI steer towards particular outputs? – Yes: monographs and journal articles (the
latter preferred over book chapters). No: no guidance or unhelpful guidance. Effectively a stress on
traditional outputs – in contrast to more radical outputs.
 QC.2 Double-weighting – generally under-used because of perceived risk and unclear guidance.
 QC.3 Interdisciplinarity – generally well catered for, but many felt a lack of expertise on the sub-panel.
Cross-referral occurred either within or between Panels.
 QD.1 Metrics concerns – concern about prioritising numbers over qualitative evaluation.
Non-English-language journals are problematic for metrics.
 QD.2 Open Access submission for next REF – most HEIs providing guidance, some none.
 QE.1 Satisfaction and morale – high even when doing well in relative terms; low if the outcome was bad;
mixed where there was a lack of understanding of outcomes ('we have done well but we don't know why')
or raised pressure for the next REF.
 Example of a 'smaller language', Italian studies: the RAE reflected a strong sense of identity and successes;
with the REF panel structure, a feeling of being rather invisible, and unhelpful feedback.
 QE.2 Feedback to HEFCE: 8 slides on a variety of areas. Metrics and weighting; Impact; Preparation and
process; Panel structure; Feedback provision; Game-playing; Criteria provided; Overall views; Other
comments.
2. Simon Green, Area Studies panel
 Amalgamated panel from RAE2008. Found lots of commonality between different fields of AS, so the
panel worked well. A reduction in submissions was noted, which potentially reflected many Universities'
view, rightly or wrongly, that bigger individual submissions were better.
 Process – very sophisticated and an improvement on 2008. Online tools and metadata-creation systems
facilitated interrogation of submissions. In AS, two assessors read almost everything submitted.
Anecdotally, there was a high degree of consensus between assessors over the grade individual
items received. Outputs were not judged by where they were published: originality, significance and rigour
were the only measures. As a panel, we therefore have a high level of confidence in the results.
 AS received only very few applications for double weighting of monographs. The qualified conditions
under which it could be used led to a conservative approach to it.
 Environment and impact – data useful, and the template likewise for environment. Impact case studies
useful to see what high-grade impact looks like (created over time and not the same as dissemination).

 Feedback to HEIs might look a bit bland, but the panel did not want to give institutions the opportunity
to draw inferences from it about the performance of individual elements of a submission.
Q & A:
a) What did you do about submissions which you felt could have gone to other panels?
We didn't make this judgement and just assessed the outputs before us. These came from a wide range
of disciplinary backgrounds, including mainstream quantitative political science, as well as economics,
anthropology, sociology and literature.
b) Should individual scores be circulated?
Don’t agree as this would quickly become a method of performance assessment and open a legal can of
worms. It would also put panel members in an impossible position when scoring.
c) Is double marking in AS unusual? (If practices are different across panels, that is of concern.)
Combined answer from all panellists: many sub-panels double-read everything, but there was variation in
practice (UoA 28 double-read c. 25%).
d) Was there a contrast between language and non-language AS areas?
Not sure that is possible to judge, given the variety of material. We certainly assessed material in a very
wide range of languages.
e) What correlation is there between size of submission and success (was a large submission more
successful)?
Not necessarily.
f) Research environment and quality of output – did they correlate?
Haven’t done the analysis on this specific question, but they were assessed in completely different ways:
output profiles were the result of a cumulative process which lasted several months, whereas
environment sections were scored during panel meetings in sub-groups (always taking into account
conflicts of interest).
Simon Green had to leave the meeting at this time and was thanked for his helpful input.
3. Steven Hill, Head of Research Policy, HEFCE
PPT Slides to be supplied for the website following the meeting. Many thanks to UCML for a useful
survey which HEFCE will take into account as feedback on the exercise. Main points made:
 Compared with RAE, REF2014 has been an improvement for ML and linguistics. Decrease in
submissions in Area Studies and ML.
 Impact vs outputs - positive relationship between these two (with a few exceptions where a high
score on output didn't correlate with high score on impact).
 Funding allocation (England) for QR – total funding increased as STEM protection removed. ML and
AS - small increase in ML funding but drop in AS funding relating to drop in numbers of staff
submitted. The Funding Bodies do not seek to control modes of funding distribution within
institutions.
 Impact case studies:
o Great resource for understanding impact of research.
o Database of case studies (see HEFCE REF website) put together and analysed by KCL and
Digital Science. Text mining approach to interrogate key topics in impact. Mapped topics
onto UoAs. Results show that there are types of impact commonly associated with
particular panels. Some common across all areas e.g. public policy. Also identified the fields
of research described in the underpinning research section of case studies and mapped onto
36 units of assessment and impact topics. Impact = multidisciplinary.
o ML and Ling – most common topics e.g. film, print media/publishing, literature, public
engagement, regional languages of the British Isles. Fields of research – historical studies,
cultural studies, linguistics, language studies. Top 3 historical, cultural, literary studies
common across whole of Main Panel D.
o Language studies and linguistics are also found as underpinning research in other panels, e.g.
represented in psychology.
 More outcomes of REF2014 evaluation process will be forthcoming. Three phases:
o Evidence and evaluation
o Informal dialogue
o Formal consultation (autumn 2015)
 Key issues under consideration (based on feedback and evaluation reports):
o Panel recruitment (not representative/diverse enough)
o Staff selection (universities choose which staff to submit)
o Unit of Assessment structure (some areas which need review e.g. Engineering)
o Interdisciplinary research (ensure it is not discriminated against)
o Impact (case studies work well but the mechanics of this need reviewing)
o Environment and metrics (the potential to enhance peer review with additional quantitative
indicators)
4. Kersti Börjars (Chair) and Charles Forsdick (Deputy Chair) of sub-panel 28 ML and Ling
(PPT slides supplied for website after the meeting)
 At the assessment stage, the panel was expanded to consist of 29.5 academics (0.5 shared with
sub-panel 25) and 5 user members to assess impact. 5 specialist advisors were appointed for languages
not represented, or where existing expertise could not be used because of conflicts of interest.
 One-third reduction in submitted UoAs from 2008 due to the change in panel structure. Largest
submission 116.75 and smallest 3.
 17.4 % were early career researchers [ECRs].
 Encouraging that there was a decrease in the average number of outputs per person, since this can be
seen as evidence that the new approach to equality and diversity worked.
 Staff submitted with special circumstances, especially ECRs, did no less well than anyone else.
 Results - 70% with 4*/3* overall
 All types of outputs were submitted to sub-panel 28, though there was a preponderance of journal
articles and a high number of book chapters.
 Quality notably improved since 2008
 204 requests for double weighting (as opposed to 800+ in History). Difference in approach to this across
institutions (attitude to risk).
 Impact:
o KCL initial evaluation shows media most cited in Main Panel D
o Time-lag between publication and impact longest for Main Panel D
o Perception of general concern in the academic community at the outset that Main Panel D would
underperform compared to other panels. In fact, great breadth of impact – communities, public
bodies, charities, NGOs, general public. Also considerable international impact. Interesting cases
of serendipity in addition to those of carefully planned impact.
 Environment
o Good evidence of sustainability (high number of ECRs). Good grant income, from a wider range
of sources, and increase in PGRs.
o Some under-emphasis of strategy: often just a catalogue of individual plans.
o Some improvement evident in support for staff.
 Two further issues:
o Practice-led and creative outputs – contrary to assumption, translation and creative writing are
permissible, and there are various initiatives – e.g. by Nick Harrison at KCL – to increase their
inclusion.
o Absence of public policy from Main panel D - need to explore ways to take research to policy
makers.
5. Q&A for Panel of Steven Hill, Kersti Börjars and Charles Forsdick, chaired by Naomi Segal
a) Will there be descriptors for creative writing and translation to help support future submissions? If
creative writing is in English, should it go to the English panel?
The AHRC doesn't currently include translation among its indicative outputs from funded research, but it
has always been permissible in the REF. There probably won't be separate criteria. There has been a
tendency to focus too much on what 'HEFCE thinks about' things, but it is not HEFCE but we ourselves
(peers) who read the outputs. For creative writing there was additional calibration with the English sub-panel.
b) Did it work well to have linguistics included with ML?
There is a variety of disciplines included and language is the common factor. Panel calibrated
understanding of quality levels by reading submissions across fields of study which showed that there
is a shared sense of quality. There is a sense that linguistics gets more and bigger grants, from a
wider range of possible sources, and finds it easier to recruit PhD students, but these factors were
taken into account when assessing environments.
c) HEIs still steer towards types of outputs which do not represent the range of outputs produced.
Location and ranking of journals is not a key factor.
d) Will all outputs eventually need to be submitted by open access route?
The HEFCE statement on this is: ‘For the next exercise, articles in journals and conference
proceedings will need to be published in open access. Institutions with other types of outputs in open
access can be rewarded for this through the environment assessment.’
e) Will corpora be taken into account in future?
They already were this time.
f) Is there a likelihood in future REF of outputs being submitted collectively by units rather than
individually, as now?
Yes it is on the agenda.
g) How is collectivisation of outputs to be handled (in parallel to that on environment and impact)?
Might UoAs have to submit a minimum of one item per FTE but otherwise some staff returned might
include more than the current maximum of four items, up to the total permitted for the unit?
HEFCE is holding a workshop about staff selection. A possible minimum requirement of one output per
staff member, and a total number of outputs within which high- and low-output staff can be included.
h) Should make sure that REF doesn't dictate what we do in terms of research.
Less traditional outputs are fine if they can be assessed for originality, significance and rigour.
i) How do we stop institutions second-guessing what we are going to be doing, and hence downgrading
edited works/chapters in books?
This is not a panel or HEFCE driver but is being dictated by HEIs, who need to be better informed. It
is possible that short cuts are sometimes taken in determining which outputs are favoured, for
convenience's sake, but this is neither desirable nor the intention of HEFCE/REF processes.
j) Lag between output and impact (the half-life is longer) in arts and humanities. A problem especially
when staff move institution, as impact will stay with the institution where the research was carried out.
This is an excellent area to explore in a future REF. Portability has arguments on both sides, as impact is
institutional rather than personal (but IPR is a problem). A 20-year limit was imposed as it is difficult to
track impact over a longer time. Additional impact accrued on existing cases submitted will be an issue
for the next REF.
k) Cost and burden of REF - how will factors such as staff time be included?
Things which can be monetised will be included but factors such as morale cannot be included.
Headlines are that it cost more than the RAE, but as a proportion of the funding allocated it is relatively
low cost, and more efficient than other research allocation exercises. Important to keep QR income rather
than putting it into grants as happens in other countries, although N.B. there are other countries
considering instituting a system like ours. Humanities need to get used to different ways of assessing
research so that we don't lose morale (scientists are more used to applying for many grants and failing
with some).
l) East Asian studies – where should this have been submitted, as some people didn't submit at all
owing to confusion?
This issue holds also for other languages, like Middle Eastern
languages. It is unlikely that the panel structure could ever reflect all institutional structures. A
submission must be based on institutional structures, i.e. on a research environment with a joint
structure for staff support, facilities, support for PhD students etc.; this submits as a unit to the most
appropriate sub-panel. At sub-panel level, there will always be a system for cross-referring outputs to
ensure that each output is read by someone with appropriate expertise. Thus in this case, where East
Asian studies are included in a broader school of languages, they could have been included within its
submission. There will need to be clearer guidance on this in future exercises.
m) Co-authored outputs – how might they be handled?
Not a problem for panel. Probably more common in linguistics. As long as there was a plausible claim
that the submitting person had contributed substantially to the article, it was treated in the same way
as a single authored article. The only exception was when the co-authors were in the same UoA, in
which case the REF guidelines were followed.
n) Edited volumes - submitted by editor as editor, not just for article/s – how is this evaluated?
Not about all the work done in editing, but about its relevance to the criteria (e.g. convening a research
group for a conference and then using the outcomes in an edited volume; quality of the introduction and
the editor's own authored contribution). As always, the issue is originality, rigour etc.
o) What can be done about colleagues not submitted because they didn't fit the narrative, and this then
leading to contracts being changed (e.g. to teaching-only)?
This was not within the control of a REF sub-panel, but there was general agreement and concern
that REF results were unfortunately being used in some cases ….
p) We have heard of outputs in languages other than English being excluded. This is relevant to other
areas of university life e.g. promotion.
Outputs in other languages were welcomed by the panel, although other sub-panels required a
summary of the content in English.
The afternoon workshop was closed with warm thanks to all the speakers. It was particularly agreed that
Steven Hill’s participation had exemplified how HEFCE is receptive to the experiences of the sector.