Centre for Excellence in Preparing for Academic Practice
Final Report of the Cambridge-led Network Research Project
Evaluation of researcher support programmes: assessment within development events, and the attitudes and experiences of early career academics (ECAs) towards academic careers provision
Researchers: Martin Gough (UCL & Kent), Emma Williams (Cambridge), Frederico Matos
(Cambridge & UCL), Jon Turner (Edinburgh)
Executive Summary
Purposive evaluation of courses and other learning activities is important for judging how useful such provision is, and this applies equally to provision that prepares participants in some way for academic practice as part of researcher support “training” programmes. The Vitae Rugby Team Impact Framework (2008; after Kirkpatrick & Kirkpatrick 2006) provides an appropriate reference point for analysing the effectiveness of development events. At the most general level (inspired by Kent & Guzkowska 1998), assessment can gauge the attainment of participants (impact levels 2 and 3), constituting a main criterion for evaluation.

The project began by investigating broadly and, after clarification, collapsed research questions (1) and (2) into (3), thereby refocusing on development events that incorporate assessment but where the programme does not lead to an award and carries no accreditation (cf. Greenwood & Wilson 2004). Gauging the views of participants was the main focus of the investigation, but the project also solicited comment from providers. Overall, people in this domain hold differing views, complicated by diverse contextual factors, with contrasting themes emerging for and against the use of assessment.
Research Questions
1. What range of methods and techniques of evaluation is in use by providers of
development events?
2. How effective are the methods and techniques in making and supporting judgements about
the usefulness of development events?
3. To what extent can assessment of the attainment of participants be deployed in
development events?
4. What does comparative analysis of the CROS and PRES surveys tell us about means of
evaluating the usefulness of development events?
5. What does comparative analysis of the CROS and PRES surveys tell us about other
attitudes and experiences of ECAs towards provision, including matches and mismatches
between their expectations and what HEIs actually offer?
6. What other avenues of enquiry would this investigation suggest?
Methods
The exploratory project comprised two strands.
Strand 1: Assessment & Evaluation Practices
• Review of literature and resources. This suggested interesting examples of evaluation practice and usage of assessment, about which we were able to enquire further as part of the investigation.
• Questionnaires and interviews with development event providers and participants. There was an initial information-gathering process with providers, in order to follow up on details of interesting practice. We conducted face-to-face semi-structured ‘focus group’ discussions or individual interviews with ECAs (43 individuals) in the context of institution-based development events (observed in part by us) in which they were participating. These participants included new lecturers and staff, and postgraduate researchers, variously on a research abstract writing workshop and on teacher development courses not essentially part of an award-bearing programme, across UCL (10), Buckinghamshire New (8), Kent (6) and Surrey (19) Universities. We enquired about how well the development event was working for them and what they thought about the utility of the actual or potential assessment of their performance. We followed up to obtain the views of course participants individually via email questionnaire. We engineered a distinct variant on this, the Intervention: a development event that would otherwise have run without assessment would, for its next presentation, include an additional, more substantive assessment measure. This again investigated how well the development event works for participants, now with a more direct comparative element between the respective presentations (a minimal analysis sketch follows this list). The UCL-based investigation mimicked this format, in this case running the same course (of 5 sessions) twice in parallel, one run with an additional assignment task. We also analysed written and other physical output in collaborating institutions with respect to the provision.
• Seminars/Workshops. We organised a number of these to serve different purposes: development and data-gathering as well as dissemination. We ran a workshop for postgraduates on research writing (at the National Postgraduate Committee Conference on 14-16 August 2008) which included group discussion of the issues raised by the project questions. Three events were collaborations with the SRHE Postgraduate Issues Network. Two of those (21 October 2008 and 2 June 2009) were also joint events with the SRHE Newer Researchers Network: 15 ECAs (from a further 12 HEIs), each researching an aspect of higher education as a field, presented their work and received structured formative and summative feedback from the audience. Again, we engaged them and the audience in situ in group discussion, following a paper questionnaire, and followed up to obtain their views individually via email. Presenters at the first event also went on to complete a written assignment on presenting, which we marked formatively and summatively. The third of these events (30 April 2009) was jointly organised with the SRHE Academic Practice Network and Vitae (Yorkshire & North-East Hub). The purpose of this, and of three other presentations (at Teesside University’s ‘Enhancing the Research Culture’ conference on 3 June 2009, at the Vitae annual conference on 8-9 September 2009, and at the third international conference organised by the Centre for Excellence in Preparing for Academic Practice on 13-15 December 2009), was partly to provide an update on progress in the project, but mainly to involve actual and potential institution-based participants and providers in imparting their disinterested wisdom on the issues arising from the fieldwork to date, encouraged by being in a setting outside their institutional context.
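As an illustration of the comparative element in the Intervention, the following is a minimal sketch, in Python, of how feedback from the two parallel runs of a course might be compared. It assumes, purely hypothetically, that both runs returned the same Likert-style rating of how well the course worked for participants; the figures and variable names below are invented for illustration and are not project data.

    # Hypothetical two-group comparison for the Intervention design.
    # Ratings are invented Likert responses (1-5), not project data.
    from statistics import mean
    from scipy.stats import mannwhitneyu

    with_assignment = [4, 5, 4, 3, 5, 4]     # run with the additional task
    without_assignment = [3, 4, 3, 4, 3, 3]  # parallel run without it

    # Ratings are ordinal, so a rank-based test is preferable to a t-test.
    stat, p = mannwhitneyu(with_assignment, without_assignment,
                           alternative="two-sided")
    print(f"means: {mean(with_assignment):.2f} vs {mean(without_assignment):.2f}")
    print(f"Mann-Whitney U = {stat}, p = {p:.3f}")

With groups of this size any such test is indicative at best; the point is only that the parallel-run design yields directly comparable responses from assessed and non-assessed participants.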
Overall, in addition to the 59 ECAs engaged through the project’s thorough iterative investigations, the project engaged directly but informally, as participants in group discussions, at least a further 20 ECAs; the ECAs engaged overall represent at least 26 institutions between them. The project also engaged directly with development event providers across 22 HEIs and sector NGOs (excluding CETL Network member bodies and the institutions of others who attended the Vitae and CETL conference sessions for the project). There are separate documents, including the fieldwork instruments used with ECAs (the information and consent form, the interview/discussion topic guide, questionnaires and a presentation assessment proforma), PowerPoint presentations for the seminars/workshops, and the Autumn 2008 project interim report.
Strand 2: Comparative analysis of the CROS and PRES survey data
Themes present in the PRES data were matched as far as possible to the CROS data in the Cambridge surveys, looking at the theme of the expectations and experiences of ECAs of development events, in order to identify areas of potential information for further investigation in partner HEIs’ surveys. The aim was to take further the development of means of evaluating the usefulness of development events, in turn informing the design of activities in Strand 1.
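To make the matching concrete, the following is a minimal sketch, in Python, of the kind of theme-level comparison involved. The theme names, item codes and response values are all invented for illustration; the actual CROS and PRES instruments use their own item codes, and the real analysis was considerably richer than pooled means.

    # Hypothetical theme-matching across two survey datasets.
    import pandas as pd

    # Invented mapping from shared themes to item codes in each survey.
    THEME_MAP = {
        "development opportunities": {"cros": ["C12", "C13"], "pres": ["P7"]},
        "usefulness of provision":   {"cros": ["C21"],        "pres": ["P9", "P10"]},
    }

    def theme_means(responses, survey):
        """Mean agreement per theme, pooled over that survey's matched items."""
        return pd.Series({
            theme: responses[items[survey]].to_numpy().mean()
            for theme, items in THEME_MAP.items()
        })

    # Tiny synthetic stand-ins for real survey exports (Likert 1-5).
    cros = pd.DataFrame({"C12": [4, 3, 5], "C13": [2, 4, 4], "C21": [3, 3, 2]})
    pres = pd.DataFrame({"P7": [4, 4, 5], "P9": [2, 3, 3], "P10": [3, 4, 2]})

    comparison = pd.DataFrame({"CROS": theme_means(cros, "cros"),
                               "PRES": theme_means(pres, "pres")})
    print(comparison)  # side-by-side theme means flag areas for follow-up

Large gaps between the two columns for a given theme would point to areas worth following up in partner HEIs’ own surveys.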
Findings
The survey analysis found the common perception among ECAs to be that development is ‘done to them’ (by, and presumably for, management) rather than by them and for them. Fieldwork analysis suggested that participating is nonetheless seen by some ECAs in certain contexts as valuable in its own right for learning. These activities are perceived to be valuable regardless of whether or not they are assessed, even by those on the teacher development programmes, which are normally structured to assess attainment as a necessary condition of leading to an award. There was a suspicion that assignments are designed with accreditation and awards in mind rather than for immediate suitability. Where assessment is seen as important, the concern of this group is that the practice of teaching should be judged, perhaps in place of some of the more abstractly theoretical existing assignments. Likewise, one view expressed within this group is that there would be little point in completing a written essay-like task on ‘time management’ as part of developing that skill area (of the Research Councils framework). Otherwise, participants at development events which are not normally complemented by an assignment task are largely open to, rather than sceptical about, the value of being assessed.
Analysis of the open-text fields of CROS is also consistent with the claim that some research staff are looking for a richer, deeper learning experience if they are to be engaged properly at all in development provision, and assessment could be a means to precipitate that. There is certainly support amongst interviewees for celebrating more explicitly what is largely, where it currently happens, an ‘implicit assessment syndrome’: qualitative formative feedback, including from peers, within bounded development events. This would be a prime illustration of assessment for learning, which facilitates the learning happening, as opposed to assessment of learning, which determines the level attained; the latter would be required in graded form where provision also leads to an award. The Intervention brought out more strongly that the group asked to do an additional assignment outside the class sessions appreciated the opportunity more than did the group not set this assignment.
There is a strong view that there would be little point in generically assessing the written output from a research writing workshop, given that the researcher’s supervisor or mentor has that role on an ongoing basis. By contrast, the abstract writing workshop deployed workable and productive criteria for judging attainment in this more specific writing context. Only a small minority voluntarily summoned the effort to undertake a post-workshop assignment task for this event, with just extra practice and feedback on offer if they did. Nonetheless, the participants in the first presentation workshop engaged in their subsequent written task with some relish.
There is plenty of support from some for grading assignments, not just to serve as summative assessment but also as formatively useful knowledge. From others, however, there is a stance, coming through most strongly with some of the presentation workshop participants, that this would be insensitive; it is embodied in what we might classify as the ‘romantic’ narrative emerging from the project conversations. The ‘romantic’ view, in sum, is that ECAs, as full or apprentice professionals, are, or ought to be, voluntarily submitting themselves to the activity for the love of knowledge and learning, not to be then reduced to one of the two crude categories of ‘good enough’ or ‘failure’.
There are separate documents for the literature and resources review, for the surveys analysis summary, for detailed summaries of ECAs’ responses, and for a report on the contributions of providers (Gough 2009).
Conclusions and Implications
By way of answering research question (6), the exploratory nature of the project has allowed for clarification of the questions and of fruitful lines of enquiry. In particular, the model of the Intervention would usefully form the basis of a much larger, systematically conducted follow-up project on a comprehensive national or international scale. Thus far, two contrasting themes emerge. Even though we may treat the views of novice academics, concerned as they are with learning the practical basics, with a particularly strong critical stance, we may still heed the caveat that the tail of summative assessment, where that is defined according to the needs of accredited or award-bearing provision, should not wag the dog of learning through professionally committed, purposive participation. At the same time, there is a strong indication that the selective integration of tasks into non-award-bearing development provision, as a means by which the performance and competence of participants may be assessed, would make provision more robust, enhance participant learning, and so constitute a fruitful investment of resources.
References
Greenwood, M., & Wilson, P. (2004). Recognising and Recording Progress and Achievement in Non-Accredited Learning. Leicester: NIACE.
Gough, M. (2009). Evaluating the Impact of Newer Researcher Training & Development: Which Direction Forward? International Journal of Researcher Development, 1(2).
Kent, I., & Guzkowska, M. (1998). Developing key skills through assessment. Chapter 3 in P. Cryer (Ed.), Developing Postgraduates’ Key Skills (Issues in Postgraduate Supervision, Teaching and Management, series 1, no. 3). London: Society for Research into Higher Education & THES.
Kirkpatrick, D. L., & Kirkpatrick, J. D. (2006). Evaluating Training Programs: The Four Levels (3rd ed.). San Francisco: Berrett-Koehler.
Vitae Rugby Team. (2008, September). The Rugby Team Impact Framework. Cambridge: CRAC Ltd. http://www.vitae.ac.uk/rugbyteam