Brief No: RCB05
July 2001

EVALUATING THE BENEFITS OF LIFELONG LEARNING: A FRAMEWORK

Ian Plewis and John Preston
Centre for Research on the Wider Benefits of Learning

Introduction

This report sets out options for the evaluation of current and proposed DfES Lifelong Learning (LLL) interventions, with particular reference to the White Paper 'Learning to Succeed' (DfEE, 1999). A major challenge for the evaluation of these initiatives is not only the choice of appropriate techniques, but also how separate project evaluations might be brought together into a comprehensive assessment of lifelong learning policy as a whole.

Key findings

- Evaluation of LLL must be seen as a planned activity, built into interventions from the outset and timed so that appropriate designs can be used and appropriate data analysis carried out. In particular, the target population should be clearly defined.
- Appropriate designs and controls should be employed. Impact can only be evaluated by making comparisons with those not receiving the intervention, and randomisation is a powerful way of eliminating some of the profound difficulties posed by self-selection (a simple illustration, using simulated data, is sketched after the 'Evaluation, modelling and monitoring' section below).
- Interventions aim to affect a range of outcomes at different levels, such as the individual, their family and the community. When this is the case, these levels must be sampled. The range of outcomes measured will often lead to the use of advanced statistical methods such as multilevel modelling. In assessing these outcomes, individual-level data are nearly always essential.
- Evaluation should be a public activity, and the results should be shared among all interested parties and the wider public. It is also a collaborative activity, both between evaluators and stakeholders and across evaluations of different but related interventions.
- Evaluations need to address costs and benefits, but also to recognise the limitations of cost-benefit analysis (CBA). Qualitative methods of evaluation may usefully contextualise results from quantitative methods such as CBA.

Background

This is the second publication in the series from the Centre for Research on the Wider Benefits of Learning, a DfES-funded research centre based jointly at the Institute of Education and Birkbeck College, London. The purpose of the Centre is to conduct research in the relatively new field of the non-economic benefits of learning, such as health, crime, social cohesion and life transitions. The evaluation framework therefore considers both the economic and the non-economic benefits of LLL, and the possibility of quantifying the wider benefits and attaching a monetary value to them is stressed throughout the report.

Evaluation, modelling and monitoring

It is important to distinguish evaluation from monitoring and modelling. Where provision is universal, as with the National Curriculum, monitoring of individual and institutional performance is generally the most suitable activity. For many LLL activities, where there may be self-selection, modelling is more appropriate. Evaluation is best suited to activities where exposure to the intervention is restricted or differentiated, for example initiatives introduced in experimental or pilot form such as the Education Maintenance Allowances. The report also refers to some techniques for monitoring and modelling in addition to evaluation.
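A minimal sketch of the kind of randomised comparison referred to in the key findings, applied to a hypothetical pilot of the sort described above. Assignment to the intervention is made at random, so the treated and control groups differ only by chance apart from the intervention itself, and impact is estimated as the difference in mean outcomes. The population, outcome measure and assumed effect size are all invented for illustration; only the Python standard library is used.

    # Illustrative only: estimating the impact of a piloted intervention by
    # comparing randomly assigned participants and controls. Data are simulated.
    import random
    import statistics

    random.seed(1)

    # Simulate a target population with a baseline outcome (e.g. a test score).
    population = [{"baseline": random.gauss(50, 10)} for _ in range(2000)]

    # Randomise half to the intervention; randomisation removes self-selection,
    # so the two groups are comparable apart from the intervention itself.
    random.shuffle(population)
    treated, control = population[:1000], population[1000:]

    # Simulate follow-up outcomes; the assumed true effect of +3 points is invented.
    for person in treated:
        person["outcome"] = person["baseline"] + 3 + random.gauss(0, 5)
    for person in control:
        person["outcome"] = person["baseline"] + random.gauss(0, 5)

    impact = (statistics.mean(p["outcome"] for p in treated)
              - statistics.mean(p["outcome"] for p in control))
    print(f"Estimated impact: {impact:.2f} points")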
Implementation, impact and learning outcomes

The report stresses the need for evaluations to cover both implementation and impact. In particular, evaluations need to refer to theories of change: whether an intervention leads to changes, how large the changes are, whether they are uniform across groups, and how and why the changes took place. The outcomes of LLL may be classified as intermediate (relating to institutional performance and targets) and final (relating to outcomes for learners, families, communities, organisations and the economy as a whole). The report discusses appropriate indicators and monitoring procedures for these outcomes in the current policy context. A possible scheme for considering how evaluations of proposed LLL initiatives may relate to each other and to policy targets is provided.

Evaluation techniques

A number of options for evaluating lifelong learning interventions are possible. For implementation evaluation, qualitative techniques, programme and systems theories and business process approaches are examined. These techniques are useful in explaining differences between implementation and impact. Statistical techniques for evaluating impact are discussed with reference to selecting controls, sampling and statistical modelling. The multilevel approach to modelling is discussed; it is particularly applicable when variability between families and communities in the outcomes of LLL is of interest. The use of CBA in estimating both the economic and non-economic benefits is also discussed. There has been considerable progress in recent years in using CBA to ascertain the value of the non-economic contributions of learning, and illustrations are provided in the areas of quantifying the benefits of family learning, crime reduction and health (a stylised worked example is sketched in the note at the end of this brief). The report shows how evaluation would relate to hypothetical policy initiatives analogous to those proposed in the White Paper.

Evaluating the Benefits of Lifelong Learning: A Framework
Ian Plewis and John Preston
July 2001
ISBN 0 85473 656 5
£9.95

Order from:
The Bookshop at the Institute of Education
20 Bedford Way, London WC1H 0AL
Phone: 020 7612 6050; Fax: 020 7612 6407
Email: bmbc@ioe.ac.uk
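Note (illustrative only). As a stylised sketch of how a CBA might set monetised benefits, including a monetary value attached to wider benefits such as improved health, against programme costs, the fragment below discounts a stream of hypothetical per-participant benefits and compares it with a hypothetical cost. The programme, all figures and the discount rate are invented for illustration and are not taken from the report.

    # Illustrative only: a stylised cost-benefit calculation for a hypothetical
    # family learning programme. All figures are invented for illustration.

    def present_value(flows, discount_rate):
        """Discount a list of annual benefit flows (years 1, 2, ...) to present value."""
        return sum(f / (1 + discount_rate) ** t for t, f in enumerate(flows, start=1))

    cost_per_participant = 1200.0                # hypothetical programme cost (GBP)
    annual_benefits = [300, 350, 350, 400, 400]  # hypothetical benefits over 5 years,
                                                 # e.g. earnings gains plus a monetised
                                                 # value for improved health
    pv_benefits = present_value(annual_benefits, discount_rate=0.035)
    net_benefit = pv_benefits - cost_per_participant
    print(f"PV of benefits: {pv_benefits:.0f}, net benefit: {net_benefit:.0f}, "
          f"benefit-cost ratio: {pv_benefits / cost_per_participant:.2f}")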