Science and innovation initiatives should be monitored and evaluated regularly to achieve accountability and performance objectives.
There are common principles that should be considered in the evaluation of a science or innovation initiative. This Best Practice Guide describes these principles. It is intended to provide guidance on the development, monitoring and conduct of a science or innovation evaluation rather than mandate agency evaluation methodologies and activities.
The principles can be applied to science and innovation initiatives at the program, organisation and system levels. Feedback through these processes is required to understand and improve the impact of science and innovation systems.
Innovation encompasses a wide range of activities and can be radical and disruptive or incremental. It is about making a change or doing something in a new way. Innovation is the key to improving the competitiveness and productivity of Australian industries, thus enhancing standards of living and social welfare in many different ways. A high capacity for innovation allows us to experiment and adapt to change, creating a more resilient economy and society. Innovation can build capability in all areas of an organisation, from market and financial performance to environmental, social and economic performance.
Individual innovations need not always be immediately successful to have impact. Trial and error – learning what not to do – is an important part of getting it right in the long run. Innovation is about market experimentation, the implication being that failure comes with the territory. Yet innovations can be so successful that, in creating new markets or revolutionising existing ones, they sweep away entire economic sectors or transform communities in their wake. This is what makes innovation so important to understand and to measure.
The science and innovation process is complex and involves risk. Innovation outcomes can be hard to measure, and the outcomes achieved often differ from those initially intended; even these shifts can provide valuable insight for future initiatives.
Therefore, undertaking evaluation of science and innovation initiatives is necessary in order to:
help understand whether intended objectives have been met and how to improve initiative performance;
assess if the initiative is integrated with the overall science, innovation and socio-economic objectives of the organisation; and
identify, calculate and communicate initiative costs and benefits.
It is recognised that traditional economic cost-benefit analysis on its own may not be an adequate evaluation methodology for all science and innovation initiatives due to the range of short- and long-term outcomes being sought – for example, knowledge transfer and growth, training, international cooperation, or better services.
The following criteria apply when undertaking an evaluation of a science and innovation initiative. Evidence must be used to demonstrate whether or not the initiative was the most appropriate, efficient and effective way to achieve the desired outcomes and objectives.
Effectiveness – examining the impact of the initiative, including what would have happened in its absence.
Evaluation of an initiative’s effectiveness should focus on:
o ensuring the initiative has clear objectives that are consistent with the initial policy intent;
o effectiveness in achieving these objectives in a way that represents value for money, acknowledging the level of risk and potential failure associated with innovation and science activities; and
o whether the initiative achieved stakeholder satisfaction.
Efficiency – examining whether the initiative was a cost-effective solution to the policy issue and has delivered benefits to society in excess of the costs.
Evaluation of an initiative’s efficiency should consider:
o whether operational activities are conducted efficiently;
o the effects of the initiative on stakeholders and/or on the market (i.e. costs and prices of goods and services); and
o whether cost recovery is appropriate.
The nature of science and innovation initiatives and the time lags between delivery of outputs and realisation of outcomes can make it challenging to fully explore the economic, social or environmental impacts of an initiative. Any underlying assumptions made in estimating long-term outcomes should be clearly articulated. This may include assumptions made to account for qualitative outcomes of science and innovation initiatives that cannot be quantified.
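To make such assumptions concrete, the sketch below shows a simple discounted cost-benefit calculation for a hypothetical initiative. All figures, the ten-year horizon and the discount rate are assumptions invented for the illustration; a real evaluation would need to source and justify each input.

```python
# Illustrative only: a discounted cost-benefit sketch for a hypothetical
# initiative. The cashflows and the 7% discount rate are assumptions made
# for this example, not recommended values.

def present_value(cashflows, rate):
    """Discount a list of annual cashflows (year 0 first) to present value."""
    return sum(cf / (1 + rate) ** year for year, cf in enumerate(cashflows))

# Assumed annual costs and benefits ($m) over a ten-year horizon. Benefits
# lag the costs, reflecting the delay between delivery of outputs and
# realisation of outcomes noted above.
costs = [10, 5, 5, 2, 2, 1, 1, 1, 1, 1]
benefits = [0, 0, 1, 3, 5, 8, 10, 12, 12, 12]

DISCOUNT_RATE = 0.07  # an assumption that should be clearly articulated

pv_costs = present_value(costs, DISCOUNT_RATE)
pv_benefits = present_value(benefits, DISCOUNT_RATE)

print(f"PV of costs:        ${pv_costs:.1f}m")
print(f"PV of benefits:     ${pv_benefits:.1f}m")
print(f"Net present value:  ${pv_benefits - pv_costs:.1f}m")
print(f"Benefit-cost ratio: {pv_benefits / pv_costs:.2f}")
```

Re-running the calculation with different discount rates or benefit streams is a simple way to test how sensitive the conclusion is to the stated assumptions.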
Appropriateness – examining an initiative’s appropriateness should consider:
o whether it is directed to areas where there is a role for the organisation to fill a gap in the market. A gap may be a social inequity, a market failure or another area of research that it is in the public interest to support;
o whether the program was undertaken by the appropriate jurisdiction and, where required, agencies/stakeholders are able to work together effectively to deliver the desired objectives; and
o whether the original justification for the initiative is still relevant.
Strategic Policy Alignment – evaluation of an initiative’s relevance should consider:
o whether it is consistent with the desired strategic long-term policy priorities, for example the national innovation or research priorities. While policy priorities will change over time, publicly funded initiatives or evaluations should address government priorities at the time the initiative was established; and
o the extent to which innovation and science initiative outcomes impact on policy enhancement and future policy development.
Performance Assessment – evaluation of the performance of science and innovation initiatives should consider:
o the extent to which the initiative has incorporated mechanisms for robust performance assessment and measurement of the science and innovation outcomes;
o whether key performance indicators adequately capture measures of research excellence;
o the extent to which research excellence drives policy enhancement and future policy development; and
o whether key performance indicators are comparable across science and innovation initiatives.
The following five principles guide evaluation practice for science and innovation initiatives. Using a set of common principles will help ensure the evaluation is robust and efficient and that its results are disseminated effectively.
Figure 1: Key Principles
The evaluation strategy should be created during the initiative design phase. The strategy should incorporate performance measures or key performance indicators as well as the data to be collected throughout the life of a program and, often, beyond.
Preparing the evaluation strategy at this early stage avoids difficult and costly data collection after the event. It is important not to impose onerous data collection requirements on program recipients or clients – be selective and, where possible, use innovative measures that incorporate new or improved approaches to evaluation.
Articulating a clear ‘program logic’ or ‘theory of change’ will make it much easier to plan an appropriate evaluation strategy and to identify supporting data requirements. Program logic explains the strategy/logic by which the program is expected to achieve its objectives.
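As an illustration of what a documented program logic can look like in practice, the sketch below records each link from objective through activity and output to outcome, together with the indicators that will evidence it. The structure and the example entry are assumptions for this sketch, not a prescribed schema.

```python
# A minimal, illustrative program-logic record. Field names and the example
# entry are invented for this sketch.
from dataclasses import dataclass, field

@dataclass
class LogicLink:
    objective: str   # the policy objective the link serves
    activity: str    # what the program does
    output: str      # what the activity directly produces
    outcome: str     # the change the output is expected to cause
    indicators: list = field(default_factory=list)  # data to be collected

program_logic = [
    LogicLink(
        objective="Increase industry-research collaboration",
        activity="Fund joint industry-university projects",
        output="Collaborative projects delivered",
        outcome="Partnerships sustained beyond the funding period",
        indicators=["partnerships still active two years after grant end"],
    ),
]

# Planning check: every link should name at least one indicator, so that
# data collection is designed in from the start rather than added later.
for link in program_logic:
    if not link.indicators:
        print(f"No indicator planned for outcome: {link.outcome!r}")
```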
It is also important to understand from the outset just how the evaluation will be used.
The evaluation process and results should facilitate learning and initiative improvement. This should be considered both at the program level and in terms of whether the data collected can be compared across initiatives. Evaluation should not be seen as something to be ‘added on’ towards the end of the program’s lifecycle.
Regular review and monitoring is essential to an integrated approach to program management.
This will be possible if the evaluation has been ‘built in’ to the design of the program.
Regular questioning throughout the life of the program is recommended, for example:
o What are the initiative strengths?
o What are the initiative weaknesses?
o Is it delivering the expected benefits?
o Is it still relevant? Has something in the environment changed?
o Should funding continue or is there a better alternative?
This review and monitoring should also have a future focus:
o Are there new opportunities that could be exploited to improve the program, and its impact on clients?
o Are there any threats to the continued existence and the success of the program?
o What are the long-term benefits?
Client feedback at various stages of implementation may also provide valuable information on program performance and/or hints for future directions.
Progress against identified key performance indicators (KPIs) and performance measures should be tracked. Ensure that performance measures are tied to program objectives and that KPIs do not merely report program activity.
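One lightweight way to apply this check, sketched below with invented KPI names and classifications, is to record for each indicator whether it measures progress against a stated objective or merely counts activity, and flag the latter for review.

```python
# Illustrative KPI review. The KPI names, classifications and objectives
# are invented for this sketch.
kpis = {
    "workshops delivered": {"kind": "activity", "objective": None},
    "grants awarded": {"kind": "activity", "objective": None},
    "new products commercialised": {
        "kind": "outcome", "objective": "lift industry innovation"},
    "citations of funded research": {
        "kind": "outcome", "objective": "research excellence"},
}

for name, kpi in kpis.items():
    if kpi["kind"] == "activity" or kpi["objective"] is None:
        print(f"Review KPI {name!r}: it reports activity rather than "
              "progress against a program objective")
```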
For science and innovation initiatives, review and monitoring should also consider additionality: whether additional desired outcomes are achieved over and above those that would have been achieved without government intervention.
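Additionality is, in essence, a counterfactual comparison. The sketch below, using invented figures, estimates it as the difference between the average outcome for supported participants and that of a comparison group standing in for what would have happened without the intervention; constructing a credible comparison group is the hard part in practice.

```python
# Illustrative additionality estimate. The outcome values are invented;
# a real evaluation would draw them from program data and a carefully
# matched comparison group of non-recipients.

def mean(values):
    return sum(values) / len(values)

supported = [0.12, 0.08, 0.15, 0.10, 0.11]   # e.g. R&D intensity of recipients
comparison = [0.07, 0.06, 0.09, 0.08, 0.07]  # matched non-recipients

additionality = mean(supported) - mean(comparison)
print(f"Estimated additional outcome per participant: {additionality:.3f}")
```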
The evaluation process should be objective and transparent. It is desirable, but not always possible or practical, to have the assistance of an independent body to conduct major evaluations.
Participatory, self or peer evaluation may be appropriate. Consider involving outsiders in the evaluation to gain an alternative view, or use an independent reference group to oversee the evaluation.
For government initiatives, where appropriate, involve central agencies in the process from the beginning, especially when evaluating larger or high profile programs.
Regardless of the options chosen, it is crucial to ensure that governance arrangements are sound. Legislation may also have an impact on governance arrangements and evaluation requirements.
Where the delivery agency plays a role in the evaluation process, all parties should be aware of the issues of conflict of interest that can arise.
Where possible, the delivery agency should not change a final evaluation report, except to correct errors of fact.
There is no ‘one best way’ to conduct an evaluation. No single model will suit all programs. There are a number of approaches that may be used.
The methodology used depends on the objectives and nature of the program being evaluated. It is important to have a clear understanding from the outset of the program’s objectives, including any societal and environmental objectives. It is also important to be clear about what issues the evaluation is intended to inform, and how the evaluation report and results will be used.
The long-term, risky nature of science and innovation projects must be considered in developing and applying a methodology. This could include an evaluation of such things as long-term spillovers, additionality and behavioural changes. Often these benefits or adverse impacts cannot be fully accounted for during the period of the evaluation but will be realised at a future time.
Irrespective of methodology, the evaluation needs to be systematic, robust and evidence-based. This will ensure that data collected at an individual initiative level can be compared across programs.
The needs of different stakeholders (e.g. program clients, program implementers and decision makers) should be considered in developing a communication strategy to support the evaluation strategy.
The communication strategy should be developed in conjunction with the evaluation strategy. This will help facilitate stakeholder engagement with the evaluation process and ensure results and lessons learnt are disseminated and considered in future design.
It is important for departments and agencies to disseminate lessons learnt from evaluations. In particular, ‘things that did not work’ are more likely to be repeated if poor results are delayed, not released, or sanitised. Ways to share learnings should be included in the communication strategy.
Evaluators must be prepared to report to a decision-maker in a way that informs the context of their decision. The program review or evaluation should lead to improved outcomes in the future.
Communication of the evaluation results should encompass the scientific value of the work. For example, while a research program may not have led to any specific breakthroughs, it may have allowed future work to be better focused and extended the depth of knowledge in an area.
The program evaluation should be in a format that is easy for the intended recipients to understand and use.
The Coordination Committee on Innovation Secretariat has compiled a number of best practice evaluation references to provide further guidance and advice on the evaluation principles.
These resources are available through the Evaluation Working Group. For access, email secretariat.innovationsystems@innovation.gov.au.