Post-print version. Accepted by the Journal of Sociology on 20 August 2013.
Note: this is a copy of the final draft version submitted on 8 June 2013 after peer review.
Journal of Sociology, published online 25 November 2013.
http://jos.sagepub.com/content/early/2013/11/25/1440783313505008
DOI: 10.1177/1440783313505008
Author details:
Corresponding author: Adrian Cherney
*Dr Adrian Cherney
School of Social Science
The University of Queensland
Brisbane, St Lucia 4072
ph + 61 7 3365 3236
fax + 61 7 3365 1544
email: a.cherney@uq.edu.au
Professor Brian Head
Dr Jenny Povey
Professor Paul Boreham
Michele Ferguson
Institute for Social Science Research
The University of Queensland
Brisbane, St Lucia.
Acknowledgements: This project is supported through ARC Linkage project: LP100100380.
The utilisation of social science research - the perspectives of academic
researchers in Australia
Abstract
This article reports results from a survey of academic social scientists in Australian
universities on reported levels of research utilisation by non-academic users. Using the scale
of research utilisation, we examine levels of research impact and explore a range of variables
to understand the factors influencing the use of academic social science research by policy-makers and practitioners. The results indicate that research uptake is enhanced through
mechanisms that improve research transfer and the intensity of interactions between
academic researchers and users. Our findings provide insights into how the impact of social
science research can be improved and draw attention to factors that need to be considered in
efforts to evaluate and enhance the impact of academic social research.
Keywords
applied sociology, research impact, research utilisation, social science research
Introduction
There has been increasing pressure on universities to demonstrate the impact of the research
they produce. This has involved demonstrating impact through quality measures such as
citation counts and journal impact factors. National examples of these assessment schemes
include the Research Excellence Framework (REF) in the UK, the Excellence in Research for
Australia (ERA) initiative, the Performance Based Research Fund in New Zealand and other similar
initiatives operating in parts of Europe and the US (Geuna and Martin, 2003; Lewis and Ross,
2011). Recently, university research assessment exercises such as the REF have expanded to
encompass broader measures of research impact, including the economic, social, cultural and
environmental benefits arising from university research.
When it comes to the social sciences, the issue of measuring research impact has been
regarded as especially challenging, with many analysts claiming that such assessment needs
to involve both quantitative and qualitative indicators (Donovan, 2011; Smith et al., 2011).
Pinpointing the policy and practice impacts of academic social research is difficult, because
the uptake of social research knowledge rarely follows a linear path from academic
knowledge producers to end-users[1] in the fields of policy and practice (Molas-Gallart et al.,
2000; Weiss, 1980). Greater collaboration between academic research-producers and the
users or consumers of research is increasingly seen as a way of improving this knowledge
flow, but even in such collaborative contexts improved research uptake is not guaranteed.
The conclusion from most studies is that closer collaboration is insufficient to ensure that
academic social research has a discernible impact on policy or practice, and that a range of
variables influence the uptake and use of academic social research by non-academic end-users
(Belkhodja et al., 2007; Bogenschneider and Corbett, 2010; Cherney and McGee, 2011;
Huberman, 1990; Landry et al., 2001a; 2001b; Oh and Rich, 1996; Weiss and Bucuvalas,
1980).

[1] Throughout this paper we use the term 'end-user' in a generic sense to refer to non-academic audiences, e.g. policy personnel, or practitioners/managers within the public, private or community sectors.
The consideration of social research impact, and the benefits of closer synergies
between academic knowledge producers and end-users in generating research uptake, raises
important issues concerning the value and orientation of social research and its contribution
to evidence-based policy (Head, 2010). If one takes the position that social science research
offers tangible social benefits (beyond the pursuit of knowledge as a value in its own right),
identifying specific factors that influence research utilisation becomes a key concern. In this
paper we examine these issues from the perspective of social science knowledge producers –
academic researchers in the social science disciplines. While such a study has some
limitations (which will be addressed in the discussion), a full understanding of the process of
knowledge transfer and uptake requires an examination of the experiences and perspectives
of both knowledge producers and users. Using survey data from a study of university-based
social science researchers in Australia who have engaged in research collaborations, the
paper principally aims to examine factors that influence perceptions of the policy uptake of
social research. We use a similar approach to that adopted in the study by Landry, Amara and
Lamari (2001a; 2001b) and draw on the Knott and Wildavsky (1980) research-use scale to
examine factors that appear to influence reported levels of research impact. We do not
examine knowledge utilisation rates between specific social science disciplines (see XXX
for a comparison across sociology, education, psychology, economics and political science –
author details removed). While we recognise that different scientific disciplines may
generate diverse forms of utilisation, given that the various contexts in which knowledge is
produced and processed vary across research domains, our aim here is to generate broader
generalisations about knowledge utilisation in the social sciences. Hence, we have examined
research utilisation across the social sciences as a grouping. We have done this because there
is a dearth of empirical studies on research utilisation that move beyond specific qualitative
case studies, noting that such studies potentially lack external validity or statistical reliability
(Landry et al., 2001a; 2001b; Rich and Oh, 2000).
The paper is organised as follows. Firstly, the explanatory model underpinning this
study (i.e. the ladder of utilisation) will be discussed. Secondly, the paper outlines the data
collection methods used for a recent survey administered to Australian academics as part of
our current study. Thirdly, the various dependent and independent measures are described
and we outline why – contrary to other studies - we have adopted an index to measure
research use for our dependent variable. Fourthly, key results from our sample of social
researchers are provided, focusing on reported levels of research utilisation and variables that
appear to influence knowledge transfer and application. Finally, the paper discusses the
results and the limitations of the study and concludes with observations about the assessment
of knowledge utilisation and translation in the social sciences.
Literature review
Measuring Research Utilisation
There is no accepted model for measuring research utilisation (Lester, 1993; Oh and Rich,
1996; Smith et al., 2011). A common distinction made in the literature is to differentiate
between conceptual, instrumental, symbolic and political use of research (Nutley et al.,
2007). Conceptual impact refers to research being used to change understanding about an
issue or choice, instrumental use constitutes direct impact on policy or practice, symbolic use
of research refers to the tactic of policy-makers using research to delay action, while political
use is regarded as more strategic in nature, such as when research is adopted to legitimise or
justify a pre-existing position (Monaghan, 2011; Nutley et al., 2007; Weiss, 1979). While
these dimensions provide insight into the various ways in which social research can be
adopted or can have an impact on policy decision-making, they do not adequately capture the
types of activities that characterise different stages of knowledge utilisation – that is, they
overlook the breadth of social research usages, ranging from practices that encompass the
transmission of ideas through to actual applications in practice (Cherney and McGee, 2011;
Henry and Mark, 2003). This is where more detailed scales of research use (or staged
models) have become helpful in understanding that utilisation is related to various decision-making processes, particularly concerning the actions of the producers and consumers of
social research.
In this study we replicated a modified version of the Knott and Wildavsky (1980)
research-use (RU) scale, similar to that adopted in Landry, Amara and Lamari (2001a;
2001b). There are a number of knowledge utilisation scales available in the literature (e.g.
Estabrooks, 1999; Larsen, 1982). However, we have adopted a similar scale to that of Knott
and Wildavsky (1980) because it has been frequently cited in the literature, has been used to
measure research use among government officials and academics, and has been shown to be
reliable (Cherney and McGee, 2011; de Goede et al., 2011; Landry et al., 2001a; 2001b;
Lester 1993). Conceptually, the research use scale can be referred to as a “ladder of
utilisation” and Table 1 provides the descriptions for each stage of research use (or rung of
the ladder), as presented in our questionnaire to Australian social scientists. The benefit of
this scale is that it operationalises research use as a cumulative process that may progress
through a number of stages: transmission, cognition, reference, effort, influence and
application. The scale is cumulative in the sense that cognition builds on transmission,
reference builds on cognition, effort on reference, influence on effort, and application on
influence. The RU scale has been criticised as perpetuating a linear understanding of research
utilisation because it implies that each stage, or “rung of the ladder”, must be sequentially
navigated to generate increasing forms of research uptake. Scholars have argued that this
ignores the non-linear, or indirect, pathways through which research influences policy
decisions (e.g., see Davies and Nutley, 2008). However, the RU scale does recognise the fact
that the research utilisation process varies between a range of activities spanning knowledge
transfer, translation and uptake. The scale also provides opportunities to measure the
significance of factors that distinguish different levels of research utilisation (Cherney and
McGee, 2011; Knott and Wildavsky, 1980; Lester, 1993).
<Insert Table 1>
Independent Variables Influencing Research Use
Just as there is no agreed conceptual model relating to research utilisation, there is no
definitive list of variables developed to help predict knowledge use (Lester, 1993). Most
studies have categorised variables under broad headings relating to supply-side and demand-pull factors, as well as dissemination and interaction variables. Supply-side factors include
research outputs and the context in which the researcher works. These can include the types
of research outputs produced by academics (e.g. qualitative or quantitative studies), whether
research is focused on non-academic users, the importance of internal or external funding
sources, and the institutional drivers that influence the initiation of collaborations with
external partners and end-users (Bogenschneider and Corbett, 2010; XXX author details
removed). Barriers that researchers encounter within their own institutional settings such as
academic reward systems (e.g. incentives to publish in A* journals) also influence the
production and supply of research to external agencies (Jacobson et al., 2004). Demand-pull
factors relate to the end-user context, including whether end-users consider research to be
pertinent, whether research coincides with end-users’ needs, the priority users place on the
quality of the research, and the feasibility of adopting research recommendations. Added to
this are organisational processes such as the value policy-makers and practitioners place on
research evidence and their level of skills to interpret and apply research knowledge. Such
factors influence the demand for academic research within end-user organisations (Belkhodja
et al., 2007; Ouimet et al., 2009). Dissemination variables are concerned with efforts to adapt
and tailor research products (e.g. reports) for users and to develop strategies focused on communicating research results (Huberman, 1990). The more researchers invest in adaptation
and dissemination, the more likely research-based knowledge will be adopted. Adaptation
includes efforts to make reports more readable and easier to understand, efforts to make
conclusions and recommendations more specific or more operational, efforts to focus on
variables amenable to interventions by users, and efforts to make reports appealing (Cherney
and McGee, 2011). Dissemination efforts include strategies aimed at communicating research
to targeted end-users, such as when researchers use different social media to communicate
their research messages, hold meetings to discuss the scope and results of their projects with
specific users or partners, and target particular forums, e.g. reporting on their research to
government committees. Finally, interaction variables focus on the intensity of the
relationship between knowledge producers and potential users or beneficiaries of research.
These types of factors include informal personal contacts and networks between researchers
and potential end-users, participation in committees, or experience with research
partnerships, such as the number of research partnerships in which an academic has engaged.
The argument is that the more intense these linkages are, the more likely it is that research uptake will
occur (Huberman, 1990; Lomas, 2000).
The Current Study
The data presented here are drawn from an Australian Research Council funded project with
nine industry partners.[2] The project involved four phases: (1) a targeted survey of Australian
social scientists; (2) a targeted survey of policy personnel; (3) interviews with a selection of
academic respondents; and (4) interviews with a selection of policy personnel. Results
reported here are based on phase 1 data. The survey administered to academic social
scientists was partially based on existing questions and scales (e.g. Bogenschneider and
Corbett, 2010; Landry et al., 2001a; 2001b). New questions were also developed to capture
additional data relating to the benefits and problems of engaging in research collaborations.
The survey was first piloted among Fellows of the Academy of the Social Sciences in
Australia (ASSA) in September-October 2010. It is estimated that nearly 500 members were
sent the survey and 81 surveys were completed, with a response rate of about 17 percent.
There were no significant changes to the survey following the pilot other than minor editing
of some lead-in questions to make them clearer. No scales were changed. We have reported
the combined results from the same questions used in both the pilot and main survey. A
database was established of Australian academics who had secured at least one Australian
Research Council (ARC) grant (what are termed Discovery or Linkage grants[3]) between 2001
and 2010 within the field of social and behavioural science. The selection of relevant
disciplines was based upon the 'field of research' codes used by the ARC to categorise the
funded projects, and comprised codes relating to anthropology, criminology and law
enforcement, human geography, political science, policy and administration, demography,
social work, sociology, other studies in human society, psychology, education and
economics. Using this database, a web link to the survey was sent via email to 1,950
academic researchers between November 2010 and February 2011. The same reminder email
was sent twice during this period and the survey closed in May 2011. A total of 612
completed surveys were received, which constitutes a response rate of 32 percent. When the
main academic survey was combined with the ASSA pilot, the final total included 693
responses. The response rate achieved is indicative of the difficulty of encouraging time-poor
academics to participate in projects where they themselves are the subjects of the research. It
may also be noted that web-based surveys often suffer from low response rates (Sue, 2007).
The reason for targeting academics who had secured research grants was to ensure the
project captured experienced academics who were likely to have had a history of research
collaborations, since one aim was to understand the impact and dynamics of such
partnerships. Studies have also shown that seniority and the number of external competitive
research grants are key factors influencing higher levels of engagement with non-academic
end-users and increased levels of research impact (Cherney and McGee, 2011; Landry et al.,
2001a; 2001b).

[2] These partners provided in-kind and cash support for this project. They include the Productivity Commission; Australian Bureau of Statistics; Queensland Health; Queensland Communities; Queensland Department of Employment; Queensland Department of Premier and Cabinet; Victorian Department of Planning and Community Development; Victorian Department of Education & Early Childhood; and the Victorian Department of Human Services.

[3] Australian Research Council (ARC) grants are national competitive grants and fund a significant proportion of research activity in Australian universities. Discovery grants fund fundamental research that may not have an immediate applied focus but is assumed to have some broader community benefit. Linkage grants fund research collaborations between academic chief investigators and industry partners (including government agencies). Industry partners are required to make a cash and in-kind contribution to the project (see http://www.arc.gov.au/ncgp/default.htm). These ARC grant programs place significant emphasis on the track record of applicants, with 40% of ARC Discovery assessment based on track record.
Participants’ background
Most respondents to the survey occupied senior academic positions, with the data
skewed towards academics at Level D and above, i.e. Associate Professors/Readers and
Professors. As Figure 1 indicates, over 40 percent of the sample was at the level of Professor,
followed by Levels D and C. The dominance of senior academics results from the recruitment
strategy used to generate the sample, which targeted academics with an established
profile (e.g. ASSA Fellows) and a history of securing national competitive grants (i.e. ARC grants), the award
of which tends to be strongly influenced by track record. This raises the
issue of whether our sample is biased and whether this affects reported levels of research use, which
we address in the data analysis through the inclusion of particular control variables in our model.
<Insert Figure 1>
Respondents were mainly drawn from academics who occupied teaching-and-research
positions rather than research-only roles (65 percent versus 35 percent).
Respondents were asked their disciplinary background, and these responses are outlined in
Figure 2.
<Insert Figure 2>
Dependent variable
Knowledge utilisation was measured using a validated version of the Knott and Wildavsky
(1980) research use scale, similar to that adopted in the study by Landry et al. (2001a;
2001b). As indicated previously, the scale is based on six stages, namely: transmission,
cognition, reference, effort, influence and application. For each of these six echelons (or
stages) of the research utilisation ladder, respondents were asked to estimate how often their
research had been utilised in that way, using a 5-point scale ranging from 1 (never), 2 (rarely),
3 (sometimes), 4 (usually) to 5 (always).
Previous researchers (Cherney and McGee, 2011; Landry et al., 2001a; 2001b) have
used this scale cumulatively (with each stage building upon the previous one) and assigned a value of
1 when respondents replied always, usually or sometimes, with all other responses
assigned the value of 0, meaning respondents 'failed' to move up the scale. There are two ways
that this cumulative approach can be analysed. The first is to run a separate logistic regression
for each stage of research utilisation as Landry et al. (2001b) did in their study. Hence
respondents who pass all six stages would be represented in each stage or regression model
(see Figure 3). This is particularly problematic with our sample, because a large proportion
(66 percent) of the sample reported they passed all six stages. Hence the question arises
whether such a method would really be determining what predicts movement from one stage
to the next, or whether progression across each stage is masked by the dominant group. In
order to address this criticism, a second approach would be to create an ordinal variable with
seven levels, including in each level only those individuals who ‘passed’ that level. Thus
respondents in each level would be unique. Table 2 presents the number of respondents
categorised in each level/echelon according to such progression criteria. For instance, four
percent of the sample passed the transmission stage but did not progress further. However, an
ordinal logistic regression analysis is not possible due to our small sample size and the
number of cases in each level.
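As a hedged illustration of this second, strict-progression coding (not the authors' own code), the sketch below shows how the seven-level variable reported in Table 2 could be derived; the data file and column names (transmission, cognition, reference, effort, influence, application) are hypothetical stand-ins for the survey items.

```python
# Minimal sketch (hypothetical file and column names): deriving the seven-level
# progression variable, where a respondent's level is the number of consecutive
# rungs 'passed' before the first failure (0 = no echelon, 6 = application).
import pandas as pd

STAGES = ["transmission", "cognition", "reference",
          "effort", "influence", "application"]

def progression_level(row: pd.Series) -> int:
    level = 0
    for stage in STAGES:
        if row[stage] >= 3:   # pass = answered sometimes, usually or always
            level += 1
        else:
            break             # strict sequential criterion: stop at first failure
    return level

df = pd.read_csv("academic_survey.csv")   # hypothetical extract, items scored 1-5
df["ru_level"] = df[STAGES].apply(progression_level, axis=1)

# Counts of respondents at each level, corresponding to Table 2
print(df["ru_level"].value_counts().sort_index())
```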
The next possible option would be to examine whether these stages are in fact
exclusive. Does a negative experience (‘failure’) in one stage preclude academic researchers
from progressing to other stages? Or should these stages comprise an index? Descriptive
statistics, as presented in Figure 4, illustrate that failure in one stage does not preclude
academic researchers from passing subsequent stages. This is an important consideration
owing to the criticism that the research-use scale perpetuates a linear conceptualisation of
research utilisation; indeed, the data in Figure 4 indicate that one does not necessarily have
to traverse each rung of the research utilisation ladder in sequence to reach the ultimate stage,
i.e. 'application' of research findings by users – hence the research-use ladder is not
necessarily cumulative. A factor analysis of the items (or stages) revealed a one-factor solution
and a Cronbach's alpha coefficient of 0.91 (see Table 3). These results indicate that the items
measure one construct and that the index appears to be reliable. Unlike the previous
studies outlined above, in this study we decided to use the items as an index to measure
research use, given the results of the factor analysis and the criticism made of the RU scale
concerning its assumptions about the process of knowledge diffusion. A
mean index score was calculated across all six stages; the mean score for the research utilisation
index is presented in Table 4.
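To make the construction of the dependent variable concrete, the following sketch reproduces the kind of reliability check and mean index score described above; it is illustrative only, uses the same hypothetical file and column names as the previous sketch, and applies the standard Cronbach's alpha formula rather than any code from the study.

```python
# Minimal sketch (hypothetical file and column names): internal reliability of the
# six ladder items and the mean research utilisation (RU) index used as the
# dependent variable.
import pandas as pd

STAGES = ["transmission", "cognition", "reference",
          "effort", "influence", "application"]

def cronbach_alpha(items: pd.DataFrame) -> float:
    # alpha = k/(k-1) * (1 - sum of item variances / variance of the total score)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

df = pd.read_csv("academic_survey.csv")        # hypothetical extract, items scored 1-5
ru_items = df[STAGES]

alpha = cronbach_alpha(ru_items)               # reported as 0.91 in Table 3
df["ru_index"] = ru_items.mean(axis=1)         # mean index score across the six stages

print(f"Cronbach's alpha: {alpha:.2f}")
print(f"Mean RU index: {df['ru_index'].mean():.2f}")
```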
<Insert Figure 3>
<Insert Table 2>
<Insert Figure 4>
Independent variables
A number of indices were created and included in our model as independent variables. The
items used in each index were determined by factor analyses, with each index comprising a
one-factor solution. The Cronbach's alpha coefficients for these independent variables are
presented in Table 3, and detailed descriptions of index compositions can be found in XXX
2012a and XXX 2012b (author details removed).
<Insert Table 3>
Descriptive statistics for each independent variable are presented in Table 4. Academic
researchers indicated that academic funding (i.e. national competitive grants such as
Australian Research Council grants, and internal university funds) was more important than
funding from government and non-government agencies in ensuring their research is
conducted. Academic researchers indicated that the 'useability' of the research is given a
higher priority by end-users than other features such as the quality or feasibility of
research. A very high level of importance is accorded by academic researchers to
refereed publications as a method of disseminating their research, followed by the
importance of tailoring research to meet the needs of end-users. Table 4 also illustrates a
high level of agreement among academic researchers that they encounter
barriers in the transfer and uptake of their research. The average number of research partners
with whom researchers engaged was approximately nine per researcher. The number of grants
received varied between 0 and 67, with the average researcher having received approximately
nine grants.
Control variables
A number of control variables were included in the model to control for the personal
characteristics of the respondents, namely disciplinary background, home institution, academic
level and position type. There were five main disciplines, as depicted in Figure 2, and the remaining
disciplines were categorised as other. University affiliation comprised researchers affiliated
with Go8 universities and those who were not (coded as 1 and 0 respectively). As presented
in Table 4, the sample was equally represented in both groups. Academic level
comprised two groups, Levels D and E academics and Levels A to C academics (coded as 1
and 0 respectively). Position type, as mentioned above, distinguished teaching-and-research
from research-only positions.
<Insert Table 4>
Data analysis
Given that our dependent variable is approximately continuous, a multiple linear regression
model was used to estimate the associations between research utilisation (our dependent
variable) and a number of explanatory variables such as benefits and barriers associated with
engaging in research with policy-makers and practitioners. As a preliminary check, we
examined the correlations between all variables in the model. They ranged between .001 and
.74, suggesting that multicollinearity was unlikely to be a problem (the correlation matrix is
too large to reproduce in an appendix). This was confirmed by a relatively low value of the mean
Variance Inflation Factor (VIF) of 1.72, with the individual variables’ VIFs ranging from
1.15 to 3.32. The four highest correlations were between importance of meetings and
dissemination activities with end-users and importance of tailoring research when end-users
are the focus (0.74); problems relating to the orientation of research partnerships and
‘consequences’ of investing in research partnerships (0.73); importance of using contacts,
seminars and reports to present research to policy-makers and practitioners and importance of
meetings and dissemination activities with end-users (0.71); and importance of using
contacts, seminars and reports to present research to policy-makers and importance of
tailoring research when end-users are the focus (0.68). All four correlations were statistically
significant.
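For readers who want to see the shape of this analysis, the sketch below illustrates the VIF check and the ordinary least squares regression described above using statsmodels; the variable names are hypothetical and only a subset of the predictors in Table 5 is shown, so it is an illustration of the approach rather than a reproduction of the reported model.

```python
# Minimal sketch (hypothetical variable names, subset of predictors): mean VIF check
# followed by a multiple linear regression of the RU index on the explanatory variables.
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

df = pd.read_csv("analysis_file.csv")      # hypothetical analysis extract

predictors = ["targeted_to_user", "academic_funding", "other_funding",
              "benefits_collaboration", "tailoring", "meetings_dissemination",
              "media", "contacts_policy", "n_grants"]    # illustrative subset only

X = sm.add_constant(df[predictors])
y = df["ru_index"]

# Variance Inflation Factors for the predictors (mean VIF of 1.72 reported in the paper)
vifs = [variance_inflation_factor(X.values, i) for i in range(1, X.shape[1])]
print("mean VIF:", round(sum(vifs) / len(vifs), 2))

# OLS estimates with standard errors, analogous to Table 5
model = sm.OLS(y, X, missing="drop").fit()
print(model.summary())
```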
Regression Results
The regression results are presented in Table 5. The results indicate that thirteen variables
were significantly related to the reported utilisation of academic research. The more the
research is targeted to end-users, the more likely academic researchers report research
utilisation. The more beneficial the collaboration to academic researchers, the more likely
they report utilisation. Academic researchers reported that, when end-users regarded research
as being of high quality and useable, it was more likely to be utilised. However, the more
policy-makers or practitioners prioritised the 'feasibility' of research, the less likely
respondents were to perceive that end-users would use academic research.
The importance of tailoring research for end-users is positively and significantly associated
with reported levels of research use. The importance of meetings and dissemination activities
with end-users, and of using the media to disseminate findings, is associated with reported levels of
research uptake. Using interactions such as contacts, seminars and reports to present research
to end-users is also associated with reported levels of research impact. As the number of
grants increases so does the likelihood of research utilisation. In addition, it appears that the
adoption of quantitative approaches increases the likelihood of research uptake for those in
our sample. However, despite our sample indicating that academic funding was important,
there was a negative relationship found between types of funding sources and reported
research utilisation by end-users. Even with the inclusion of our control variables (position
type and level, university affiliation and disciplinary background) the model remained stable,
strengthening the overall generalisability of our results.
<Insert Table 5>
Discussion
We recognise there are limitations in asking academics to comment on the impact of the
research they produce and levels of knowledge utilisation. It can be argued that respondents
were required to make retrospective judgements of processes they may not have directly
observed. Such self-report data can also reflect social desirability biases in that academics
may tend to inflate the relevance of their research. There is considerable merit in the
argument that tracing the impact of research evidence needs to be complemented by more
detailed qualitative case study work. The survey data only gives insight into broad patterns of
knowledge utilisation as reported by our academic sample – hence providing one possible
perspective. We propose, however, that while our sample is biased towards senior academics,
the overall seniority and experience of our survey respondents place them in a strong position
to make judgments about research utilisation by end-users, compared to academics with less
experience of working with external partners.
What do our results indicate about the transmission and uptake of research-based
knowledge in the social sciences? From our data analysis it is clear that active efforts by
academic social researchers to “push out” or disseminate research products to potential
beneficiaries are central to promoting utilisation by non-academic end-users. Academic
researchers in our sample were mindful that in order for their research to have an impact on
policy or practice, there is a requirement to directly engage with users through meetings and
dissemination processes, and that it is essential to tailor research projects and findings to their
needs. While “producer push” models of research translation have often been criticised as
overly rational and linear (Nutley et al., 2007), our results demonstrate that academics who
wish to see their research utilised by policy-makers and practitioners have to commit to
strategies that help diffuse their research to end-users. To do so academics need to find
supporters within end-user organisations who are willing to help push research up to senior
levels and act as change agents to help promote the use of academic research by relevant
personnel (XXXX forthcoming - author details removed). According to our sample,
academics need to step outside their comfort zone of traditional forms of research
dissemination and begin engaging in alternative forms of dissemination. Hence, it is not
surprising we found a relationship between reported levels of research utilisation and the
importance accorded by our respondents to the media for presenting their research (see Table
5).
Despite accusations that academic social researchers do not understand the needs of
policy-makers or practitioners, our respondents were aware that such users do have different
priorities and perceptions when it comes to judging the relevance and use of academic social
research. This awareness may be linked to the fact that our respondents were relatively
experienced in engaging with multiple external research partners, making them mindful of
these contextual issues. Likewise, as has occurred in many countries, academic researchers in
Australian universities are under increasing pressure to demonstrate the value and impact of
the research they produce and are therefore alert to issues relating to research impact. This
awareness was potentially increasing at the time when the survey was completed, because the
first round of the Australian university research assessment exercise, the ERA, had been
conducted in 2010, and Australian universities were preparing for another round in 2012.
University research assessment exercises such as the ERA have generally focused on quality
indicators such as journal rankings, journal impact factors, and citation counts. However,
these indicators can deter academics from investing in alternative outlets for practitioner
audiences beyond the standard peer-reviewed academic journals (XXXX forthcoming).
There are a number of noteworthy results that arose from the regression analysis. One
was that the quality of research was still seen as important in influencing research utilisation
among end-users (i.e. policy-makers or practitioners), but there was recognition that quality is
not the only factor, with judgements about ‘feasibility’ also seen as influencing (in a negative
way) levels of research use. The 'feasibility' variable comprised three items: research
recommendations are seen as economically feasible, research recommendations are seen as
politically feasible, and research findings support a current position or practice (see XXXX
2012b). When feasibility is seen as a strong priority, our respondents reported that academic
research is less likely to be used by policy-makers. Another result was the relationship found
between methodological approaches and levels of research utilisation. While not strong, there
was a positive relationship between whether respondents reported using quantitative
approaches and subsequent utilisation.
Landry, Amara and Lamari (2001a; 2001b) found a similar result in relation to particular
social science disciplines, as did Cherney and McGee (2011) and XXXX (forthcoming –
author details removed). It is possible that the perceived effect of quantitative methods in
influencing levels of utilisation may reflect the preferences and bias of users for particular
types of research. The perceived neutrality of quantitative research vis-a-vis the interpretative
and contestable nature of qualitative research may be attractive to policy-makers and
practitioners seeking digestible summaries of research outcomes, compared to the nuanced
and detailed nature of results typically associated with qualitative methodologies. This
association does require further investigation of the preferences of policy-makers and
practitioners for certain types of research methods and products.
The results in Table 5 showed a negative relationship between the importance attached to
particular sources of research funding and the likelihood of research use. This negative
relationship was more significant for academic funding sources (e.g. internal university funds
and research council grants) than for other types of funding from government and private
sector agencies. This supports a similar finding by Landry et al. (2001b), and perhaps reflects
relatively low expectations regarding the utilisation of research funded through sources that
mainly support academically oriented projects in the social sciences. However, with the
increasing pressure on University research funding councils such as the ARC to demonstrate
their funded projects are utilised beyond the boundaries of academia, the influence of funding
source on research uptake is a topic that requires further investigation. It may be the case that
the different types of contractual relationships that underpin particular funding sources
potentially influence forms of research translation and uptake.
Conclusion
There is little doubt that efforts to measure the impact of university-funded research have
made academics more mindful of how they translate and transfer their research to users.
Efforts by academic researchers to engage with potential users of research clearly help to
generate improved levels of uptake. While our results largely pertain to academics in the
social sciences, there is little doubt they provide broader lessons for other research fields,
particularly in understanding how the contexts and activities of academic researchers, and
their relations with research users in fields of policy and practice, shape levels of knowledge
translation and uptake. It must be recognised, though, that the decisions and choices of end-users to draw on research-based knowledge are largely beyond the control of academic
researchers. However, given the overwhelming finding that dissemination and forms of
interactions between academic researchers and end-users have a major bearing on research
utilisation, the development of improved incentives to invest in these types of activities
would clearly produce benefits in enhancing knowledge transfer and uptake. While this study
only provides insights into the perspectives of academic social researchers in Australia, it
does highlight more broadly how the impact of social science research can be improved.
References
Belkhodja, O., Amara, N., Landry, R. and Ouimet, M. (2007) ‘The extent and organisational
determinants of research utilisation in Canadian health services organisations’,
Science Communication 28(3): 377-417.
Bogenschneider, K. and Corbett, T. J. (2010) Evidence-Based Policy Making: Insights from
Policy Minded Researchers and Research-Minded Policymakers. New York:
Routledge.
XXXX forthcoming
Cherney, A. and McGee, T.R. (2011) ‘Utilisation of social science research: Results of a pilot
study among Australian sociologists and criminologists’, Journal of Sociology 47(2):
144-162.
XXXX 2012a
XXXX 2012b
XXXX forthcoming
Davies, H. and Nutley, S. (2008) Learning More about How Research-Based Knowledge
Gets Used: Guidance in the Development of New Empirical Research. Working
Paper. New York: W.T. Grant Foundation.
de Goede, J., Steenkamer, B., Treurniet, H., Putters, K. and van Oers, H. (2011) ‘Public
health knowledge utilisation by policy actors: An evaluation study in Midden-Holland, the Netherlands', Evidence and Policy 7(1): 7-24.
Donovan, C. (2011) ‘State of the art in assessing research impact: introduction to a special
issue’, Research Evaluation 20(3): 175-179.
Estabrooks, C.A. (1999) ‘The conceptual structure of research utilisation’, Research in
Nursing and Health 22: 203-216.
Geuna, A. and Martin, B.R. (2003) 'University Research Evaluation and Funding: An
International Comparison', Minerva 41(4): 277-304.
Head, B.W. (2010) ‘Reconsidering Evidence-based Policy: key issues and challenges’, Policy
and Society 29(2): 77-94.
Henry, G.T. and Mark, M.M. (2003) ‘Beyond use: Understanding evaluation's influence on
attitudes and actions’, American Journal of Evaluation 24(3): 293–314.
Huberman, M. (1990) ‘Linkages between researchers and practitioners: A qualitative study’,
American Educational Research Journal 27(2): 363-391.
Jacobson, N., Butterill, D. and Goering, P. (2004) 'Organisational Factors that Influence
University-Based Researchers’ Engagement in Knowledge Transfer Activities’,
Science Communication 25 (3): 246-259.
Knott, J. and Wildavsky, A. (1980) 'If dissemination is the solution, what is the problem?',
Knowledge: Creation, Diffusion, Utilisation 1(4): 537-578.
Landry, R., Amara, N. and Lamari, M. (2001a) ‘Climbing the ladder of research utilisation:
Evidence from social science research’, Science Communication 22(4): 396-422.
Landry, R., Amara, N. and Lamari, M. (2001b) ‘Utilisation of social science research
knowledge in Canada’, Research Policy 30(2): 333-349.
Larsen, J.K. (1982) Information Utilisation and Non-Utilisation. Palo Alto, CA: American
Institutes for Research in the Behavioral Sciences.
Lester, J.P. (1993) 'The utilisation of policy analysis by state agency officials', Knowledge:
Creation, Diffusion, Utilisation 14(3): 267-290.
Lewis, J.M. and Ross, S. (2011) ‘Research funding systems in Australia, New Zealand and
the UK: policy settings and perceived effects’, Policy and Politics 39(3): 379-398.
Lomas, J. (2000) ‘Using ‘Linkage and Exchange’ to Move Research into Policy at a
Canadian Foundation’, Health Affairs 19(3): 236-240.
Molas-Gallart, J., Tang, P. and Morrow, S. (2000) 'Assessing the non-academic impact of
grant-funded socio-economic research: results from a pilot study', Research
Evaluation 9(3): 171-182.
Monaghan, M. (2011) Evidence versus politics. Exploiting research in UK drug policy
making? Bristol: Policy Press.
Nutley, S., Walter, I. and Davies, H. (2007) Using evidence: How research can inform public
services. Bristol: Policy Press.
Ouimet, M., Landry, R., Siam, S. and Bedard, P. (2009) ‘The absorption of research
knowledge by public civil servants’, Evidence and Policy 5(4): 331-350.
Oh, C.H. and Rich, R.F. (1996) ‘Explaining use of information in public policymaking’,
Knowledge and Policy 9(1): 3–35.
Rich, R.F. and Oh, C.H. (2000) ‘Rationality and use of information in policy decisions: A
search for alternatives’, Science Communication 22(2): 173-211.
Sue, V.R. (2007) Conducting Online Surveys. Los Angeles: Sage Publications.
Smith, S., Ward, V. and House, A. (2011) '"Impact" in the proposals for the UK's Research
Excellence Framework: Shifting the boundaries of academic autonomy',
Research Policy 40(10): 1369-1379.
Weiss, C.H. (1979) ‘The many meanings of research utilisation’, Public Administration
Review 39(5): 426-431.
Weiss, C.H. (1980) ‘Knowledge Creep and Decision Accretion’, Science Communication
1(3): 381-404.
Weiss, C.H. and Bucuvalas, M. (1980) Social Science Research and Decision-Making. New
York: Columbia University Press.
Table 1. Research Utilisation Scale

Transmission: I transmit my research results to end-users
Cognition: My research reports have been read and understood by end-users
Reference: My work has been cited in reports and strategies by end-users
Effort: Efforts were made to adopt the results of my research by end-users
Influence: My research results have influenced the choices and decisions of end-users
Application: My research has been applied by end-users
Figure 1. Professional Profile (n = 693). Distribution of respondents by academic level: Professor Level E 47.6 percent, Associate Professor/Reader Level D 24.1 percent, Senior Lecturer Level C 20.1 percent, with the remainder comprising Lecturer Level B, Lecturer Level A and Other. [Bar chart not reproduced.]
Figure 2. Major research discipline identified (n = 693). [Bar chart not reproduced.]
Figure 3. Number of academic researchers climbing the echelons of the ladder of knowledge utilisation – progression is subject to passing previous echelons

Echelon          Pass   Fail
Transmission      611     82
Cognition         582     29
Reference         539     43
Effort            487     52
Influence         465     22
Application       455     10
Table 2. Proportion of academic respondents at each stage of the research utilisation scale

                  n       %
No echelon       82    11.8
Transmission     29     4.2
Cognition        43     6.2
Reference        52     7.5
Effort           22     3.2
Influence        10     1.4
Application     455    65.7
Total           693   100.0
Figure 4. Number of academic researchers passing each stage of research utilisation – failure in one stage does not preclude passing subsequent stages

Stage            Pass   Fail
Transmission      611     82
Cognition         606     87
Reference         571    122
Effort            524    169
Influence         519    174
Application       521    172
Table 3. Internal reliability coefficients (Cronbach's alpha) for variables

Name of variable                                                             Number of items   Cronbach's alpha
RU Index                                                                            6                0.91
Researchers' Context
  Research targeted to user                                                         4                0.72
  Importance of academic funding                                                    2                0.30
  Importance of other funding                                                       5                0.82
  Benefits of collaborative research                                               10                0.93
  Barriers academics experience in the transfer & uptake of their research          5                0.76
  Consequences of investing in research partnerships                               10                0.89
  Problems relating to the orientation of research partnerships                     5                0.85
User's Context
  End-users prioritise high quality research                                        7                0.78
  End-users prioritise the useability of the research                               4                0.78
  End-users prioritise the feasibility of the research                              3                0.69
Dissemination
  Importance of tailoring research when end-users are the focus                     7                0.94
  Importance of meetings & dissemination activities with end-users                  4                0.95
  Importance of using the media to present research                                 3                0.69
Interactions
  Importance of using contacts, seminars and reports to present research
    to policy-makers and practitioners                                              6                0.88
  Importance of using contacts, seminars and reports to present research
    to private sector organisations                                                 3                0.89
Table 4. Means and standard deviations(a) – academic research

                                                                        Min    Max    Mean   Std. Err.
Research Utilisation Ladder                                               1      5    3.49      0.03
Researchers' Context
  Quantitative approach (Adopt quantitative approaches)                   0      1    0.53
  Quantitative approach (Seldom use quantitative approaches)              0      1    0.47
  Qualitative approach (Adopt qualitative approaches)                     0      1    0.60
  Qualitative approach (Seldom use qualitative approaches)                0      1    0.40
  Research targeted to user                                               1      4    2.26      0.02
  Importance of academic funding                                          0      5    3.84      0.03
  Importance of other funding                                             0      5    2.81      0.04
  Benefits of collaborative research                                      0      5    3.17      0.04
  Barriers academics experience in transfer & uptake of research          0      5    3.85      0.03
  Consequences of investing in research partnerships                      0      5    3.49      0.03
  Problems relating to the orientation of research partnerships           0      5    2.57      0.03
  Research Time (teaching and research position)                          0      1    0.65
  Research Time (research only positions)                                 0      1    0.35
  Number of external partnerships                                         0    184    8.77      0.49
User's Context
  End-users prioritise high quality research                              1      5    3.72      0.02
  End-users prioritise the useability of the research                     1      5    4.60      0.02
  End-users prioritise the feasibility of the research                    1      5    3.83      0.03
Dissemination
  Importance of tailoring research when end-users are the focus           0      5    3.85      0.04
  Importance of meetings & dissemination activities with end-users        0      5    3.77      0.04
  Importance of using the media to present research                       0      5    3.17      0.04
  Importance of using refereed publications to present research           0      5    4.60      0.03
Interactions
  Importance of using contacts, seminars and reports to present
    research to policy-makers and practitioners                           0      5    3.51      0.04
  Importance of using contacts, seminars and reports to present
    research to private sector organisations                              0      5    2.66      0.05
  Number of external grants                                               0     67    8.55      0.32
  Discipline (Education)                                                  0      1    0.23
  Discipline (Economics)                                                  0      1    0.15
  Discipline (Sociology)                                                  0      1    0.13
  Discipline (Political Science)                                          0      1    0.11
  Discipline (Psychology)                                                 0      1    0.15
  Discipline (Other disciplines)                                          0      1    0.23
  Home Institution (Affiliation with Go8 University)                      0      1    0.51
  Home Institution (Affiliation with non Go8 University)                  0      1    0.49
  Position type (Level D or E)                                            0      1    0.72
  Position type (Levels A-C)                                              0      1    0.28

a. Standard deviations only reported for continuous measures.
Table 5. Multiple linear regression equations predicting utilisation of academic research

                                                                             β         SE
Researchers' Context
  Adopt quantitative approaches                                            0.11**    (0.05)
  Adopt qualitative approaches                                            -0.01      (0.06)
  Research targeted to user                                                0.16***   (0.05)
  Importance of academic funding                                          -0.09***   (0.03)
  Importance of other funding                                             -0.09***   (0.03)
  Benefits of collaborative research                                       0.13***   (0.03)
  Barriers academics experience in transfer & uptake of research          -0.00      (0.04)
  Consequences of investing in research partnerships                      -0.01      (0.05)
  Problems relating to the orientation of research partnerships           -0.03      (0.04)
  Teaching & research position                                            -0.05      (0.05)
  Number of external partnerships                                          0.00      (0.00)
  [unlabelled row in source]                                               0.11**    (0.05)
User's Context
  End-users prioritise high quality research                               0.15***   (0.04)
  End-users prioritise the useability of the research                      0.14**    (0.06)
  End-users prioritise the feasibility of the research                    -0.11***   (0.03)
Dissemination
  Importance of tailoring research when end-users are the focus            0.07**    (0.03)
  Importance of meetings & dissemination activities with end-users         0.08**    (0.03)
  Importance of using the media to present research                        0.05*     (0.03)
  Importance of using refereed publications to present research           -0.01      (0.03)
Interactions
  Importance of using contacts, seminars and reports to present
    research to policy-makers and practitioners                            0.08**    (0.04)
  Importance of using contacts, seminars and reports to present
    research to private sector organisations                              -0.02      (0.02)
  Number of external grants                                                0.01**    (0.00)
  Education                                                                0.06      (0.07)
  Economics                                                               -0.11      (0.08)
  Sociology                                                                0.06      (0.08)
  Political Science                                                       -0.12      (0.08)
  Psychology                                                              -0.02      (0.08)
  Affiliation with Go8 University                                         -0.03      (0.05)
  Level D or E                                                             0.06      (0.05)
  Constant                                                                 1.58***   (0.31)
Observations                                                                666
Adjusted R²                                                                0.41
Standard errors in parentheses
* p < 0.10, ** p < 0.05, *** p < 0.01