Title: Measuring Transfer of Training: Employing the Levels of Use Inventory
Name of author(s)/Organisation affiliation/position(s):
Marijke Thamm Kehrhahn, Associate Professor, University of Connecticut,
USA
Alexandra A. Bell, Associate Professor, University of Connecticut, USA
Address:
Department of Educational Leadership
249 Glenbrook Rd Unit 3093
University of Connecticut
Storrs, CT 06269 USA
Corresponding Author Email address: Marijke.Kehrhahn@uconn.edu
Stream:
Assessment, measurement, and evaluation of HRD
Submission type: Working paper
Abstract
Purpose: HRD practitioners need valid and reliable methods to assess learners’ transfer of
training, while researchers need transfer measures to generate accurate models of the transfer
process. This paper critiques current transfer research, and describes the Levels of Use (LoU)
Inventory as a viable measure of transfer for researchers and practitioners.
Method: By examining analytical reviews of transfer research conducted between 1992 and 2008,
we identified transfer measurement issues and recommendations. We critiqued the ways and
extent to which research published since 2008 has addressed the recommendations.
Findings: We identified four transfer measurement issues. Current researchers have not
addressed these issues adequately. Contrary to recommendations, many researchers continue
to conceptualize transfer as an outcome of training and gather transfer data at one point in
time, from one source, in one way. One measure, the LoU Inventory, shows promise in
conceptualizing and assessing transfer in ways consistent with recommendations.
Implications for Research: Research is needed to establish validity and reliability of
different LoU Inventory formats, and to assess how the Inventory can promote learner
metacognitive knowledge and self-regulation in transfer.
Implications for Practice: The Inventory can provide HRD professionals, supervisors,
learners, and peers with rich information about the nuances of transfer over time at both
individual and group levels.
Significance: HRD practitioners have very few options for transfer measures that enable
gathering data from multiple stakeholders, provide reliable data on transfer efforts, utilize
employee time economically, and effectively inform transfer support efforts. The LoU
Inventory has the potential to fill this gap.
Keywords: Transfer of Training, Assessment
Measuring Transfer of Training: Employing the Levels of Use Inventory
Transfer of training, the application of newly acquired knowledge and skills to the job
in ways that enhance work performance, remains a central measure for evaluating the
effectiveness of HRD. Because HRD practitioners focus the vast majority of workplace
training and development activities on improving employee and organizational performance,
they need valid and reliable methods to assess learning transfer. To support practitioner
efforts, researchers need meaningful measures of transfer that generate accurate models of the
transfer process and its relationship to work performance. In this paper we identify and
critique transfer of training measures currently used by researchers and describe in detail the
Levels of Use Inventory as a viable measure of transfer for use by both researchers and
practitioners.
Problem Statement
In several current reviews of transfer of training research (e.g., Blume et al, 2010;
Burke & Hutchins, 2007; Salas et al, 2012), researchers have identified the need for further
development of transfer measures. Blume et al (2010), in their meta-analysis of transfer
research, noted that researchers continue to operationalize and measure transfer in a variety
of ways. Citing literature reviews by Baldwin and Ford (1988) and Ford and Weissbein
(1997), the authors acknowledged improvements in transfer research design, but also noted
continued need to refine transfer measures. To add to the challenge of assessing transfer,
some studies of learning transfer systems focused on relationships among variables purported
to influence transfer, but did not include measures of transfer itself (Burke & Hutchins, 2007).
Although researchers and practitioners recognize transfer as a multidimensional and
dynamic process, many continue to assess transfer as a one-time dichotomous (transfer/no
transfer) event. In concluding their respective reviews of transfer research, Salas et al (2012)
suggested multiple measures of training-related changes in knowledge and performance to
capture the influences of after-training variables on transfer, while Burke and Hutchins
(2007) recommended a shift toward capturing a variety of transfer indicators. Watkins, Lysø
and deMarrais (2011) noted the challenge of capturing transfer when training is focused on
general development of open-ended skills, such as leadership development, and
recommended a critical incident interview approach to provide more detailed data on how
participants’ post-training behavior is influenced by participation in training. Overall, HRD
researchers agree that much can be done to improve measurement of training transfer.
In addition to these issues, other weaknesses in transfer measurement exist. Most
notably, transfer measures may provide a snapshot of the extent to which employees are
using a new skill, but rarely provide information on the practices embedded in the transfer
process or the ways in which employees engage in the process of integrating new knowledge
and practices into their work. Overall, HRD practitioners have very few options for transfer
measures that enable gathering data from multiple stakeholders in the transfer process,
provide reliable data on transfer efforts, utilize employee time economically, and effectively
inform transfer support efforts.
Measurements of Transfer of Training: Current Literature
A number of analytical reviews of transfer research were published between 2007 and
2011 (Aguinis & Kraiger, 2009; Blume et al, 2010; Burke & Hutchins, 2007; Gegenfurtner,
2011; Grossman & Salas, 2011), and Educational Psychologist and Educational Research
Review published special issues on transfer of training in 2012 and 2013. Authors in these
reviews and special issues analyzed transfer research (1992-2008), provided critiques of
transfer measures (Blume, et al, 2010; Gegenfurtner, 2011), and made recommendations for
transfer research going forward (e.g., Grossman & Salas, 2011; de Grip & Sauermann, 2013;
Volet, 2013). A number of reviewers stated that future research requires a more explicit
discussion and focus on transfer measures, while others recommended a change in direction
to focus research more on illuminating the transfer process.
Integrative Critique of Transfer Measures
Through a review of the literature, we surfaced four transfer measurement issues.
First, generally, researchers have measured transfer in terms of newly acquired knowledge,
skills, and attitudes (KSAs), frequency of use, or the perceived effectiveness of using new
KSAs (Blume et al, 2010; Gegenfurtner, 2011); however, they often described transfer as
some variation of a “transfer/no transfer” or “high transfer, low transfer” dichotomy. These
measures and categorizations offer no insight into the actual process of transfer.
Second, in the vast majority of studies researchers measure transfer once, following
completion of the training—a method inconsistent with the understanding that sustaining the
use of a new skill over time is a critical transfer condition. Blume et al (2010) found that a
transfer measure was taken more than once in only 6 of the 93 studies they reviewed. The single
measure can capture a transfer “snapshot,” but cannot account for transfer initiation,
persistence, or maintenance that may occur outside the timeframe of the single point of
measure.
Third, transfer research is predominantly focused on identifying various systems
variables associated with transfer. Individuals in these systems-focused studies are depicted
as elements in a system that can be influenced by manipulating other elements in the system
to elicit specific transfer outcomes, with little attention to individual self-determination or
agency (Lobato, 2012). We found few studies that explicate the ways in which individuals
participate in the cognitive, behavioral, and metacognitive activities used to adapt learning to
action in the workplace.
Lastly, researchers frequently measure transfer as an outcome variable to measure
training effectiveness; we found few studies that used measures to illuminate the process of
transfer. This approach to transfer measurement leaves scholars with information about
whether or not individuals transferred the training, but with little insight into how transfer
occurred. As Baartman and de Bruijn (2011) suggested, “the learning processes toward
integration of KSAs largely remain a black box” (p.126). In summary, the majority of
researchers continue to measure transfer of training as a one-point-in-time outcome measure
of training effectiveness that provides little insight into the individual transfer process.
Trends in 2008-2015 Transfer Studies
Because the analytical reviews discussed above examined published research from
1992 to 2008, we reviewed studies published between 2009 and 2015, and examined
specifically the degree to which they replicated prior transfer measurement approaches or
implemented recommendations for advancing transfer measurement provided in the
analytical reviews.
We located and reviewed 20 studies of transfer of training conducted in actual
workplaces and published in English between 2009 and 2015. (See bold font entries in
References list.) The studies represent the work of researchers internationally. This body of
research reflects many of the same conceptual and methodological approaches used by
researchers prior to 2009. Progress in implementing recommendations for future research
offered by authors in analytical reviews has been slow. For example, among the 20 transfer
studies published since 2009, only 8 studies gathered transfer data from more than one source
and 2 studies used more than one measure of transfer—consistent with recommendations.
Although the frequency of use of newly acquired skills continues to be the predominant unit
of transfer measurement (8 studies), five studies reported data on the effectiveness of using
the new skills, and six studies used both types of measures.
Researchers have made modest progress in the area of extending the time frame for
transfer assessment, recognizing that transfer involves maintenance as well as initiation.
Among research conducted over the past 7 years, five studies focused on initiation measures
of transfer, assessing transfer immediately following the training or within the first 4 weeks,
while the large majority of studies (17) measured transfer after some time had passed (1
month to 1 year). Because so little is known about the transfer of training process, the point at
which initiation becomes maintenance is unclear.
Unfortunately, current researchers have not implemented many of the
recommendations for transfer research offered in the comprehensive analytical reviews.
Overwhelmingly, researchers continue to gather transfer data at only one point in time (18
studies). In the two studies where transfer data were collected at multiple points, Lau and
McLean (2013) used the same survey at 1 month, 6 months, and 1 year following training in
Malaysia, and Canadian researchers Taylor et al (2009) used a case study approach to gather
transfer data from multiple sources over several months. Researchers continue to
conceptualize transfer as a measure of training effectiveness (10 studies), and to use transfer
data to create a systems view of variables associated with transfer (10 studies).
We located three studies published between 2009 and 2015 that utilized transfer
measures designed in response to ongoing efforts to improve transfer research. A study of the
effectiveness of diversity training for university research assistants in the U.S. by Roberson et
al (2009) required participants to develop transfer plans and gathered data 4 weeks after
training completion to determine the extent to which participants were using the transfer
strategies they designed. Although the results do not provide details about participants’
experiences of implementing transfer, the conceptual framework highlights the importance of
planning, monitoring, and evaluating the transfer process, in addition to the application of
newly acquired KSAs.
Watkins et al (2011) used a more dynamic and developmental approach to training
evaluation through the use of critical incident interviews with participants, peers,
subordinates, and supervisors to identify individual and organizational change associated
with participation in leadership development programs in the U.S. and Norway. The resulting
case studies provided dynamic illustrations of ways participants applied and adapted
leadership concepts to their practice over time, and insights into how participants engaged
with others to translate what they had learned into appropriate action.
Lastly, Taylor et al (2009) conducted interviews and focus groups with program
participants, instructors, and workplace supervisors and generated field notes to develop
multi-site case studies to uncover characteristics of the transfer process of low-literacy adults
participating in an employment preparation program in Canada that included classroom
instruction, on-the-job internships, and employment. The researchers concluded that transfer
of learning efforts and success were linked to individual perceptions of the extent to which
skills learned could be useful across multiple life roles and the degree to which skills learned
were essential to work and life success. Participant efforts to transfer were linked also to
program instructors’ understanding that learning would happen over time and that
participants’ development of self-regulated learning strategies were essential for successful
transfer. Overall, Taylor et al provided an in-depth view of a learning transfer system over
time, with an emphasis on the experiences of the learners.
Recommendations for Future Transfer Research
Scholars currently engaged with analyzing transfer research make several
recommendations for improving transfer research. Grossman and Salas (2011) called for
more in-depth research that would provide the next layer of understanding of the transfer
phenomenon, while Blume et al (2010) identified the need for a focus on how different forms
and types of transfer measurement contribute to overall understanding of transfer. Burke and
Hutchins (2007) suggested that future research should “assess training transfer as a
multidimensional phenomenon with multilevel influences” (p.287), taking into account the
role of individual meta-cognition and self-regulation. Volet (2013) provided a number of
strategies for improving transfer measurement including determining what KSAs transfer,
how, when, and under what conditions, and examining person-environment dynamics in
transfer scenarios. Several researchers (e.g., Blume et al, 2010; Gegenfurtner, 2011; Volet,
2013) recommended using multiple data collection strategies and sources to triangulate
research findings. The challenge appears to be designing measures to capture transfer efforts
and outcomes over time without fatiguing participants while supporting strong response rates,
particularly in actual workplaces (Burke & Hutchins, 2007; de Grip & Sauermann, 2013;
Volet 2013). Optimally, measures of transfer provide information that can inform those
accountable for transfer—learners, managers, and HRD practitioners—about the design and
effectiveness of transfer interventions and supports (Aguinis & Kraiger, 2009; Grossman &
Salas, 2011) and inform learners themselves about their transfer processes and outcomes.
Editors of recent special issues focused on transfer of training suggested that
researchers consider new perspectives and models for understanding transfer; one oft-repeated
recommendation was to examine the transfer process and the individual's
engagement in transfer in more depth. Current transfer research fails to illuminate what
actually happens in the transfer process that results in improved performance; survey studies
provide generalized input/output data, and performance outcome measures can be used as
indicators of training effectiveness, but neither gives a glimpse into the “black box” (de Grip &
Sauermann, 2013, p.29).
The recent work of Billett (2013), Perkins and Salomon (2012), and others highlights the
importance of building an evidence-based understanding of cognitive, meta-cognitive, and
socio-cognitive engagement in the transfer process, aside from motivational, supervisory,
peer, training, and environmental influences. Researchers studying transfer in work settings
conclude that self-regulation and metacognitive knowledge are essential elements in
successful transfer, particularly in the absence of favorable transfer environments (e.g., Enos,
Kehrhahn & Bell, 2003).
Based on our extensive review of the literature, we propose that transfer of training be
researched more broadly: not only as the successful application of newly acquired knowledge
and skills, but also as the process through which employees plan, initiate, implement, and
adapt the knowledge and skills to their work. The following section of the paper provides
detailed information on a valid method to measure both.
Levels of Use Inventory
The Levels of Use (LoU) framework (Hall & Loucks, 1977; Hall & Hord, 2011) is
part of a larger learning and change model called the Concerns-Based Adoption Model
(CBAM). The CBAM model was initially developed to assist school leaders in supporting
educators’ use of innovative instructional methods following their participation in a
professional development program. Based on the premise that training does not automatically
lead to high-fidelity implementation of newly acquired knowledge and skills, the CBAM
model includes three essential assumptions. First, initiation and integration of new practices
into a pre-existing complex set of work behaviors is a process and not an event; movement in
the process can be captured as Levels of Use (LoU) of the new practices. Second, progress in
the transfer process depends on addressing employee concerns about the impact of transfer
efforts on their personal work life, concerns about how to use the skills, and concerns about
impact on organizational outcomes. Hall, George, and Rutherford (1977) called this part of
the model Stages of Concern. And third, newly acquired knowledge and skills are adapted
and configured to best fit the local context; therefore, transferred skills in practice may look
very different from one another and very different from what training program developers
intended. In their initial research (n = 800), Hall and Loucks (1977) found that no two
individuals were using the same form of the innovation, nor did they agree on operational
definitions. In this review, we focus on the Levels of Use element of the CBAM model.
The Levels of Use framework offers a view of transfer as a process, not an event. Hall
and Loucks (1978) described the transfer process as cumulative, uneven, gradual, and
complex and warned that single measures of transfer can miss the phenomenon altogether,
leading to under-estimation of training effectiveness. The LoU framework presents
implementation of new knowledge and skills as a result of a series of individual decisions
that help move the employee-learner from early stages of planning to transfer, through
mechanical integration of new skills into a pre-existing work repertoire, to routine
implementation, adaptation, and refinement. Specifically, the LoU Inventory (Hall & Hord,
2011) provides a set of behavioral profiles that distinguish different levels of transfer,
including three non-transfer profiles and five transfer profiles (see Table).
Table
Levels of Use Inventory

Non-Transfer Levels

Level 0 (Non-use): The learner has little or no knowledge of the innovation*, no involvement with the innovation, and is doing nothing to become involved.
Decision Point: Decides to take action to learn more about the innovation.

Level I (Orientation): The learner has acquired or is acquiring information about the innovation and/or has explored or is exploring its value orientation and its demands upon learner and learner system.
Decision Point: Decides to use the innovation by establishing a time to begin.

Level II (Preparation): The learner is preparing for first use of the innovation.
Decision Point: Decides to go ahead with implementation with the perception that personal needs/concerns have been/will be addressed.

Transfer Levels

Level III (Mechanical Use): The learner focuses most effort on the short-term, day-to-day use of the innovation with little time for reflection. Changes in use are made more to meet learner needs than client needs. The learner is primarily engaged in a stepwise attempt to master the tasks required to use the innovation, often resulting in disjointed and superficial use.
Decision Point: Decides that the innovation should become part of routine work practices.

Level IVA (Routine Use): Use of the innovation is stabilized. Few if any changes are being made in ongoing use. Little preparation or thought is being given to improving innovation use or its consequences.
Decision Point: Decides to modify the innovation to achieve better client outcomes.

Level IVB (Refinement): The learner varies the use of the innovation to increase the impact on clients within the immediate sphere of influence. Variations are based on knowledge of both short- and long-term consequences for clients.
Decision Point: Decides to modify the innovation based on input of and coordination with colleagues.

Level V (Integration): The learner is combining own efforts to use the innovation with related activities of colleagues to achieve a collective impact on clients within their common sphere of influence.
Decision Point: Decides to explore alternatives or major modifications of the innovation to substantially elevate outcomes.

Level VI (Renewal): The learner reevaluates the quality of use of the innovation, seeks major modifications or alterations to the present innovation to achieve increased impact on clients, examines new developments in the field, and explores new goals for self and the system.

Note: Adapted from G. E. Hall and S. F. Loucks (1977). A developmental model for determining whether the treatment is actually implemented. American Educational Research Journal, 14 (3), 263-276.
*Hall and Loucks (1978) defined innovation as a practice that is perceived as new to the individual and that is most often learned about through participation in formal training.
As shown in the table, transition from one Level of Use to the next depends on the
learner making a decision to move forward with transfer. For example, an employee at Level
0 (Non-Use) makes a decision to learn more about the new skills, perhaps by registering for
training or by discussing with colleagues, moving herself to Level I (Orientation). Likewise,
an employee at Level III (Mechanical Use) decides to persist with transfer efforts beyond
initiation, making a commitment to permanently change his practice, and moves to Level
IVA (Routine Use). According to Hall and Hord (2011), while the decision making process is
individual, HRD practitioners and supervisors, supplied with knowledge of current Level of
Use and Stages of Concern, can help employees move forward with the transfer process by
addressing concerns, encouraging goal setting, and facilitating decision making.
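To make this decision-driven view concrete, the sketch below represents the eight levels and the decision points between them as a simple Python structure. It is purely illustrative and ours, not part of Hall and Hord's published materials; the class name, field names, and the idea of logging decisions in a history list are assumptions for demonstration only.

```python
# Illustrative sketch only: the eight Levels of Use and the decision points
# between them, held in a simple structure for tracking one learner's progress.
# Names and logging behavior are our assumptions, not the published LoU protocol.
from dataclasses import dataclass, field

LEVELS = ["0 Non-use", "I Orientation", "II Preparation", "III Mechanical Use",
          "IVA Routine Use", "IVB Refinement", "V Integration", "VI Renewal"]

# DECISION_POINTS[i] is the decision that moves a learner from LEVELS[i] to LEVELS[i + 1]
DECISION_POINTS = [
    "Takes action to learn more about the innovation",
    "Decides to use the innovation by establishing a time to begin",
    "Goes ahead with implementation; needs and concerns addressed",
    "Decides the innovation should become part of routine work practices",
    "Decides to modify the innovation to achieve better client outcomes",
    "Decides to modify the innovation in coordination with colleagues",
    "Explores alternatives or major modifications to elevate outcomes",
]

@dataclass
class LearnerTransferRecord:
    name: str
    level_index: int = 0                       # start at Level 0 (Non-use)
    history: list = field(default_factory=list)

    def record_decision(self) -> str:
        """Advance one level when the learner makes the next decision."""
        if self.level_index >= len(LEVELS) - 1:
            return LEVELS[self.level_index]    # already at VI Renewal
        self.history.append(DECISION_POINTS[self.level_index])
        self.level_index += 1
        return LEVELS[self.level_index]

record = LearnerTransferRecord("employee_A")
print(record.record_decision())  # -> I Orientation
print(record.record_decision())  # -> II Preparation
```

A structure like this is only a bookkeeping aid; the substantive work of assigning a level rests on the interview and coding procedures described next.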
Administration of the LoU Inventory
Hall and colleagues developed the Levels of Use Inventory as a 30-minute interview
protocol with the learner, conducted by a trained administrator of the tool. The administrator
codes interviewee responses using a framework that delineates behavioral elements at each
level and places the interviewee at a specific Level of Use (Hall & Hord, 2011). Inter-rater
reliability across 1,381 cases ranged from .87 to .96, based on agreement on assigned level of use
by two coders listening to recorded interviews. A validity study compared individuals' (n = 45)
interview scores with ethnographer/observer scores based on one full day of observation
(r = .98) (Hall & Loucks, 1977). Further, Hall and Loucks (1978) reported
substantial variation across the eight levels with data collected 2-3 years after introduction of
the innovation (0 = 7%, I = 9%, II = 3%, III = 19%, IVA = 52%, IVB = 6 %, V = 3%, VI =
2%), demonstrating the Inventory’s usefulness in detecting variation in transfer efforts among
learners. Other study samples were similar in their distributions, with the largest percentage
of users consistently at Level III (Mechanical Implementation) and Level IVA (Routine
Implementation). Across studies, LoU researchers found that novice professionals tend to
stay at the Mechanical level of implementation for extended periods of time and that
individuals are most likely to abandon transfer efforts at this stage (Hall & Hord, 2011).
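As a rough illustration of the agreement check described above, the following sketch computes percent agreement between two hypothetical coders' level assignments. The data are invented, and this is not the authors' scoring procedure.

```python
# Hypothetical inter-coder agreement check: two trained coders independently
# assign a Level of Use to each recorded interview; agreement is the share of
# cases on which they assign the same level. Ratings below are invented.
coder_1 = ["III", "IVA", "IVA", "II", "III", "IVB", "IVA", "0"]
coder_2 = ["III", "IVA", "III", "II", "III", "IVB", "IVA", "0"]

matches = sum(a == b for a, b in zip(coder_1, coder_2))
percent_agreement = matches / len(coder_1)
print(f"Agreement on assigned level: {percent_agreement:.2f}")  # 0.88
```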
The education community continues to maintain high interest in the CBAM model 40
years after its initial development. The CBAM principles are central elements of the U.S.
standards for educator professional development, revised in 2011 (Learning Forward, 2015).
Hall (2013), in a Legacy Paper published by the Journal of Educational Administration,
highlighted the continued relevance of the LoU as a tool for HRD practitioners and
administrators supporting individual transfer efforts. He identified a gap in the research that
calls for longitudinal studies of transfer to provide a better in-depth understanding of
individual processes of change associated with learning and implementing new knowledge
and skills.
LoU as a Transfer Measure
In practice, administration of the LoU Inventory involves either a “branching
interview” or a more formalized “focused interview” (Hall & Hord, 2011) to obtain a detailed
description of an individual’s level of use of an innovation or “innovation bundle” across
seven different dimensions: Knowledge, Acquiring Information, Sharing, Assessing,
Planning, Status Reporting, and Performing. Researchers using this method frequently
include observation and review of documents to corroborate interview findings, as well as
methods to establish reliability and internal validity of LoU assessments. Repeating the
interview over time among many learners in an organization affords a nuanced assessment of
changes in use of innovations at both an individual and a system-wide level. The majority of
researchers using this method have assessed LoU among educators in either school settings (e.g.,
Hollingshead, 2009; Kong & Shi, 2009; Tunks & Weller, 2009) or higher education settings
(e.g., Folger & Williams, 2009; Hodges, 2014).
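The sketch below shows one hypothetical way a seven-dimension LoU profile could be recorded and summarized. The dimension names come from the text above; the 0-7 ordinal coding and the "lowest dimension" summary rule are our assumptions for illustration, not part of the published protocol.

```python
# Illustrative only: hold a learner's LoU rating on each of the seven
# dimensions and summarize an overall profile. The conservative "lowest
# dimension" rule is an assumption, not Hall and Hord's scoring rule.
LOU_LEVELS = ["0", "I", "II", "III", "IVA", "IVB", "V", "VI"]
DIMENSIONS = ["Knowledge", "Acquiring Information", "Sharing", "Assessing",
              "Planning", "Status Reporting", "Performing"]

def overall_level(profile: dict) -> str:
    """Summarize a seven-dimension profile by its lowest (most conservative) level."""
    codes = [LOU_LEVELS.index(profile[d]) for d in DIMENSIONS]
    return LOU_LEVELS[min(codes)]

interviewee = {
    "Knowledge": "IVA", "Acquiring Information": "III", "Sharing": "III",
    "Assessing": "IVA", "Planning": "III", "Status Reporting": "IVA",
    "Performing": "III",
}
print(overall_level(interviewee))  # -> III
```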
Other researchers have conducted either branching or formalized interviews, with or
without corroborating data and validation efforts, focused on the performance dimension of
use to obtain an overall profile of an individual’s LoU. While this method has the advantage
of being less time consuming than the comprehensive method, it provides a less detailed
assessment of ways in which learners use different dimensions of an innovation and transfer
different aspects of training. This method is also common in studies conducted in school settings (e.g.,
Saylor, 1998; Rout et al, 2010; Wang, 2014) and higher education settings (Olafson et al,
2005).
A study by Saylor (1998), in which the LoU Inventory interview was modified to a
written open-ended format, demonstrated that learners are able to self-assess their LoU with
the same level of accuracy as expert evaluators. In her study of 68 middle school teachers
who completed training in educational technologies at the beginning of the school year,
participants were given a week to respond on their own to a written version of the branching
interview near the end of the school year. Teachers’ assessments were then evaluated by the
researcher and two expert reviewers, and corroborated by a district technology expert’s rating
of each teacher’s proficiency using technology at the end of the school year. The teachers’
self-ratings and evaluators’ classifications as users or nonusers were perfectly consistent.
Although Hall and Hord (2011) state, “it is not possible to measure LoU with
questionnaires and online surveys” (p. 287), many researchers have used quantitative
methods to assess LoU. These efforts reflect researchers' appreciation for the significance of
the LoU construct, balanced against methodological constraints such as restricted access to
learners, large sample sizes, and limited funding, as well as a desire to use statistical approaches
to explore multivariate relationships. The body of studies in which researchers have quantified
LoU is a testament to the English proverb, “Necessity is the mother of invention.” Most
researchers (e.g., Fitzgerald, 2002; LaRocco & Wilken, 2013; Myers, 2009; Weber, 2013)
quantified LoU using an 8-point ordinal scale, with one value on the scale for each of the
eight levels of use. Unfortunately, very few researchers (e.g., Roberts et al, 1997) reported
using methods to assess the reliability and internal validity of responses using these scales.
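For researchers who do quantify LoU on an 8-point ordinal scale, one option for reporting reliability (our suggestion, not a method reported in the cited studies) is a weighted Cohen's kappa, which gives partial credit for near-misses between raters on an ordinal scale. A minimal sketch with invented ratings follows.

```python
# Hedged sketch: quadratically weighted kappa for two raters' ordinal LoU codes
# (0 = Non-use ... 7 = Renewal). Ratings are invented for illustration.
from sklearn.metrics import cohen_kappa_score

rater_a = [3, 4, 4, 2, 3, 5, 4, 0, 3, 4]
rater_b = [3, 4, 3, 2, 3, 5, 4, 1, 3, 4]

kappa = cohen_kappa_score(rater_a, rater_b, weights="quadratic")
print(f"Quadratically weighted kappa: {kappa:.2f}")
```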
Saylor (1998) highlighted how quantifying LoU can illuminate trends and
relationships that qualitative methods cannot. Saylor used a discriminant function analysis to
predict variance in Use/Nonuse of technology among middle school teachers based on
individual and environmental support variables 5 months after completing training. Four
factors (teacher efficacy, social support, motivation to transfer, and age) explained 29% of
the variance in Use/Nonuse, and the model classified 87% of participants correctly.
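The sketch below illustrates the general form of such an analysis, fitting a linear discriminant classifier to hypothetical data; it is not Saylor's dataset or exact procedure, and the predictor names are taken from the text only as placeholders.

```python
# Hypothetical sketch of a discriminant-analysis style prediction of
# Use (1) vs. Nonuse (0) from four predictors. Data are simulated.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
n = 68
# Columns stand in for teacher efficacy, social support, motivation to transfer, and age
X = rng.normal(size=(n, 4))
# Simulated Use/Nonuse outcome loosely related to the predictors
y = (X @ np.array([0.8, 0.6, 0.7, -0.2]) + rng.normal(scale=1.0, size=n) > 0).astype(int)

lda = LinearDiscriminantAnalysis().fit(X, y)
print(f"Classification accuracy on the sample: {lda.score(X, y):.2f}")
```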
Given the efficiency of assessing LoU quantitatively, it is not surprising that researchers
have used this approach to assess transfer in settings outside of education. Fitzgerald
(2002) assessed transfer of training and transfer climate factors among 33 direct service staff
at a U.S. state mental health organization engaged in training on ethical decision-making. At
4 months post-training, the LoU change scores revealed significant increases in transfer among
members of the intervention group, a reflection of procedural knowledge gains, whereas
knowledge gain scores did not significantly increase. Similar findings, indicating that the LoU
was a more sensitive assessment of changes in transfer behaviors than declarative knowledge
scores, were demonstrated by Myers (2009) in a study of 53
personnel in a U.S. health care organization participating in training on managing a
harassment-free workplace.
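A minimal sketch of how LoU change scores of this kind might be computed and tested is shown below; the 0-7 coding, the invented data, and the choice of a Wilcoxon signed-rank test are our assumptions rather than the procedures reported in these studies.

```python
# Hypothetical pre/post LoU change scores (levels coded 0-7) with a
# nonparametric paired test, since the scale is ordinal. Data are invented.
from statistics import median
from scipy.stats import wilcoxon

pre  = [1, 2, 2, 1, 3, 2, 1, 2, 3, 2, 1, 2]   # LoU before the intervention
post = [3, 3, 4, 2, 4, 4, 3, 3, 4, 3, 2, 4]   # LoU at follow-up

change = [b - a for a, b in zip(pre, post)]
stat, p = wilcoxon(pre, post)
print(f"Median change: {median(change)}, Wilcoxon p = {p:.3f}")
```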
Researchers using either qualitative or quantitative assessment of LoU consistently
demonstrate that the LoU framework is sensitive in describing variability in use across
learners who participate in the same training (e.g., Folger & Williams, 2009; LaRocco &
Wilken, 2013; Olafson et al, 2005; Rout et al. 2010), and in describing changes in use over
time (e.g., Hodges, 2014; Kong & Shi, 2009; Olafson et al, 2005; Tunks & Weller, 2009). In
many studies, HRD administrators or school leaders used LoU outcomes to inform training
design and interventions for individuals or groups. The LoU framework also shows great
promise as a resource for learners to directly plan, monitor, and self-assess their own learning,
and for development of professional learning communities. In an innovative application of
the LoU framework, Orr and Mrazek (2009) developed an online survey whereby graduate
students enrolled in an educational technology course assessed their level of adoption for 20
different educational/instructional technologies. Learners completed the survey three times:
before the semester, at semester's end, and 4 months after semester's end. Individual and
aggregate data were available to all learners and various visual displays portrayed individual
and group adoption patterns. Learners used the data to promote reflection on use of
technologies, personalize learning goals, plan learning, and self-assess learning processes and
outcomes. The data also became a focal point for establishing a supportive community of
learners.
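The sketch below illustrates, with invented data and field names, the kind of aggregation such repeated self-ratings afford: adoption levels reported at three time points and summarized for a learner group (this is not Orr and Mrazek's instrument or data).

```python
# Invented example: count how many learners report each adoption level at
# each of three survey administrations, echoing the repeated-measures design
# described above.
from collections import Counter

responses = {
    "pre-semester":   ["0", "I", "I", "II", "0", "I"],
    "semester end":   ["II", "III", "III", "IVA", "II", "III"],
    "4 months later": ["III", "IVA", "IVA", "IVA", "III", "IVB"],
}

for time_point, levels in responses.items():
    counts = Counter(levels)
    summary = ", ".join(f"{lvl}: {n}" for lvl, n in sorted(counts.items()))
    print(f"{time_point:>14} -> {summary}")
```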
Our review of studies using the LoU Inventory indicates that as a measure of transfer
it has the potential to address many of the recommendations for transfer research identified
in recent analytical reviews. When administered via interview and customized to a specific
innovation configuration (Hall & Hord, 2011), the LoU provides a detailed profile of
individual transfer behaviors across multiple dimensions of use, including knowledge,
planning, assessment, and performance. When repeated over time, it provides a nuanced
description of how individuals change behaviors. In tandem with assessment of
environmental factors, it contributes to understanding the person-environment dynamics in a
transfer scenario. The LoU also shows promise as a means to assess and support learner
metacognitive knowledge and self-regulation by providing feedback about transfer efforts
and outcomes and serving as a guide for planning learning.
Implications for Future Research with the LoU Inventory
The affordances of the LoU Inventory as an assessment of transfer make it a viable
tool for researchers engaged in efforts to enhance HRD practice through scholarly
examination in field settings. Based on our critique of studies using the LoU, research efforts
targeting the following questions will expand its utility as a viable measure of transfer
processes and outcomes over time, and contribute to evidence-based practice by HRD
professionals:
• How do level of use outcomes compare across different LoU formats (e.g., branching interview, focused interview, quantitative scale survey, and self-administered open-ended questions)? What are the psychometric advantages and disadvantages of each format for researchers and practitioners?
• How can multiple stakeholders (e.g., learners, peers, supervisors, and HRD evaluators) assess levels of use in survey format? How can inter-rater reliability among multiple stakeholders be established?
• How does the LoU Inventory promote learner metacognitive knowledge and self-regulation in transfer? How can the Inventory promote professional learning among peers?
• What cultural and social factors need to be considered in using the LoU Inventory? What is the relevance and utility of the Inventory as a research instrument and tool for HRD practitioners internationally?
Implications for Practice
The Levels of Use framework provides an actionable conceptualization of transfer of
training, and the instrument provides relevant data practitioners can use to measure and
support transfer. Our work with the LoU Inventory has shown that the concepts resonate with
employees, particularly with supervisors, managers, and HRD professionals, and can be
used productively in work settings. Specifically, we recommend the following applications:
• Present the LoU framework to learners as part of a training program to support transfer planning and implementation.
• Include a module on the LoU framework in supervisory/management training to build supervisors' understandings of and capacity for supporting transfer.
• Include a module on the LoU framework in HRD professional development and degree programs to build comprehensive understandings and skills for designing, measuring, and supporting transfer.
Because of the time-consuming nature of data collection, we do not recommend using
the LoU Inventory interview in each and every workplace training scenario. We believe,
however, that it has value for use by HRD practitioners and managers in the following
ways:
• Use the LoU Inventory interview as a tool to help HRD practitioners and managers become more familiar with the process, variations, and viewpoints on transfer in their settings.
• Use the LoU Inventory interview as an action research tool to illuminate transfer efforts, provide insight into the design of interventions to enhance transfer, and gather follow-along data as interventions are implemented.
• Use the LoU Inventory interview to measure transfer progress in circumstances in which transfer is paramount.
References
Aguinis, H. & Kraiger, K. (2009) Benefits of training and development for individuals and
teams, organizations, and society. Annual Review of Psychology, 60 (1), 451-474.
Baartman, L. K. J. & de Bruijn, E. (2011) Integrating knowledge, skills and attitudes:
conceptualising learning processes towards vocational competence. Educational Research
Review, 6 (2), 125-134.
Baldwin, T. & Ford, J. (1988) Transfer of training: A review and directions for future
research. Personnel Psychology, 41 (1), 63-105.
Billett, S. (2013). Recasting transfer as a socio-personal process of adaptable learning.
Educational Research Review, 8, 5–13.
Blume, B., Ford, J., Baldwin, T. & Huang, J. (2010) Transfer of training: a meta-analytic
review. Journal of Management, 36 (4), 1065-1105.
Brennan, P. C., Madhavan, P., Gonzalez, C. & Lacson, F. C. (2009) The impact of
performance incentives during training on transfer of learning. In Proceedings of the Human
Factors and Ergonomics Society Annual Meeting, 53 (26), 1979-1983.
Brown, T. C. & Warren, A. W. (2014) Evaluation of transfer of training in a sample of
union and management participants: a comparison of two self-management techniques.
Human Resource Development International, 17 (3), 277-297.
Brown, T. C., McCracken, M. & Hillier, T. (2013) Using evidence-based practices to
enhance transfer of training: assessing the effectiveness of goal-setting and behavioral
observation scales. Human Resource Development International, 16 (4), 374-389.
Brown, T., McCracken, M. and O'Kane, P. (2011) ‘Don't forget to write’: how reflective
learning journals can help to facilitate, assess and evaluate training transfer. Human
Resource Development International, 14 (4), 465-481.
Burke, L. & Hutchins, H. (2007) Training transfer: an integrative literature review. Human
Resource Development Review, 6 (3), 263-296.
Chen, G., Thomas, B. & Wallace, J. (2005) A Multilevel examination of the relationships
among training outcomes, mediating regulatory processes, and adaptive performance.
Journal of Applied Psychology, 90 (5), 827-841.
Davidson, M. L. (2014) A criminal justice system-wide response to mental illness:
evaluating the effectiveness of the Memphis Crisis Intervention Team training
curriculum among law enforcement and correctional officers. To be published in
Criminal Justice Policy Review. [Preprint] Available from:
http://cjp.sagepub.com/content/early/2014/10/23/0887403414554997.full.pdf+html
[Accessed: 3rd February 2015].
De Grip, A. & Sauermann, J. (2013) The effect of training on productivity: the transfer of on-the-job training from the perspective of economics. Educational Research Review, 8, 28-36.
Enos, M., Kehrhahn, M. & Bell, A. (2003) Informal learning and the transfer of learning:
how managers develop proficiency. Human Resource Development Quarterly, 14 (4), 369-387.
Fitzgerald, C. G. (2002) Transfer of training and transfer climate: The relationship to the use
of transfer maintenance strategies in an autonomous job context. Ph.D Thesis, University of
Connecticut, USA.
Folger, T. S. & Williams, M. K. (2009) Filling the gap with technology innovations:
Standards, curriculum, collaboration, success. Journal of Computing in Teacher Education,
23 (3), 107-115.
Ford, J. & Weissbein, D. (1997) Transfer of training: an updated review and analysis.
Performance Improvement Quarterly, 10 (2), 22-41.
Frash, R., Antun, J., Kline, S. & Almanza, B. (2010) Like It! Learn It! Use It?: a field
study of hotel training. Cornell Hospitality Quarterly, 51 (3), 398-414.
Gegenfurtner, A. (2011) Motivation and transfer in professional training: a meta-analysis of
the moderating effects of knowledge type, instruction, and assessment conditions.
Educational Research Review, 6 (3), 153-168.
Grossman, R. & Salas, E. (2011) The transfer of training: what really matters. International
Journal of Training and Development, 15 (2), 103-120.
Hall, G. E., George, A. A. & Rutherford, W. L. (1979) Measuring Stages of Concern about
the innovation: a manual for use of the SoC Questionnaire (Report No. 3032). Austin, Texas,
The Research and Development Center for Teacher Education, University of Texas.
Hall, G. E. & Hord, S. M. (2011) Implementing change: patterns, principles, and potholes.
3rd ed. Upper Saddle River, NJ, Pearson.
Hall, G. E. & Loucks, S. F. (1977) A developmental model for determining whether the
treatment is actually implemented. American Educational Research Journal, 14 (3), 263-276.
Hall, G. & Loucks, S. (1978) Innovation configurations: analyzing the adaptations of
innovations. Austin, Texas, The Research and Development Center for Teacher Education,
University of Texas.
Hodges, J. M. O. (2014) An examination of Stage of Concern, Levels of Use, and tutorials on
faculty members’ implementation of a learning management platform. Ph.D Thesis,
University of South Alabama, USA.
Hollingshead, B. (2009) The concerns-based adoption model: A framework for examining
implementation of a character education program. NASSP Bulletin, 93 (3), 166-183.
Hanover, J. & Cellar, D. (1998) Environmental factors and the effectiveness of workforce
diversity training. Human Resource Development Quarterly, 9 (2), 105-124.
Kazbour, R., McGee, H., Mooney, T., Masica, L. & Brinkerhoff, R. (2013) Evaluating
the Impact of a Performance-Based Methodology on Transfer of Training. Performance
Improvement Quarterly, 26 (1), 5-33.
Kong, F. & Shi, N. (2009) Process analysis and level measurement of textbook use by
teachers. Frontiers of Education in China, 4 (2), 268-285.
Kupritz, V. & Hillsman, T. (2011) The impact of the physical environment on
supervisory communication skills transfer. Journal of Business Communication, 48 (2),
148-185.
Ladyshewsky, R. & Flavell, H. (2011) Transfer of training in an academic leadership
development program for program coordinators. Educational Management
Administration & Leadership, 40 (1), 127-147.
LaRocco, D. J., & Wilken, D. S. (2013) Universal design for learning: university faculty
Stages of Concern and Levels of Use—A faculty action-research project. Current Issues in
Education, 16 (1). Available from: http://cie.asu.edu/ojs/index.php/cieatasu/article/view/1132
[Accessed 2nd February 2015].
Lau, P. & McLean, G. (2013) Factors influencing perceived learning transfer of an
outdoor management development programme in Malaysia. Human Resource
Development International, 16 (2), 186-204.
Learning Forward. (2015) Standards for professional learning. [Online] Available from:
http://learningforward.org/standards-for-professional-learning [Accessed 20th February
2015]
Lobato, J. (2012) The actor-oriented transfer perspective and its contributions to educational
research and practice. Educational Psychologist, 47 (3), 232-247.
Millar, P. & Stevens, J. (2012) Management training and national sport organization
managers: examining the impact of training on individual and organizational
performances. Sport Management Review, 15 (3), 288-303.
Myers, M. J. M. (2009) Transfer of learning from a training program to the workplace in a
university healthcare organization setting. Ph.D Thesis, University of Connecticut, USA.
Nielsen, K., Randall, R. & Christensen, K. B. (2010) Does training managers enhance
the effects of implementing team-working? A longitudinal, mixed methods field study.
Human Relations, 63 (11), 1719-1741.
Nikandrou, I., Brinia, V. & Bereri, E. (2009) Trainee perceptions of training transfer:
an empirical analysis. Journal of European Industrial Training, 33 (3), 255-270.
Olafson, L., Quinn, L. F. & Hall, G. E. (2005) Accumulating gains and diminishing risks
during the implementation of best practices in a teacher education course. Teacher Education
Quarterly, 32 (3), 93-106.
Orr, D. & Mrazek, R. (2009) Developing the Level of Adoption survey to inform
collaborative discussion regarding educational innovation. Canadian Journal of Learning
and Technology, 35 (2). Available from:
http://www.cjlt.ca/index.php/cjlt/article/view/522/255 [Accessed 2nd February 2012].
Perkins, D. & Salomon, G. (2012) Knowledge to go: a motivational and dispositional view of
transfer. Educational Psychologist, 47 (3), 248-258.
Perry, E., Kulik, C. & Bustamante, J. (2012) Factors impacting the knowing-doing gap in
sexual harassment training. Human Resource Development International, 15 (5), 589-608.
Roberson, L., Kulik, C. & Pepper, M. (2009) Individual and environmental factors
influencing the use of transfer strategies after diversity training. Group & Organization
Management, 34 (1), 67-89.
Roberts, G., Becker, H. & Seay, P. (1997) A process for measuring adoption of innovation within the
supports paradigm. Journal of the Association for Persons with Severe Handicaps, 22 (2), 109-119.
Rodriguez, B. C. P. & Armellini, A. (2013) Interaction and effectiveness of corporate e-learning programmes. Human Resource Development International, 16 (4), 480-489.
Rout, K., Priyadarshani, N., Hussin, Z., Pritinanda, A., Mamat, W. H. B. & Zea, G. L. (2010)
Implementation of new sixth form geography curriculum: concerns and levels of use of
teachers in Malaysia. International Journal of Educational Administration, 2 (1), 63-72.
Salas, E., Tannenbaum, S., Kraiger, K. & Smith-Jentsch, K. (2012) The science of training
and development in organizations: what matters in practice. Psychological Science in the
Public Interest, 13 (2), 74-101.
Saylor, P. R. (1998) Transfer management interventions: Environmental influences and
individual characteristics that affect implementation of staff development initiatives. Ph.D
Thesis, University of Connecticut, USA.
Schindler, L. & Burkholder, G. (2014) A mixed methods examination of the influence of
dimensions of support on training transfer. Journal of Mixed Methods Research.
[Preprint] Available from:
http://mmr.sagepub.com/content/early/2014/11/06/1558689814557132.full.pdf+html
[Accessed: 3rd February 2015].
Strickland, D., Coles, C. & Southern, L. (2013) JobTIPS: a transition to employment
program for individuals with autism spectrum disorders. Journal of Autism
Developmental Disorders, 43 (10), 2472-2483.
Taylor, M., Ayala, G. & Pinsent‐Johnson, C. (2009) Understanding learning transfer in
employment preparation programmes for adults with low skills. Journal of Vocational
Education & Training, 61 (1), 1-13.
Tews, M. & Tracey, J. (2009) Helping managers help themselves: the use and utility of
on-the-job interventions to improve the impact of interpersonal skills training. Cornell
Hospitality Quarterly, 50 (2), 245-258.
Tunks, J. & Weller, K. (2009) Changing practice, changing minds, from arithmetical to
algebraic thinking: an application of the concerns-based adoption model (CBAM).
Educational Studies in Mathematics, 72 (2), 161-183. Available from:
http://www.jstor.org/stable/40284616 [Accessed 2nd February 2015].
Turcotte, D., Lamonde, G. & Beaudoin, A. (2013) Evaluation of an in-service training
program for child welfare practitioners. Research on Social Work Practice, 19 (1), 31-41.
Volet, S. (2013) Extending, broadening and rethinking existing research on transfer of
training. Educational Research Review, 8, 90-95.
Wang, W. (2014) Teachers' Stages of Concern and Levels of Use of a curriculum innovation in
China. International Journal of English Language Teaching, 1 (1), 22-31.
Watkins, K., Lyso, I. & deMarrais, K. (2011) Evaluating executive leadership
programs: a theory of change approach. Advances in Developing Human Resources, 13
(2), 208-239.
Weber, K. E. (2013) An analysis of faculty development levels of use outcomes at one higher
education institution. Ph.D Thesis, University of Dayton, USA.