Stage-Based Measures of Implementation Components
October 2010
National Implementation Research Network
Dean Fixsen, Karen Blase, Sandra Naoom, & Melissa Van Dyke
Frank Porter Graham Child Development Institute
University of North Carolina at Chapel Hill
© 2010 Dean Fixsen and Karen Blase, National Implementation Research Network
With the identification of theoretical frameworks resulting from a synthesis of the implementation evaluation literature, there has been
a need for measures of the implementation components to assess implementation progress and to test the hypothesized relationships
among the components. Reliable and valid measures of implementation components are essential to planning effective
implementation supports, assessing progress toward implementation capacity, and conducting rigorous research on implementation.
Policy, practice, and science related to implementation can be advanced more rapidly with practical ways to assess implementation.
Since the beginnings of the field, the difficulties inherent in implementation have "discouraged detailed study of the process of
implementation. The problems of implementation are overwhelmingly complex and scholars have frequently been deterred by
methodological considerations. ... a comprehensive analysis of implementation requires that attention be given to multiple actions over
an extended period of time" (Van Meter & Van Horn, 1975, p. 450 - 451; see a similar discussion nearly three decades later by
Greenhalgh, Robert, MacFarlane, Bate, & Kyriakidou, 2004). Adding to this complexity is the need to simultaneously and practically
measure a variety of variables over time, especially when the implementation variables under consideration are not well researched.
Recent reviews of the field (Ellis, Robinson, Ciliska, Armour, Raina, Brouwers, et al., 2003; Greenhalgh et al., 2004) have concluded
that the wide variation in methodology, measures, and use of terminology across studies limits interpretation and prevents meta-analyses with regard to dissemination-diffusion and implementation studies.
Recent attempts to analyze components of implementation have used 1) very general measures (e.g. Landenberger & Lipsey, 2005;
Mihalic & Irwin, 2003) that do not specifically address core implementation components, 2) measures specific to a given innovation
(e.g. Olds, Hill, O'Brien, Racine, & Moritz, 2003; Schoenwald, Sheidow, & Letourneau, 2004) that may lack generality across
programs, or 3) measures that only indirectly assess the influences of some of the core implementation components (e.g. Klein, Conn,
Smith, Speer, & Sorra, 2001; Panzano, et al., 2004).
The following assessments are specific to “best practices” extracted from: 1) the literature, 2) interactions with purveyors who are
successfully implementing evidence-based programs on a national scale, 3) in-depth interviews with 64 evidence-based program
developers, 4) meta-analyses of the literature on leadership, and 5) analyses of leadership in education (Blase, Fixsen, Naoom, &
Wallace, 2005; Blase, Naoom, Wallace, & Fixsen, in preparation; Fixsen, Naoom, Blase, Friedman, & Wallace, 2005; Heifetz &
Laurie, 1997; Kaiser, Hogan, & Craig, 2008; Rhim, Kowal, Hassel, & Hassel, 2007).
For more information on the frameworks for Implementation Drivers and Implementation Stages derived by the National
Implementation Research Network, go to http://nirn.fpg.unc.edu. The synthesis of the implementation evaluation literature can be
downloaded from the NIRN website.
You have our permission to use these measures in any non-commercial way to advance the science and practice of implementation,
organization change, and system transformation. Please let us know how you are using the measures and let us know what you find so
we can all learn together. As you use these measures, we encourage you to do cognitive interviewing of key informants to help revise the wording of the items so that each item taps the desired aspect of each implementation component.
We ask that you let us know how you use these items so we can use your experience and data to improve and expand the survey.
Please respond to Dean Fixsen (contact information below). Thank you.
Dean L. Fixsen, Ph.D.
Senior Scientist
FPG Child Development Institute
CB 8040
University of North Carolina at Chapel Hill
Chapel Hill, NC 27599-8040
Cell # 727-409-1931
Reception 919-962-2001
Fax 919-966-7463
References
Blase, K. A., Fixsen, D. L., Naoom, S. F., & Wallace, F. (2005). Operationalizing implementation: Strategies and methods. Tampa, FL:
University of South Florida, Louis de la Parte Florida Mental Health Institute.
http://nirn.fmhi.usf.edu/resources/detail.cfm?resourceID=48
Ellis, P., Robinson, P., Ciliska, D., Armour, T., Raina, P., Brouwers, M., et al. (2003). Diffusion and Dissemination of Evidence-Based Cancer Control Interventions (Evidence Report/Technology Assessment No. 79; prepared by Oregon Health and Science University under Contract No. 290-97-0017; AHRQ Publication No. 03-E033). Rockville, MD: Agency for Healthcare Research and Quality.
Fixsen, D. L., Naoom, S. F., Blase, K. A., Friedman, R. M., & Wallace, F. (2005). Implementation Research: A synthesis of the literature. Tampa,
FL: University of South Florida, Louis de la Parte Florida Mental Health Institute, The National Implementation Research Network
(FMHI Publication #231). http://nirn.fmhi.usf.edu/resources/detail.cfm?resourceID=31
Greenhalgh, T., Robert, G., MacFarlane, F., Bate, P., & Kyriakidou, O. (2004). Diffusion of innovations in service organizations: Systematic
review and recommendations. The Milbank Quarterly, 82(4), 581-629.
Heifetz, R. A., & Laurie, D. L. (1997). The work of leadership. Harvard Business Review, 75(1), 124-134.
Kaiser, R. B., Hogan, R., & Craig, S. B. (2008). Leadership and the fate of organizations. American Psychologist, 63(2), 96-110.
Klein, K. J., & Sorra, J. S. (1996). The challenge of innovation implementation. Academy of Management Review, 21(4), 1055-1080.
Klein, K. J., Conn, B., Smith, A., Speer, D. B., & Sorra, J. (2001). Implementing computerized technology: An organizational analysis. Journal of
Applied Psychology, 86(5), 811-824.
Landenberger, N. A., & Lipsey, M. W. (2005). The Positive Effects of Cognitive-Behavioral Programs for Offenders: A Meta-Analysis of Factors
Associated with Effective Treatment. Journal of Experimental Criminology, 1(4), 451-476.
Mihalic, S., & Irwin, K. (2003). Blueprints for Violence Prevention: From Research to Real-World Settings-Factors Influencing the Successful
Replication of Model Programs. Youth Violence and Juvenile Justice, 1(4), 307-329.
Olds, D. L., Hill, P. L., O'Brien, R., Racine, D., & Moritz, P. (2003). Taking preventive intervention to scale: The nurse-family partnership.
Cognitive and Behavioral Practice, 10, 278-290.
Panzano, P. C., & Roth, D. (2006). The decision to adopt evidence-based and other innovative mental health practices: Risky business?
Psychiatric Services, 57(8), 1153-1161.
Panzano, P. C., Seffrin, B., Chaney-Jones, S., Roth, D., Crane-Ross, D., Massatti, R., et al. (2004). The innovation diffusion and adoption research
project (IDARP). In D. Roth & W. Lutz (Eds.), New research in mental health (Vol. 16). Columbus, OH: The Ohio Department of Mental
Health Office of Program Evaluation and Research.
Rhim, L. M., Kowal, J. M., Hassel, B. C., & Hassel, E. A. (2007). School turnarounds: A review of the cross-sector evidence on dramatic
organizational improvement. Lincoln, IL: Public Impact, Academic Development Institute.
Schoenwald, S. K., Sheidow, A. J., & Letourneau, E. J. (2004). Toward Effective Quality Assurance in Evidence-Based Practice: Links Between
Expert Consultation, Therapist Fidelity, and Child Outcomes. Journal of Clinical Child and Adolescent Psychology, 33(1), 94-104.
Van Meter, D. S., & Van Horn, C. E. (1975). The policy implementation process: A conceptual framework. Administration & Society, 6, 445-488.
Tracking Implementation Progress
Many organizations and funders have come to understand that Full Implementation can be reached in 2 – 4 years with support from a competent Implementation Team (skillful implementation efforts focused on multiple evidence-based programs or other innovations) or Purveyor (skillful implementation efforts focused on one evidence-based program). These organizations and funders
also realize that implementation is an active process with simultaneous work going on at many levels to help assure full and effective
uses of the evidence-based program. The following “Implementation Quotient” was developed to track implementation progress over
many years, and to track the return on investing in implementation capacity.
[NOTE: The use of the phrase “met fidelity criteria” in the following assumes that a good measure of performance/ fidelity assessment
has been developed for the evidence-based program. That is, performance/ fidelity assessment scores are highly correlated with
intended longer-term client/ consumer outcomes.]
Implementation Quotient
1. Determine the number of practitioner positions allocated to the innovation in the organization (Allocated Position N = ___).
2. Assign a score to each allocated practitioner position:
a. Practitioner position vacant = 0
b. Practitioner in position, untrained = 1
c. Practitioner completed initial training = 2
d. Practitioner trained + receives weekly coaching = 3
e. Practitioner met fidelity criteria this month = 4
f. Practitioner met fidelity criteria 10 of past 12 months = 5
3. Sum the scores for all practitioner positions (Practitioner Position Sum = ___).
4. Divide the Practitioner Position Sum by the Allocated Position N.
5. The resulting ratio is the “Implementation Quotient” for that innovation in that organization.
6. Plot this ratio and post it to track implementation progress (see the sketch below).
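To make the arithmetic concrete, here is a minimal sketch in Python of the calculation in steps 1 through 6; the list of position scores is made up for illustration and is not part of the NIRN materials.

# Minimal sketch of the Implementation Quotient calculation (illustrative scores only).
# Each entry is the score (0-5, per the scale above) for one allocated practitioner position.
position_scores = [5, 4, 4, 3, 2, 1, 0, 5]

allocated_position_n = len(position_scores)         # Step 1: Allocated Position N
practitioner_position_sum = sum(position_scores)    # Step 3: Practitioner Position Sum
implementation_quotient = practitioner_position_sum / allocated_position_n  # Steps 4-5

# Step 6: plot/post this value at each measurement point to track progress over time.
print(round(implementation_quotient, 2))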
For one larger provider organization, the figure below shows the Implementation Quotient data for 10 years (twenty 6-month blocks of time = 120 months). The graph starts with the commencement of the Initial Implementation Stage. In this organization, Full Implementation (50% of the practitioners meeting fidelity criteria) was reached for the first time after 4.5 years (block 9).
[Figure: Implementation Progress (N = 41 Positions). The Implementation Quotient (y-axis, 0.0 to 5.0) is plotted across twenty six-month blocks (x-axis); a reference line marks Full Implementation, which was first reached at 4.5 years.]
Stage-based Assessments of Implementation
As the original (2008) set of implementation assessment items was used, it became apparent that respondents new to innovations were
unable to answer the questions with confidence. These experiences led to the following stage-based assessments of implementation.
Stages of Implementation
The research and practice reviews conducted by the National Implementation Research Network have identified four Stages of
Implementation:
1. Exploration Stage
2. Installation Stage
3. Initial Implementation Stage
4. Full Implementation Stage
In principle, the Stages are fairly straightforward. In practice, they are not linear, and that complexity adds to the measurement challenges: the Stages are fluid, with provider organizations and human service systems going into and out of a Stage over
and over. Even skilled Implementation Teams and Purveyor groups find that it takes about 3 or 4 years to get an organization to the
point of Full Implementation (as defined here). Even with a lot of help, not all agencies that attempt implementation of an evidence-based program or other innovation ever meet the criteria for Full Implementation.
Note that the proposed criteria for Stages apply to a single, specified evidence-based program or innovation. A single organization
might be in the Full Implementation Stage with one innovation and in the Exploration Stage with another. Stages and assessments of
implementation are specific to each innovation and should not be viewed as characteristics of organizations.
It may require interviews with several key informants to find the information needed to determine the Stage of Implementation for the
identified evidence-based program or other innovation within an organization or human service system.
Exploration:
An organization/ group is 1) ACTIVELY CONSIDERING the use of an EBP or other innovation but has not yet decided to actually
begin using one. The group may be assessing needs, getting buy in, finding champions, contacting potential purveyors, or any number
of things -- but 2) THEY HAVE NOT DECIDED to proceed. Once the decision is reached to use a particular innovation, the
Exploration Stage ends (of course, in reality, it is not always such a neat and tidy conclusion; the Exploration Stage typically is revisited repeatedly in the first year or so).
Installation:
An organization/ group 1) HAS DECIDED to use a particular innovation and 2) IS ACTIVELY WORKING to get things set up to use
it. The group may be writing new recruiting ads and job descriptions, setting up new pay scales, re-organizing a unit to do the new
work, contracting with a purveyor, working on referral sources, working on funding sources, purchasing equipment, finding space,
hiring trainers and coaches, or any of a number of things -- but 3) THE FIRST PRACTITIONER HAS NOT begun working with the
first client/consumer using the new EBP/ innovation.
It seems that many agencies get into this Stage and find they do not have the resources/ desire to continue. These agencies might go
back to the Exploration Stage (provided the 2 criteria for that Stage are being met) or may abandon the process altogether.
Initial Implementation:
The timer on this Stage BEGINS the day 1) the first NEWLY TRAINED PRACTITIONER 2) attempts to USE THE NEW EBP/
INNOVATION 3) WITH A REAL CLIENT/ CONSUMER. There may be only one practitioner and only one client/ consumer, but
that is enough to say the Initial Implementation Stage has begun and is in process. If at a later time an organization has no trained
practitioner working with a client/consumer, the organization might be said to be back in the Installation Stage (provided the 3 criteria
for that Stage are being met at that time).
It seems that many agencies get into the Initial Implementation Stage and limp along for a while without establishing/ building in/
improving their capacity to do the implementation work associated with the evidence-based program (e.g. full use of the
Implementation Drivers including practitioner selection, training, coaching, and performance/ fidelity assessments; facilitative
administration, decision support data systems, and systems interventions; leadership). Consequently, most of these efforts are not very
successful in producing consumer outcomes and rarely are sustained (they come and go as champions and interested staff come and
go).
Full Implementation:
Determine the number of POSITIONS the organization has allocated as practitioners for the innovation. Determine the number of
practitioners that CURRENTLY meet all of the performance/ fidelity assessment criteria. FULL IMPLEMENTATION OCCURS
WHEN AT LEAST 50% OF THE ALLOCATED POSITIONS ARE FILLED WITH PRACTITIONERS WHO CURRENTLY
MEET THE FIDELITY CRITERIA.
If there is no fidelity assessment in place, Full Implementation cannot be reached. If there are 10 allocated positions and 6 vacancies,
Full Implementation cannot be reached. Note that Full Implementation lasts only as long as the 50% criterion is met -- this may be
only for one day initially since meeting fidelity once is no guarantee that it will ever occur again for a given practitioner. In addition,
practitioners come and go. When a high fidelity practitioner is replaced with a newly hired/ trained practitioner it takes a while and a
lot of coaching to generate high fidelity performance with the new practitioner. When there is a very demanding innovation (e.g.
residential treatment; intensive home based treatment) and fairly high practitioner turnover (e.g. average practitioner tenure less than 3
years), an organization may take a long time to reach Full Implementation.
Exploration and Installation Stage Assessments
To use the stage-based assessments of implementation, the evaluator must first determine the stage of implementation for the
innovation in an organization. There are no fixed rules to follow, so evaluators must use their good judgment.
We have divided the implementation assessments into two groups: one group generally is more appropriate for an organization
considering the use of or attempting for the first time to use an evidence-based program or other innovation (Exploration and
Installation Stage Assessments).
Early on, implementation capacity does not exist. Thus, asking questions and encouraging planning about some key components is within the means of respondents and is less daunting for them (gosh, look at all this stuff we have to do!).
Initial and Full Implementation Stage Assessments
The second group is more appropriate for an organization that has initiated the use of an evidence-based program or other innovation
and is attempting to improve the quality and expand the use of the innovation in the organization (Initial and Full Implementation
Stage Assessments).
Later on, practitioners, supervisors, and managers are more familiar with implementation methods and can rate not only the presence
but the strength of the components. Thus, Likert scales that include more of the implementation best practices are included in those
assessments.
Of course, if an organization already has implemented other evidence-based programs successfully and now is doing Exploration and
Installation Stage-related work with another, the Initial and Full Implementation Stage Assessments might be appropriate. Use good
judgment. There are no fixed rules to guide evaluators.
Exploration and Installation Stage Assessments of Implementation
As organization or community groups begin considering evidence-based programs or other innovations, they also need to begin
considering implementation supports for those innovations. Implementation supports often are ignored early on, just because their
importance is not common knowledge. Yet, full, effective, and sustained uses of innovations on a socially significant scale depend
upon taking the first right steps for implementation and for the innovation.
The following templates provide prompts for thinking about and planning for implementation supports for a given evidence-based program or other innovation.
Competency Implementation Drivers Analysis and Discussion Template
EBP/EBI: ________________________________________
Selected Staff Cohort (e.g. Practitioners, Staff, Coaches, Leadership Teams, Administrators, Implementation Team):
For each Competency Implementation Driver (Staff Selection; Staff Training; Staff Coaching; Staff Performance Evaluation (Fidelity)), discuss and record the following:
1. How important is this Driver in promoting fidelity and/or positive outcomes? (A = High, B = Medium, C = Low)
2. Are resources and materials available from others for this Driver for this staff cohort? (yes/no/DK)
3. Who is/will be responsible for ensuring functional use of the Driver (e.g. timely use, quality, sustainability, integration)?
4. Is there a measure of Driver effectiveness? (yes/no/DK)
5. Given the current state of development of this Driver, how much work would be required to significantly improve it?* (A = High, B = Medium, C = Low)
6. In looking across the Competency Drivers, how well integrated is this Driver? (The greater the number of responsible entities, the greater the integration challenge and the greater the threat to compensatory benefits.) (A = Well integrated, B = Moderate, C = Poor)
Organizational Implementation Drivers Analysis and Discussion Template
EBP/EBI: ________________________________________
Selected Administrative/Team Cohort(s) (e.g. Organization, State, Regional Entity, Implementation Team):
For each Organizational Implementation Driver (Facilitative Administration; Systems Interventions; Decision Support Data Systems; Leadership), discuss and record the following:
1. How important is this Driver in promoting fidelity and/or positive outcomes? (A = High, B = Medium, C = Low)
2. Are resources and materials available from others for this Driver? (yes/no/DK)
3. Who is/will be responsible for ensuring functional use of the Driver (e.g. timely use, quality, sustainability, integration)?
4. Is there a measure of Driver effectiveness? How might we know if the Driver is serving its functions? (yes/no/DK)
5. Given the current state of development of this Driver, how much work would be required to significantly improve it?* (A = High, B = Medium, C = Low) *See Best Practices
6. In looking across the Organizational Drivers, how well informed are these Drivers by the Competency Drivers (integrated so that they can be compensatory as needed)? (A = Well integrated, B = Moderate, C = Poor)
Priority Rating System for Driver Improvement and Action Planning
Priority Rating System: A = High Priority; B = Medium Priority; C = Low Priority

                 Importance
WORK          High    Medium    Low
Low            A        A        B
Medium         A        B        C
High           B        B        C

You can use this table for each Competency Driver to come up with a Priority Rating for Action Planning for each Driver. For example, if the Training Driver is rated “Highly Important” for achieving fidelity and outcomes, and the improvement work to be done is “Low burden”, then the table would guide you to consider this an Overall High Priority for action planning (“A”). On the other hand, if a Driver is viewed as being of “Low Importance” and the work to change it is “High”, you would land on a rating of “C”, leading you to delay or not engage in action planning to improve the Driver. As always the Team makes the decision – not the “letter”; common sense prevails. This table is only an aid to thinking and discussion.
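As a minimal sketch, the priority lookup described above can also be written as a small table in code. The cell values follow the table as reconstructed here and are not an official scoring rule; a Team should adjust them to its own judgment.

# Minimal sketch of the priority rating lookup (values follow the table above).
# (importance, work) -> overall priority for action planning
PRIORITY = {
    ("High", "Low"): "A",    ("Medium", "Low"): "A",    ("Low", "Low"): "B",
    ("High", "Medium"): "A", ("Medium", "Medium"): "B", ("Low", "Medium"): "C",
    ("High", "High"): "B",   ("Medium", "High"): "B",   ("Low", "High"): "C",
}

# Example: the Training Driver is rated highly important and the work is low burden.
print(PRIORITY[("High", "Low")])   # "A" = High Priority for action planning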
Implementation Drivers: An Initial Conversation
Based on the Work of Vestena Robbins, Ph.D., and Kari Collins, Ph.D.
Kentucky Department for Behavioral Health, Developmental and Intellectual Disabilities
Division of Behavioral Health
100 Fair Oaks Lane, 4E-D
Frankfort, KY 40621
The following document provides an organized set of questions for action planning. Each set of questions regarding a particular
Implementation Driver is designed to help organizations and community groups actively prepare for a quick and successful start to
their use of an evidence-based program or other innovation. With support from the National Implementation Research Network,
portions of this tool were pioneered by Jim Wotring and Kay Hodges when preparing organizations to bid on using an evidence-based
program in the Michigan Department of Children’s Mental Health. Robbins and Collins have refined this tool over the past several years
and it now is a standard part of the state RFP process for assuring human service organizations are fully informed about an innovation
and prepared for its implementation prior to submitting a proposal to be funded to do the work envisioned by the state.
We thank Tena Robbins and Kari Collins for their work to develop this tool and for allowing its inclusion in this document.
Implementation Drivers: An Initial Conversation
When Planning or Reviewing a Practice, Program or Initiative
This document is meant to be used to help frame an initial conversation when planning to adopt a new practice, or develop a new
program or initiative. The implementation driver framework allows the opportunity to choose the direction of your system change
efforts in a planful and purposeful manner and in doing so be good stewards of limited resources. This document is not intended for
use as a performance indicator but rather as a planning guide. The National Implementation Research Network (NIRN) has
developed a tool that will facilitate an in-depth performance analysis of implementation and sustainability throughout the life of a new
practice/program (http://nirn.fmhi.usf.edu/). These implementation drivers are integrated and compensatory. Thus, a discussion of
the components could start with any one of them.
IMPORTANT:
1. Identify what practice, program or initiative you want to implement and sustain.
2. Identify which level(s) of change this initial conversation will need to include
during this stage of implementation and sustainability of your identified practice,
program or initiative. This may include:
a) Statewide system(s) infrastructure change
b) Regional or local system(s) infrastructure change
c) Statewide trainer and coaches/mentors/consultants for a behavior change
d) Regional or local trainers and coaches/mentors/consultants for a behavior
change
e) Statewide provider or participant behavior change
f) Regional or local provider or participant behavior change
3. Within the level(s) that you chose (above):
- What roles/titles/representatives do you anticipate will play a key role at this time in the implementation and sustainability of this practice, program or initiative? These will be known as your Primary Focus Participants.
- What roles/titles/representatives do you anticipate will play a supportive role or be involved later during the implementation and sustainability of this practice, program or initiative? These will be known as your Secondary Focus Participants.
(Note: You may add to or change your list of primary and secondary focus participants as the driver process continues.)
********************************************************************************
FOCUS:
Our practice, program or initiative:

Our identified level(s) of change:

Our Primary Focus participant(s):

Our Secondary Focus participant(s):

Driver - Participant Recruitment and Selection:
 Who is qualified to carry out this [insert name of practice, program or initiative]?
 Beyond academic qualifications or experience factors, what personal characteristics must be a part of the participant selection
process (e.g., knowledge of the field, common sense, social justice, ethics, willingness to learn, willingness to intervene, good
judgment, etc.) to carry out this [insert name of practice, program or initiative]?
 What methods can you use to recruit and select participants in the [insert name of practice, program or initiative]? How will the
personal characteristics be assessed during the selection process (e.g. vignettes, role plays)?
 Are there “workforce development” issues that need to be taken into account? (e.g., availability of participants; ability to recruit
new participants; ability to train existing participants; ability to “grow your own” staff; etc.)
 Are there extra demands on the participants beyond the scope of [insert name of practice, program or initiative] that need to be
taken into account? (e.g., transportation issues to and from work; family/personal stressors; safety concerns; etc.)
Driver – Preparation and Training:
 Preparation - How can you assure that you prepare the participants with the background information, theory, philosophy and
values of this [insert name of practice, program or initiative]? Who else can you provide this background information to? (E.g.
supervisors, administrators, other external or internal supports, etc.?)
 Training - How can you provide the participants with a formal introduction to the key components and rationales of this [insert
name of practice, program or initiative]? Who else can you provide this training to? (E.g. supervisors, administrators, other
external or internal supports, etc.?)
 Training - How can you provide opportunities for participants to practice new skills and receive ongoing feedback in a safe
environment?
Driver - Consultation, Coaching or Mentoring:
 How can you use coaching, consultation or mentoring to support and monitor behavior change? How can you maintain this
coaching, consultation or mentoring throughout the life of this [insert name of practice, program or initiative]?
o Of the participants?
o Of the supervisors?
o Of the administrators needed to support implementation and sustainability?
Driver - Participant Evaluation: Data to Support Performance Management
 Participant’s Changes - How can you evaluate your participant’s successful use of:
o The qualities and characteristics that you identified as important in your selection criteria?
o The skills taught in training?
o The skills that will be reinforced and expanded through consultation, coaching or mentoring?
 Effectiveness of Supports for the Participants- How can you evaluate the effectiveness of the coaches, consultants or mentors?
 Participant’s Changes in Context of the Program, Practice or Initiative - Are there existing fidelity tools that can be used to
evaluate the above skills and/or effectiveness?
Driver – Program Evaluation - Data to Support Decision Making:
 Who can use the program evaluation data to support implementation and sustainability decision making? (E.g. coaches,
supervisors, administrators, external community groups, etc?)
 What data can be used by these identified persons in order to determine the progress of the [insert name of practice, program or initiative] implementation efforts (note: be sure to consider including the information from the participant evaluation driver)?
 How can this data be used to determine the usefulness of the training and coaching?
 Is there an overall assessment of the performance of the agency/organization/systems that will help assure continuing implementation and outcomes of the core components of this [insert name of practice, program or initiative] over time?
Driver – Internal Administrative Supports (That Facilitate Implementation):
 Who can provide the strong leadership for this [insert name of practice, program or initiative] within your
agency/organization/system? (Consider: current and future supports.)
 Who can provide strong leadership as a “connector” to external agencies/organizations/systems to promote the use of this
[insert name of practice, program or initiative]?
 How can the administrative leadership use data (see Program Evaluation driver) to inform their decisions and support the
infrastructure necessary for the ongoing implementation and sustainability of this [insert name of practice, program or initiative]?
 How can the leadership work to integrate and keep improving these implementation drivers throughout the life of this [insert
name of practice, program or initiative]?
Driver – External Systems Interventions:
 Who can provide strong external agency/organization/system leadership for this [insert name of practice, program or initiative]? (Consider: current and future supports.)
 In order to support the implementation and sustainability efforts, what strategies are in place or will need to be created for the
[insert name of practice, program or initiative] to work with external systems:
o To obtain financial support?
o To obtain other necessary agency/organizational/system support?
o To obtain human resource support?
Implementation Drivers Best Practices Analysis and Discussion Tool
Adapted from
©National Implementation Research Network
Karen A. Blase, Melissa K. Van Dyke, Michelle Duda, Dean L. Fixsen,
September 2010
The Implementation Drivers are processes that can be leveraged to improve
competence and to create a more hospitable organizational and systems environment for
an evidence-based program or practice (Fixsen, Naoom, Blase, Friedman, & Wallace,
2005). Since sound and effective implementation requires change at the classroom, building, and District levels, as well as at the State and Federal levels, these processes must be used purposefully to create change in the knowledge, behavior, and attitudes of all the
educational professionals and partners involved.
A pre-requisite for effective use of the Implementation Drivers is a well
operationalized, evidence-based intervention, program or practice. The more clearly the
core intervention components are defined and validated through research (e.g. fidelity
correlated with outcomes; dosage and outcome data), the more clearly the
Implementation Drivers can be focused on bringing these core intervention components
“to life” and sustaining and improving them in the context of classrooms, schools, and
Districts.
The Implementation Drivers are reviewed here in terms of accountability and
‘best practices’ to improve and achieve competence and confidence of the persons who
will be involved in implementing the new way of work (e.g. teachers, building and
District administrators, supervisors, coaches, etc.) and the schools, Districts, and SEAs
that host and support Evidence-based Programs, Practices and Frameworks.
Regional Implementation Teams, with members who know the intervention well,
can use this tool as a way to discuss the roles and responsibilities at the building, District, State, and TA levels they are guiding. Engaging TA providers and program
developers in this process with those who are charged with successful implementation at
the school and District level can yield a useful and enlightening discussion that will not
only impact program quality but also programmatic sustainability.
The Team using the Checklist also will want to discuss the importance and
perceived cost-benefit of fully utilizing the best practices related to each Driver as well as
the degree to which the Team has ‘control’ over each Driver and the associated ‘best
practices’. When the best practices cannot be adhered to, then the Team needs to be
confident that weaknesses in one Driver are being compensated for by robust application
of other Drivers. For example, if skill-based training is needed but is not offered with
qualified behavior rehearsal leaders who know the intervention well, then coaches will
have increased responsibility to develop the basic skills of the persons they are coaching.
Overall, these Drivers are viewed through an Implementation Lens – after all,
most organizations would say that they already recruit and select staff, provide
orientation and some training, supervise their staff, etc. But what do these Drivers look
like when they are focused on Effective Implementation Practices designed to create practice, organizational, and systems change at the classroom, building, District, and State levels?
COMPETENCY DRIVER - Recruitment and Selection of Staff:
Who is responsible for this driver?
-
Does a TA Center or Purveyor offer guidance/material/assistance related to this
Driver?
-
Who is responsible for developing job descriptions?
-
Who is responsible for developing interview protocols?
-
Who is responsible for conducting interviews?
To what extent are best practices being used?
In Place | Partially In Place | Not In Place | Notes:
Accountability for developing recruitment and
selection processes and criteria is clear (e.g. lead
person designated and supported)
Job description clarity re: accountability and
expectations
Pre-Requisites are related to “new practices” and
expectations (e.g. basic group management skills)
Interactive Interview Process:
 Behavioral vignettes and Behavior Rehearsals
 Assessment of ability to accept feedback
 Assessment of ability to change own behavior
Interviewers who understand the skills and abilities
needed and can assess applicants accurately.
Feed forward of interview data to training staff &
administrators & coaches (integration)
Feedback from exit interviews, training data,
turnover data, opinions of administrators & coaches,
and staff evaluation data to evaluate effectiveness of
this Driver
Best Practice Scores - Percent of Recruitment and
Selection Items in each column
What benefits might be gained by strengthening this driver (at what cost)?
o
o
o
What are the “next right steps” for further developing this driver?
COMPETENCY DRIVER - Training:
Who is responsible for this driver?
-
Does a TA Center or Purveyor offer guidance/material/assistance related to this
Driver?
-
Who ensures that staff receives the required training?
-
Who delivers the training?
-
Who monitors the quality of training?
To what extent are best practices being used?
In Place | Partially In Place | Not In Place | Notes:
Accountability for delivery and quality monitoring of
training is clear (e.g. lead person designated and
supported)
Timely (criteria: Training occurs before the person
attempts to or is required to use the new program or
practice)
Theory grounded (adult learning principles used)
Skill-based
 Behavior Rehearsals vs. Role Plays
 Qualified Rehearsal Leaders who are Content Experts
 Practice to Criteria
Feed Forward of pre/post data to Coaches/Supervisors
Feedback of pre/post data to Selection and Recruitment
Outcome data collected and analyzed (pre and post
testing) of knowledge and/or skills
Trainers have been trained and coached
Fidelity measures collected and analyzed related to
training (e.g. schedule, content, processes,
qualification of trainers)
Best Practice Scores - Percent of Training Items in
each column
What benefits might be gained by strengthening this driver (at what cost)?
What are the “next right steps” for further developing this driver?
COMPETENCY DRIVER - Supervision and Coaching:
Who is responsible for this driver?
-
Does a TA Center or Purveyor offer guidance/material/assistance related to this
Driver?
-
Who hires coaches?
-
Who trains coaches?
-
Who monitors the quality of the coaching?
-
Who provides support for coaches?
To what extent are best practices being used?
In Place | Partially In Place | Not In Place | Notes:
Accountability for development and monitoring of
quality and timeliness of coaching services is clear
(e.g. lead person designated and supported)
Written Coaching Service Delivery Plan
Uses multiple sources of information for feedback
Direct observation of implementation (in person,
audio, video)
Coaching data reviewed and informs improvements
of other Drivers
Accountability structure and processes for Coaches
 Adherence to Coaching Service Delivery Plan is regularly reviewed
 Multiple sources of information used for feedback to coaches
o Satisfaction surveys from those being coached
o Observations of expert/master coach
o Fidelity measures of those being coached as key coaching outcome
Best Practice Scores - Percent of Supervision/Coaching Items in each column
What benefits might be gained by strengthening this driver (at what cost)?
What are the “next right steps” for further developing this driver?
COMPETENCY DRIVER - Performance Assessment - Fidelity:
Who is responsible for this driver?
-
Does a TA Center or Purveyor offer guidance/material/assistance related to this
Driver?
-
How was the measure of fidelity developed? Is it research-based?
-
Who is responsible for assessing fidelity? Are the processes practical?
-
Who reviews the fidelity data?
To what extent are best practices being used?
In Place | Partially In Place | Not In Place | Notes:
Accountability for fidelity measurement and reporting system is clear (e.g. lead person designated and supported)
Transparent Processes – Proactive staff orientation to the process and procedures
Fidelity measures are correlated with outcomes; are available on a regular basis and used for decision-making
Fidelity measurement and reporting system is practical and efficient
Use of Appropriate Data Sources (e.g. competency requires observation)
Positive recognition processes in place for participation
Fidelity data over time informs modifications to implementation drivers (e.g. how can Selection, Training, and Coaching better support high fidelity)
Best Practice Scores - Average Percent of Performance Assessment/Fidelity Items in each column
What benefits might be gained by strengthening this driver (at what cost)?
What are the “next right steps” for this driver?
ORGANIZATION DRIVER - Decision Support Data Systems:
Who is responsible for this driver?
-
Does a TA Center or Purveyor offer guidance/material/assistance related to this
Driver?
-
Who determines the outcome measures to be collected?
-
Who is responsible for developing the system to collect the data?
-
Who is responsible for collecting the data?
Who reviews the outcome data?
To what extent are best practices being used?
In Place | Partially In Place | Not In Place | Notes:
Accountability for measurement and reporting system is clear (e.g. lead person designated and supported)
Includes intermediate and longer-term outcome measures
Includes process measures (fidelity)
Measures are “socially important” (e.g. academic achievement, school safety)
Data are:
 Reliable (standardized protocols, trained data gatherers)
 Reported frequently (e.g. weekly, quarterly)
 Built into practice routines
 Collected at and available to actionable units (e.g. grade level, classroom, student “unit”)
 Widely shared with building and District personnel
 Shared with family members and community
 Used to make decisions (e.g. curricula, training needed, coaching improvements)
Best Practice Scores - Average Percent of Decision Support Data System Items in each column
What benefits might be gained by strengthening this driver (at what cost)?
What are the “next right steps” for this driver?
ORGANIZATION DRIVER - Facilitative Administrative Supports:
Who is responsible for this driver?
-
Does a TA Center or Purveyor offer guidance/material/assistance related to this
Driver?
-
Who oversees the integration of the Drivers?
-
Who ensures that practice-level perspectives about “what’s working well” and
“what’s getting in the way” are communicated to building, District, or State
leadership?
-
Who is involved in addressing organizational barriers that impede the full and
effective use of this program?
To what extent are best practices being used?
In Place | Partially In Place | Not In Place | Notes:
A Building/District Leadership and Implementation
Team is formed
The Building/District Leadership and
Implementation Team has Terms of Reference that
include communication protocols to provide
feedback to the next level “up” and describe from whom feedback is received (PEP-PIP protocol)
The Team uses feedback and data to improve
Implementation Drivers
Policies and procedures are developed and revised
to support the new ways of work
Solicits and analyzes feedback from staff
Solicits and analyzes feedback from “stakeholders”
Reduces internal administrative barriers to quality
service and high fidelity implementation
Best Practice Scores - Average Percent of
Facilitative Administration Items in each column
What benefits might be gained by strengthening this driver (at what cost)?
What are the “next right steps” for this driver?
ORGANIZATION DRIVER - Systems Intervention
Who is responsible for this driver?
-
Does a TA Center or Purveyor offer guidance/material/assistance related to this
Driver?
-
Who is responsible for building the necessary relationships in the building, District,
and in the community to implement the program effectively?
-
Who is involved in addressing systems barriers that impede the full and effective use
of this program?
To what extent are best practices being used?
In Place | Partially In Place | Not In Place | Notes:
A Building Leadership and Implementation Team is formed and supported by the District
Leadership matches level needed to intervene
Engages and nurtures multiple “champions” and
“opinion leaders”
Objectively documents barriers and reports barriers
to next level “up”
Makes constructive recommendations to next level
“up” to resolve barriers
Develops formal processes to establish and use PEP
– PIP cycles (e.g. linking communication protocols
to give and receive feedback from the next level
“down” and “up”)
Creates time-limited, barrier busting capacity by:
 Using Transformation Zones
 Doing usability testing (short PDSA cycles with small groups)
Creates optimism and hope by communicating
successes
Average Percent of Systems Intervention Items in
each column
What benefits might be gained by strengthening this driver (at what cost)?
o
o
What are the “next right steps” for this driver?
Quality Implementation Score Summary:
Average Percent of Items Across Seven
Implementation Drivers for each column
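As a minimal sketch of how the Best Practice Scores roll up into this summary (the ratings below are invented for illustration; the rule simply mirrors the column percentages named above):

# Minimal sketch of Best Practice Score percentages per Driver and the
# average across Drivers (illustrative ratings only).
from collections import Counter

driver_ratings = {
    "Recruitment and Selection": ["in", "partial", "in", "not", "in"],
    "Training": ["in", "in", "partial", "partial", "not"],
    # ... add the ratings for the remaining five Drivers here
}

COLUMNS = ("in", "partial", "not")   # In Place, Partially In Place, Not In Place

def column_percents(ratings):
    counts = Counter(ratings)
    return {col: 100.0 * counts[col] / len(ratings) for col in COLUMNS}

per_driver = {name: column_percents(r) for name, r in driver_ratings.items()}
summary = {col: sum(p[col] for p in per_driver.values()) / len(per_driver) for col in COLUMNS}

print(per_driver)   # Best Practice Scores for each Driver
print(summary)      # Average percent of items across Drivers for each column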
Summary of “next right steps” by Driver:
Recruitment and Selection:
Pre-Service and In-Service Training:
Supervision and Coaching:
Performance Assessment - Fidelity:
Decision Support Data Systems:
Facilitative Administrative Supports:
Systems Intervention at the Organizational Level:
Given the findings from the analysis of the Implementation Drivers, the following action
steps have been prioritized by this team:
Next Right Step | Responsible Person | Completion Date
Learn more about the science and practice of Implementation at: www.scalingup.org by reading the
Scaling Up Briefs and more about implementation science at http://nirn.fpg.unc.edu/
Access the monograph by Fixsen, Naoom, Blase, Friedman, & Wallace (2005). Implementation Research:
A Synthesis of the Literature at: http://www.fpg.unc.edu/~nirn/resources/publications/Monograph/
Initial Implementation and Full Implementation Stage Assessments
National Implementation Research Network
Frank Porter Graham Child Development Institute
University of North Carolina at Chapel Hill
After an organization or human service system has begun its attempt to use an
evidence-based program or other innovation, Implementation Drivers can be assessed in
practice. At this point, the presence and strength of each Implementation Driver can be
assessed at regular intervals.
It is recommended that each assessment of Implementation Drivers be correlated with
proximal practitioner performance/ fidelity assessment outcomes and with eventual
client/ consumer outcomes. The essential outcome of implementation done well is
consistently high fidelity performance by practitioners. The essential outcome of high
fidelity performance by practitioners is consistently desirable outcomes for the children,
families, individuals, and communities receiving evidence-based or other innovative
services.
Variations of the following items have been used by Giard in her evaluations of statewide
implementations of an evidence-based program, and by Panzano and colleagues (2004;
2006) as part of an evaluation of the uses of a variety of evidence-based programs in
mental health settings. Recently, considerable work has been done by Joshua Patras and
colleagues at the Atferdssenteret - Norsk senter for studier av problematferd og innovativ
praksis - Universitet i Oslo (The Norwegian Center for Child Behavioral Development,
University of Oslo) to establish the reliability and validity of the items recommended
below. Patras et al. interviewed 213 practitioners, supervisors, and managers associated
with two established evidence-based programs in Norway. They found Cronbach alphas
in the 0.80 range for most of the implementation driver scales. They also found the
scales discriminated between the different implementation approaches used for the two
evidence-based programs. Further testing continues in Norway. Meanwhile, Sarah Kaye
at the University of Maryland and her colleagues nationally are using the following items
in studies of the uses of a variety of evidence-based programs and other innovations in
child welfare systems in all 50 states and tribal nations. The “implementation climate”
items are adapted from the work of Klein & Sorra (1996) who originally used the items in
a business setting.
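For readers who want to reproduce the kind of internal-consistency estimate reported by Patras and colleagues, here is a minimal sketch of Cronbach's alpha for one implementation driver scale; the ratings are invented, and the formula is the standard one rather than code from the cited studies.

# Minimal sketch of Cronbach's alpha for one driver scale.
# Rows are respondents; columns are the scale's items (1-7 agreement ratings).
import numpy as np

def cronbach_alpha(item_scores: np.ndarray) -> float:
    k = item_scores.shape[1]                                  # number of items
    item_var_sum = item_scores.var(axis=0, ddof=1).sum()      # sum of item variances
    total_var = item_scores.sum(axis=1).var(ddof=1)           # variance of total scores
    return (k / (k - 1)) * (1 - item_var_sum / total_var)

ratings = np.array([[6, 5, 6, 7],
                    [4, 4, 5, 4],
                    [7, 6, 6, 6],
                    [3, 4, 3, 4],
                    [5, 5, 6, 5]])
print(round(cronbach_alpha(ratings), 2))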
Implementation Driver Assessment
Key informants
The following set of items is intended to be used with three groups of key informants
within a human service organization. The information from all key informants should be
obtained within a four-week period to help assure a complete view of implementation
progress at one point in time within the organization. The measures can be repeated to
assess initial progress toward full implementation and to assess changes in
implementation over time (implementation supports fluctuate over time). The key
informants are:
1. Practitioners who are providing services to children, families, or adults.
Depending upon the number of practitioners using an innovation in an
organization, it may be useful to randomly sample 10 to 15 practitioners at each
point in time.
2. Supervisors/ Coaches who provide oversight and advice to the practitioners who
are asked to complete this survey.
3. Decision makers who are responsible for the overall organization or the portion of
the organization in which the practitioners and supervisors/ coaches work.
Decision makers are those people who have nearly-independent authority to make
changes in budgets, structures, and personnel roles and functions within an
organization.
The wording of items may need to be changed to reflect the usage of language or
identification of particular roles within a given organization.
Name of the innovation:______________________________________
NOTE: Responses should be specific to one particular innovation. If the organization is
implementing more than one innovation, a separate survey is required for each.
DEFINITIONS
1. Innovation
a. The practice or program that is the subject of this survey. Innovations require
new ways of working with consumers or other recipients of services provided
by a practitioner. NOTE: The practice or program may or may not have a
strong evidence-base to support it. The implementation questions below are
relevant to any attempt to establish any new ways of work in any organization.
2. Practitioner
a. The clinician or other person who is providing direct services to consumers or
others. A practitioner is a person who is being asked to use an innovation.
Enter 1 - 9 next to each item to indicate the extent to which you agree the statement is
true for your organization.
1 = Strongly Disagree
2 = Disagree
3 = Somewhat Disagree
4 = Neither Agree nor Disagree
5 = Somewhat Agree
6 = Agree
7 = Strongly Agree
8 = Does Not Exist in our organization
9 = Don’t Know
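The source does not prescribe a scoring rule for these responses. As an assumption-laden sketch, one common approach is to average the 1-7 agreement ratings for a Driver's items and to treat responses of 8 (Does Not Exist) and 9 (Don't Know) as missing:

# Sketch only: the averaging rule is an assumption, not specified in the source.
def driver_scale_score(responses):
    """responses: list of 1-9 codes from one respondent for one Driver's items."""
    valid = [r for r in responses if 1 <= r <= 7]   # drop 8 = Does Not Exist, 9 = Don't Know
    return sum(valid) / len(valid) if valid else None

# Example: nine Practitioner Selection items from one supervisor (invented values).
print(driver_scale_score([6, 5, 7, 9, 4, 6, 8, 5, 6]))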
Practitioner Selection
When an innovation is introduced to an organization (or sustained over time as staff
expansion or turnover occurs), practitioners must be employed to interact with
consumers using the new ways of work. The items in this section ask about the activities
related to recruiting, interviewing, or hiring new practitioners or existing practitioners
within the organization.
Within the past six months:
NOTE: A shorter time frame may be used to assess implementation progress more often
during each year. For example, surveying key informants every four months will provide
three data points a year.
1. Practitioners already employed by the provider organization are appointed to
carry out this innovation. For example, without much discussion existing staff
have been reassigned to use the innovation.
2. Practitioners already employed by the provider organization voluntarily applied to
carry out this innovation. For example, there was a process where currently
employed practitioners could learn about the innovation and decide if they wanted
to make use of it in their work with consumers.
3. New staff members have been specially hired to be the practitioners using the
innovation. That is, a new position was created and a new person was recruited
and employed to be a practitioner.
4. Interviews to determine whether or not to employ a person to be a practitioner for
this innovation have been conducted in-house by the provider organization's own
staff. Note that this question applies to interviews of practitioners who
voluntarily applied from within the organization as well as to those candidates
who applied to be new employees of the organization.
5. Interviews to determine whether or not to employ a person to be a practitioner for
this innovation have been conducted by one or more persons who are expert in the
innovation. For example, the interviewers are part of the research group that
developed the innovation or are specially trained to do interviews for this
innovation.
6. Earlier in their career, nearly every person involved in interviewing candidates
had been a practitioner using the innovation.
7. Interviews to determine whether or not to employ a person to be a practitioner for
this innovation primarily have been focused on questions specifically related to
the innovation.
8. Interviews to determine whether or not to employ a person to be a practitioner for
this innovation have included role plays to elicit responses from candidates. For
example, a role play situation might ask the candidate to respond to a situation
that is acted out by the persons doing the interview. The situation might be
typical of the kinds of issues a practitioner faces every day when using the
innovation.
9. Data regarding practitioner performance in employment interviews have been
collected and reported to management or a data collection unit.
Training
Innovations involve new ways of doing work with consumers and often require
practitioners to acquire new knowledge, skills, and abilities. These items ask about any
activities related to providing specialized information, instruction, or skill development in
an organized way to practitioners and other key staff in an organization.
Within the past six months:
1. Practitioners have been provided with specific preparation to carry out this
innovation.
2. Training for practitioners primarily has been provided in-house by the provider
organization's own staff.
3. Practitioner training primarily has been provided off site (e.g. a national or
regional training center; conference).
4. Practitioner training has been provided by one or more persons who are expert in
the innovation. For example, the trainers are part of the research group that
developed the innovation or are specially trained to do training for this
innovation.
5. Training for practitioners primarily has been focused on content specifically
related to the innovation.
6. Earlier in their career, nearly every trainer had been a practitioner using the
innovation.
7. Training for practitioners has included behavior rehearsals to develop knowledge
and skills to an established criterion. Behavior rehearsals are set up to allow the
practitioner to practice saying and doing aspects of the innovation they are
expected to use after training has ended.
8. Behavior rehearsals during training have included re-practice until a criterion for
skill acquisition has been reached (e.g. 80% of the components done properly).
9. Data regarding practitioner knowledge and performance relative to the innovation
have been assessed before and after training and reported to management or a data
collection unit.
Supervision/ Coaching
Practitioners often are supported by supervisors or coaches as they work with
consumers. These items ask about supervision/ coaching that may include personal
observation, instruction, feedback, emotional supports, some form of training on the job,
or debriefing sessions.
Within the past six months:
1. Each practitioner using this innovation has an assigned supervisor/ coach.
2. Supervision/ coaching for practitioners primarily is provided in-house by the
provider organization's own staff.
3. The supervisor/ coach for every practitioner is expert in the innovation. For
example, the coaches are part of the research group that developed the innovation
or are specially trained to coach practitioners using this innovation.
4. Earlier in their career, every supervisor/ coach had been a practitioner using the
innovation.
5. Supervision/ coaching primarily has been focused on helping practitioners
develop their knowledge and skills specific to the innovation being implemented.
6. Supervisors/ coaches have been careful to match the content of supervision with
the content of training.
7. Supervision/ coaching has occurred on a regular schedule known to the
practitioner.
8. Supervision/ coaching has occurred a minimum of once a week for each
practitioner who has been using the innovation for less than 6 months.
9. Supervision/ coaching for practitioners has included a considerable amount of
direct observation of clinical skills on the job.
10. Information and/or data regarding the results of practitioner supervision/ coaching
contacts have been routinely collected and reported to management or a data
collection unit.
Performance Assessment
Many organizations have some way to assess the quality and quantity of work done by
practitioners and others involved in providing services to consumers. The information
may be used for certification, merit pay increases, promotions, or decisions about
continued employment. These items ask about the nature and content of performance
assessments relative to practitioners’ use of the innovation in the organization.
Within the past six months:
1. The performance of each practitioner using this innovation has been evaluated
with respect to adherence. That is, the critical features of the innovation are
listed/ defined and a method is used to determine the practitioner’s use of each
critical feature.
2. The performance of each practitioner using this innovation has been evaluated
with respect to outcomes achieved. That is, the progress of each consumer being
served by a practitioner is measured.
3. Practitioner performance assessments have included direct observations and ratings
of knowledge, skills, and abilities.
4. Practitioner performance assessments have included opinions and ratings of
performance by consumers and stakeholders.
5. Nearly all of the practitioner performance assessment questions/ observations
have been specific to the innovation.
6. Practitioners have been well informed in advance regarding the purpose, content,
and methods used to carry out practitioner performance assessments.
7. Assessments of practitioners' performance have been conducted in-house by the
provider organization's own staff.
8. Assessments of practitioners’ performance have been conducted by individuals
who are specifically trained to evaluate the performance of practitioners using the
innovation.
9. Practitioners have received written results within 30 days of the performance
assessment.
10. Data regarding the results of practitioner performance assessments have been
routinely collected and reported to management or a data collection unit.
Decision Support Data Systems
Many organizations have some way to assess the overall performance of various units
and of the overall organization itself. The information may be used for internal or
external accountability purposes, quality improvement, or decisions about contracts and
services. These items ask about the nature and content of assessments relative to
decision making regarding the use of the innovation in the organization.
Within the past six months:
1. The provider organization has had a data collection and reporting system in place.
2. Assessments of organizational performance primarily have been conducted in-house
by the provider organization's own staff.
3. There have been specific protocols used for data collection and analysis (e.g.
specific measures, data collection routines, schedules for data collection).
4. There have been specific protocols used for data reporting (e.g. schedules, formats
for data reporting, schedule of meetings for discussion and interpretation of
results).
5. Organizational data collection measures primarily have been designed to acquire
information specific to the processes of the innovation.
6. Organizational data collection measures primarily have been designed to acquire
information specific to the outcomes of the innovation.
7. Information from data collection systems has been provided to practitioners at
least monthly.
8. Information from data collection systems has been provided to supervisors/
coaches at least monthly.
9. Information from data collection systems has been provided to managers and
directors at least quarterly.
Facilitative Administration
Many organizations establish structures and processes to support and actively pursue
agendas to encourage and support the use of an innovation by practitioners. These items
ask about any changes in the organization related to the use of the innovation.
Within the past six months:
1. Administrative practices and procedures have been altered to accommodate the
specific, identified needs of the innovation (e.g. personnel reporting
arrangements; accountability methods; financing methods).
2. Administrative policies have been altered to accommodate the specific, identified
needs of the innovation (e.g. revised policy and procedure manuals; modified
merit pay criteria).
3. Administrative staff (key directors, managers, and supervisors) have received
explicit training regarding their functions related to the innovation.
4. Adjustments have been made in organizational structures and roles specifically to
promote effective use of the innovation (e.g. alignment of internal organizational
systems to facilitate and support the work of practitioners, interviewers, trainers,
coaches, and performance assessors).
5. New administrative practices and procedures have been put in place to facilitate
the practice (e.g., new job descriptions and salary structures; new meeting
schedules, new Board committees; written implementation plan).
6. Administrative staff have routinely used data when making decisions about
changes in the organization.
7. Administrative staff have routinely used data when making decisions about staff
performance.
Systems Intervention
Organizations may work with the larger systems in the region and state to develop better
supports for the use of an innovation. These items ask about changes in external system
policies, management, or operating structures or methods made in response to
experience gained with the operation of the innovation.
Within the past six months:
1. Administrative staff of the provider organization (key directors, managers, and
supervisors) have actively worked to change external systems so they are more
hospitable to the specific methods, philosophy, and values of the innovation.
2. Administrative staff of the provider organization (key directors, managers, and
supervisors) have received explicit training with respect to specific approaches for
intervening in external systems.
3. Administrative staff of the provider organization (key directors, managers, and
supervisors) have secured adequate resources to initiate and use the innovation
effectively (e.g. assure appropriate referrals, sufficient funding, staff certification,
agency accreditation, consumer and stakeholder support, community support).
4. Administrative staff of the provider organization (key directors, managers, and
supervisors) have secured adequate resources to sustain the innovation effectively
(e.g. assure appropriate referrals, sufficient funding, staff certification, agency
accreditation, consumer and stakeholder support, community support).
Leadership
Organizations have leaders at various levels who make decisions that impact the way
practitioners work with consumers. These items ask about the nature of leadership
within the organization.
Within the past six months:
1. Leaders within the organization continually have looked for ways to align
practices with the overall mission, values, and philosophy of the organization.
2. Leaders within the organization have established clear and frequent
communication channels to provide information to practitioners and to hear about
their successes and concerns.
3. Leaders within the organization have convened groups and worked to build
consensus when faced with issues on which there was little agreement about how
to proceed.
4. Leaders within the organization have provided specific guidance on technical
issues where there was sufficient clarity about what needed to be done.
5. Leaders within the organization have been fair, respectful, considerate, and
inclusive in their dealings with others.
6. Leaders within the organization have been very good at focusing on the issues
that really matter at the practice level.
7. Leaders within the organization have been very good at giving reasons for
changes in policies, procedures, or staffing.
8. Leaders within the organization have been actively engaged in resolving any and
all issues that got in the way of using the innovation effectively.
9. Leaders within the organization have actively and routinely sought feedback from
practitioners and others regarding supports for effective use of the innovation.
10. Leaders within the organization have been actively involved in such things as
conducting employment interviews, participating in practitioner training,
conducting performance assessments of individual practitioners, and creating
more and better organization-level assessments to inform decision making.
Implementation Climate
Organizations have a “personality” that is reflected in the day-to-day operations of the
organization and the way staff members view their work.
The following items were adapted from Katherine Klein’s MRPTOO Survey Measures,
described in her paper “Implementing Computerized Technology: An Organizational
Analysis” (Klein, Conn, & Sorra, 2001). In Klein’s work, analyses were conducted at both
the organizational and the individual level; Cronbach’s alpha was reported as .83 at the
individual level and .93 at the organizational level.
Response scale: (1 = not true, 2 = slightly true, 3 = somewhat true, 4 = mostly true, and 5
= true). (R = Reverse scored.)
1. This innovation is a top priority at this organization.
2. At this organization, this innovation takes a back seat to other projects. (R)
3. People put a lot of effort into making this innovation a success here.
4. People at this organization think that the implementation of this innovation is important.
5. One of this organization’s main goals is to use this innovation effectively.
6. People here really don't care about the success of this innovation. (R)
7. In this organization, there is a big push for people to make the most of this innovation.
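The (R) items are reverse scored before the seven items are combined, and the alpha values noted above are internal-consistency estimates from Klein's own data. As a minimal sketch of how such a scale might be scored, the Python example below reverse-scores items 2 and 6 on the 1-5 response scale and computes Cronbach's alpha. The response matrix is hypothetical, and the reported .83/.93 values come from Klein's samples, not from this toy data.

```python
# A minimal scoring sketch (an assumption about standard psychometric practice,
# not a procedure taken from the source): reverse-score the (R) items and
# compute Cronbach's alpha for the seven implementation climate items.
from statistics import variance

def reverse_score(value, scale_min=1, scale_max=5):
    """Reverse a rating on the 1-5 response scale (5 -> 1, 4 -> 2, ...)."""
    return scale_max + scale_min - value

def cronbach_alpha(items):
    """items: one list of responses per item, already reverse-scored where needed."""
    k = len(items)
    totals = [sum(per_respondent) for per_respondent in zip(*items)]
    sum_item_variances = sum(variance(item) for item in items)
    return (k / (k - 1)) * (1 - sum_item_variances / variance(totals))

# Hypothetical responses from six respondents; rows correspond to items 1-7 above.
raw = [
    [5, 4, 5, 3, 4, 5],  # item 1
    [2, 2, 1, 3, 2, 1],  # item 2 (R)
    [4, 4, 5, 3, 4, 5],  # item 3
    [5, 3, 4, 3, 4, 4],  # item 4
    [4, 4, 5, 2, 3, 5],  # item 5
    [1, 2, 1, 3, 2, 1],  # item 6 (R)
    [5, 4, 4, 3, 3, 5],  # item 7
]
reverse_items = {1, 5}  # zero-based indices of items 2 and 6
scored = [
    [reverse_score(v) for v in row] if i in reverse_items else row
    for i, row in enumerate(raw)
]
print(round(cronbach_alpha(scored), 2))  # alpha for the toy data (about .95)
```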