Measuring Post-Licensure Competence
The Nursing Performance Profile
Research Team









Janine Hinton, RN, PhD
Mary Mays, PhD
Debra Hagler, RN, PhD
Pamela Randolph, RN, MS
Beatrice Kastenbaum, RN, MSN, CNE
Ruth Brooks, RN, MS, BC
Nick DeFalco, RN, MS
Kathy Miller, RN, MS
Dan Weberg, RN, MHI
Support


Funded by an NCSBN CRE grant

Supported by:
Scottsdale Community College
Arizona State University
Arizona State Board of Nursing

Statement of the Problem

A valid, reliable practice assessment is needed to support intervention on the public's behalf when a pattern of nursing performance results in, or is likely to result in, patient harm.
Literature Review


Medical errors are a leading cause of death (IOM, 2000)

Written tests do not directly measure performance (Auewarakul, Downing, Jaturatamrong, & Praditsuwan, 2005)

Multiple observations of a nurse's performance have provided evidence of competent practice (Williams, Klamen, & McGaghie, 2003)
Literature Review (cont.)

High-fidelity simulation technology allows the creation of reproducible scenarios to evaluate nursing performance (Boulet et al., 2011; Kardong-Edgren, Adamson, & Fitzgerald, 2010)

Nursing and health care leaders have called for performance assessments to evaluate competence and support remediation (Benner, Sutphen, Leonard, & Day, 2010; IOM, 2011)
Purpose of Study

To develop and evaluate a high-stakes simulation testing process to measure minimally safe nursing practice competence and identify remediation needs.
Methodology

A process was needed to apply sophisticated measures of validity and reliability:

Participants appeared in 3 simulation videos
3 subject matter experts rated each video on 41 measures of competency
Raters were blind to participant ability, experience, and order of testing
Videos presented a range of safe and unsafe performance
Obtained ratio-level data suitable for parametric, inferential statistical analysis
Filming Participant Demographics

Criteria: newly licensed RNs with less than 3 years of nursing experience (N=21)
Average age = 32
95% female
58% White, 16% Black, 26% Hispanic
79% AD; 21% BSN
Mean experience = 1.05 years
Majority (74%) had some experience with simulation
Rater Demographics

Criteria: BSN, 3 years of experience, and work in a role that involves evaluating others (N=4)
Average experience = 12.5 years
Age 31-51
White, female
Education: 3 BSN, 1 MS
Instrument Development

Developed and established initial validity/reliability before funding
TERCAP served as the theoretical framework (Benner et al., 2006; Woods & Doan-Johnson, 2003)
Survey items from NCSBN's Clinical Competency Assessment of Newly Licensed Nurses were adapted (NCSBN, 2007)
Mapped to QSEN competencies
Categories of Items (TERCAP)

Professional Responsibility
Client Advocacy
Attentiveness
Clinical Reasoning, noticing
Clinical Reasoning, understanding
Communication
Prevention
Procedural Competency
Documentation
Example of One Item Category's Competencies

Prevention:
 Infection control
 2 client identifiers
 Appropriate positioning
 Safe environment

Scoring: 4 Possibilities

Performance or action is consistent with standards of practice and free from actions that may place the client at risk for harm
Fails to perform, or performs in a manner that exposes the client to risk for harm
No opportunity to observe in the scenario
Blank
Scoring the Test

No weighted items
No pass/fail standard
Description of the nurse's performance across 9 categories of competency
Final rating of each item based on inter-rater agreement: at least 2 of 3 raters agree (sketched below)
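
A minimal sketch of the 2-of-3 agreement rule, assuming ratings are recorded as simple labels; the label strings and function name are illustrative, not taken from the study instrument:

```python
from collections import Counter

# The four possible marks per the scoring slide (labels are illustrative):
# "safe"   = consistent with standards, free from risk
# "unsafe" = fails to perform / exposes client to risk
# "n/o"    = no opportunity to observe in the scenario
# ""       = left blank
def final_item_rating(marks: list[str]) -> str | None:
    """Consensus rating for one item scored by 3 raters: the label at
    least 2 raters chose, or None if all three disagree."""
    label, count = Counter(marks).most_common(1)[0]
    return label if count >= 2 else None

print(final_item_rating(["safe", "safe", "unsafe"]))  # -> safe
print(final_item_rating(["safe", "unsafe", "n/o"]))   # -> None (no majority)
```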
Scenarios

3 sets of 3 scripted scenarios = 9 total
Adult acute care, common diagnoses
Each scenario had opportunities to observe all performance items
Each sim patient had a hospital-like chart with information: labs, history, MAR, orders
Simulation Testing/Rating

21 nurse performers = 63 videos:
Scenario Set 1 = 5 participants
Scenario Set 2 = 8 participants
Scenario Set 3 = 8 participants

Each video evaluated by 3 raters:
189 rating instruments
41 items rated on each instrument
7,749 ratings (arithmetic checked below)
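
The rating volume follows directly from the design; a quick arithmetic check:

```python
participants = 21            # nurse performers, filmed in 3 scenarios each
videos = participants * 3    # 63 simulation videos
instruments = videos * 3     # 3 raters per video -> 189 rating instruments
ratings = instruments * 41   # 41 items per instrument

assert (videos, instruments, ratings) == (63, 189, 7_749)
```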
Analysis Procedures

Predictive Analytics Software (PASW, v. 18.0.3, SPSS Inc., Chicago, IL)

Frequency analysis to identify instrument properties:
Used as intended
Interrater reliability
Sensitive to common practice errors (construct validity)

Cronbach's alpha (intercorrelation among items) was used to measure internal consistency (formula sketched below)
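
Cronbach's alpha compares the summed item variances with the variance of the total score; a minimal NumPy sketch of the standard formula (the toy matrix is invented, not study data):

```python
import numpy as np

def cronbach_alpha(scores) -> float:
    """scores: (n_subjects, n_items) matrix of item scores.
    alpha = k/(k-1) * (1 - sum of item variances / variance of total)."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1).sum()
    total_var = scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Toy data: 4 subjects x 3 items.
print(cronbach_alpha([[1, 1, 1], [0, 1, 0], [1, 0, 1], [0, 0, 0]]))
```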
Analysis Procedures (cont.)

ANOVA used to:
Assess the ability of the instrument to distinguish between experienced and inexperienced nurses (a simplified sketch follows)
Assess potential bias created by administration methods
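
The study used a two-way mixed ANOVA (experience group between subjects, competency category within subjects); as a simplified illustration, the between-group effect alone can be tested with SciPy's one-way ANOVA. The numbers below are invented, not study data:

```python
from scipy.stats import f_oneway

# Hypothetical per-nurse total scores (more negative = more errors).
inexperienced = [-14.0, -11.0, -13.0, -9.0, -12.0]  # < 1 year experience
experienced = [-7.0, -6.0, -9.0, -5.0, -8.0]        # 1-3 years experience

f_stat, p_value = f_oneway(inexperienced, experienced)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```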

Results

Less than 1% of items were left blank or not observed, indicating the scenarios were comprehensive
Interrater reliability: across all 41 items, at least 2 raters agreed on 99.12% of ratings
Internal consistency: Cronbach's alpha = 0.84-0.91 for the 41 items combined and for separate categories
Results

Construct validity: pass rates should mirror those in other studies
Infection control: pass rate 57%, mainly due to lack of hand hygiene
Documentation: pass rate 29%, an area of frequent concern in practice
Results

Criterion validity:
2 groups by nursing experience: <1 year or 1-3 years
2-way mixed ANOVA
Experienced nurses made fewer errors than new nurses (p < 0.001)
Significant in 6 of 9 categories:
Attentiveness
Clinical Reasoning (noticing)
Clinical Reasoning (understanding)
Communication
Procedural Competency
Documentation
Comparison of Groups by Category

Category                             Inexperienced M (SD)   Experienced M (SD)   p value
Professional Responsibility          -0.33 (0.58)           -0.22 (0.49)         *
Client Advocacy                      -0.57 (0.81)           -0.25 (0.50)         *
Attentiveness                        -0.76 (1.00)           -0.17 (0.38)         0.002
Clinical Reasoning (Noticing)        -1.19 (1.33)           -0.47 (0.81)         0.01
Clinical Reasoning (Understanding)   -1.67 (1.28)           -0.81 (0.95)         0.005
Communication                        -1.48 (1.44)           -0.75 (0.97)         0.03
Prevention                           -1.57 (1.50)           -1.31 (0.82)         *
Procedural Competency                -2.76 (2.32)           -1.19 (1.37)         0.002
Documentation                        -3.33 (0.73)           -2.61 (1.20)         0.02

* Difference not statistically significant
NPP Results

[Profile plots compared an inexperienced nurse (0.5 year of experience) against nurses with 1 year and 2 years of experience; a plotting sketch follows]
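
The original slide's profile plots did not survive extraction; here is a minimal matplotlib sketch of how per-category profiles can be drawn, reusing the group means from the comparison table above (all plotting choices, and the assumption that 0 means no observed errors, are illustrative):

```python
import matplotlib.pyplot as plt

categories = ["Prof. Resp.", "Advocacy", "Attentiveness", "CR Noticing",
              "CR Understanding", "Communication", "Prevention",
              "Procedural", "Documentation"]
# Group means from the "Comparison of Groups by Category" table.
inexperienced = [-0.33, -0.57, -0.76, -1.19, -1.67, -1.48, -1.57, -2.76, -3.33]
experienced = [-0.22, -0.25, -0.17, -0.47, -0.81, -0.75, -1.31, -1.19, -2.61]

plt.plot(categories, inexperienced, marker="o", label="< 1 yr experience")
plt.plot(categories, experienced, marker="s", label="1-3 yrs experience")
plt.ylabel("Mean score (0 assumed to mean no observed errors)")
plt.xticks(rotation=45, ha="right")
plt.legend()
plt.tight_layout()
plt.show()
```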
Results

Test bias:
Scenario was not significant
Category was significant: some competency categories were more difficult
Communication, prevention, procedural competency, and documentation were more difficult
Results

Test bias (continued):
Scenario set: significant only for documentation, which may have been easier in Set 1
Order of testing and practice effect not significant
Location of testing not significant
Summary

Instrument has adequate validity and reliability
Raters used the instrument as instructed and in a reproducible manner
Items were highly interrelated
Sensitive to common errors
Inexperienced nurses made more errors
Test not biased
Plots permit users to visualize performance
Implications

Provides a valid, explicit measure of performance that regulatory boards could use, along with other data, to determine whether practice errors are a one-time occurrence or a pattern of high-risk behavior
Potential uses in education and practice to assess performance and the effect of educational interventions
Limitations

Volunteer subjects: not random or representative
Sample size too small to support confirmatory factor analysis of the instrument's construct validity
Tailored to a specific context and purpose
Limitations of simulation: nonverbal and skin-change cues missing; participants must suspend disbelief
Future Research

Funded by NCSBN for Phase II:
Criterion validity by comparing RN self-ratings and supervisor ratings
Compare to education, certification
Broader cross-section of experienced nurses recruited
References

Auewarakul, C., Downing, S. M., Jaturatamrong, U., & Praditsuwan, R. (2005). Sources of validity evidence for an internal medicine student evaluation system: An evaluative study of assessment methods. Medical Education, 39, 276-283.

Benner, P., Sutphen, M., Leonard, V., & Day, L. (2010). Educating nurses: A call for radical transformation. San Francisco, CA: Jossey-Bass.

Boulet, J. R., Jeffries, P. R., Hatala, R. A., Korndorffer, J. R., Feinstein, D. M., & Roche, J. P. (2011). Research regarding methods of assessing learning outcomes. Simulation in Healthcare, 6(Suppl.), S48-S51.

Institute of Medicine (IOM). (2011). The future of nursing: Leading change, advancing health. Washington, DC: National Academies Press.
References (cont.)

Institute of Medicine (IOM). (2000). To err is human: Building a safer health system. Washington, DC: National Academies Press.

Kardong-Edgren, S., Adamson, K. A., & Fitzgerald, C. (2010). A review of currently published evaluation instruments for human patient simulation. Clinical Simulation in Nursing, 6(1). doi:10.1016/j.ecns.2009.08.004

Williams, R. G., Klamen, D. A., & McGaghie, W. C. (2003). Cognitive, social and environmental sources of bias in clinical performance ratings. Teaching and Learning in Medicine, 15(4), 270-292.