Measuring Patients’ Experiences With Individual Physicians:
Are We Ready for Primetime?
___________________________________________________________________________
Dana Gelb Safran, ScD
The Health Institute
Institute for Clinical Research and Health Policy Studies
Tufts-New England Medical Center
Presented at:
Academy Health Annual Research Meeting
San Diego, CA
7 June 2004
Focusing on Physicians

• Survey-based measurement of patients’ experiences with individual physicians is not new.
• What’s new: efforts to standardize and the potential for public reporting.
• The IOM report Crossing the Quality Chasm gave “patient-centered care” a front-row seat.
• Methods and metrics have been honed through 15 years of research and through several recent large-scale demonstration projects.
• But putting these measures to use raises many questions about feasibility and value.
Commonwealth Fund and Robert Wood Johnson Foundation
Ambulatory Care Experiences Survey Project

• Statewide demonstration project in Massachusetts
• Testing the feasibility and value of measuring patients’ experiences with individual primary care physicians and practices
• Primary impetus: plans seeking to standardize surveys
• The IOM “Chasm” report further propelled the work
• Collaboration:
  - 6 Payers
  - 6 Physician Network Organizations
  - Massachusetts Medical Society
  - Massachusetts Health Quality Partners

Principal Questions of the Statewide Pilot

• What sample size is needed for a highly reliable estimate of patients’ experiences with a physician?
• What is the risk of misclassification under varying reporting frameworks?
• Is there enough performance variability to justify measurement?
• How much of the measurement variance is accounted for by physicians as opposed to other elements of the system (practice site, network organization, plan)?
Measures from the Ambulatory Care Experiences Survey (ACES)

[Diagram: ACES measures arrayed around primary care.]
• Organizational access
• Continuity (longitudinal, visit-based)
• Integration (team, specialists, lab)
• Comprehensiveness (whole-person orientation; health promotion/patient empowerment)
• Communication
• Interpersonal treatment
• Trust

Sampling Framework

Region        Participating payers             Sites    Physicians
Eastern MA    Tufts, BCBSMA, HPHC, Medicaid     34         143
Central MA    BCBSMA, Fallon, Medicaid          23          35
Western MA    BCBSMA, HNE, Medicaid             10          37

Six physician network organizations (PNO1–PNO6) participated across the three regions. In some cells of the framework both commercially insured and Medicaid patients were sampled; in others, only commercially insured patients were sampled.
Sample Size Requirements for Varying Physician-Level Reliability Thresholds

Number of responses per physician needed to achieve desired MD-level measurement reliability:

                                          Reliability
Measure                               0.70     0.80     0.95
ORGANIZATIONAL/STRUCTURAL FEATURES OF CARE
  Organizational access                23       39       185
  Visit-based continuity               13       22       103
  Integration                          39       66       315
DOCTOR-PATIENT INTERACTIONS
  Communication                        43       73       347
  Whole-person orientation             21       37       174
  Health promotion                     45       77       366
  Interpersonal treatment              41       71       337
  Patient trust                        36       61       290

What is the Risk of Misclassification?

• Not simply 1 − αMD
• Depends on:
  - Measurement reliability (αMD)
  - Proximity of score to the cutpoint
  - Number of cutpoints in the reporting framework
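The slides do not show the calculation behind the sample-size table above, but the required sample sizes are consistent with the standard Spearman-Brown relation between a measure's single-respondent reliability (the physician-level intraclass correlation, rho) and the reliability of a physician's mean score over n respondents. A minimal sketch in Python, using rho = 0.10 as an illustrative value rather than an ACES estimate:

    # Sketch only: Spearman-Brown relation between responses per physician (n)
    # and physician-level reliability, given a single-respondent intraclass
    # correlation rho. rho = 0.10 is illustrative, not an ACES estimate.

    def reliability_of_mean(rho, n):
        """Reliability of a physician's mean score based on n patient responses."""
        return n * rho / (1 + (n - 1) * rho)

    def n_required(rho, target):
        """Responses per physician needed for a target reliability (round up)."""
        return target * (1 - rho) / (rho * (1 - target))

    for target in (0.70, 0.80, 0.95):
        print(target, n_required(0.10, target))
    # -> roughly 21, 36, and 171 responses, comparable to the
    #    whole-person orientation row in the table above (21, 37, 174).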
Risk of Misclassification at Varying Distances from the Benchmark and Varying Measurement Reliability (αMD)

Probability of misclassification (%) at varying thresholds of MD-level reliability:

MD mean score distance
from benchmark (points)    αMD = .70    αMD = .80    αMD = .90
          1                  38.0         34.5         27.4
          2                  27.1         21.2         11.5
          3                  18.0         11.5          3.6
          4                  11.1          5.5          0.8
          5                   6.3          2.3          0.1
          6                   3.3          0.8        <0.001
          7                   1.6          0.3        <0.001
          8                   0.7        <0.001       <0.001
          9                   0.3        <0.001       <0.001
         10                   0.1        <0.001       <0.001
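The slides do not spell out how these probabilities were computed. One calculation that reproduces values of this magnitude treats a physician's observed mean as normally distributed around the true score, with a standard error implied by the MD-level reliability, and takes the probability of landing on the wrong side of the benchmark. The sketch below assumes a between-physician standard deviation of about 5 points, an illustrative assumption rather than an ACES estimate:

    # Sketch only: probability that a physician whose true score lies d points
    # from the benchmark is observed on the wrong side of it. Assumes observed
    # means are normal around the true score; sd_true = 5.0 is an assumed
    # between-physician SD, not an ACES estimate.
    import math

    def misclassification_prob(distance, alpha_md, sd_true=5.0):
        se = sd_true * math.sqrt((1 - alpha_md) / alpha_md)   # SE of the MD mean
        z = distance / se
        return 0.5 * (1 - math.erf(z / math.sqrt(2)))          # normal tail prob.

    # Example: at alpha_md = 0.70 the SE is about 3.3 points, so a physician
    # 1 point from the benchmark is misclassified about 38% of the time and one
    # 5 points away about 6% of the time, similar to the first column above.
    # The "area of uncertainty" half-widths on the following slides are roughly
    # 1.96 * SE: about 6.4, 4.9, and 3.3 points at alpha_md = 0.7, 0.8, and 0.9
    # under this assumed SD.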
Certainty and Uncertainty in Classification: Comparison with a Single Benchmark

[Figure: MD mean scores on a 0–100 scale with a single benchmark at the 50th percentile (score ≈ 65). Physicians far enough below or above the benchmark are classified as significantly below or significantly above it; around the benchmark lies an area of uncertainty with a half-width of about 6.3 points at αMD = 0.7, 4.9 points at αMD = 0.8, and 3.26 points at αMD = 0.9.]

Certainty and Uncertainty in Classification: Cutpoints at 10th & 90th Percentile

[Figure: a three-tier reporting framework (bottom, middle, and top tier, i.e., substantially below average, average, and substantially above average) with cutpoints at the 10th percentile (52.9) and 90th percentile (76.3) of MD mean scores; the 50th percentile is 64.6. The same areas of uncertainty (about 6.3, 4.9, and 3.26 points for αMD = 0.7, 0.8, and 0.9) surround each cutpoint.]
Variability Among Physicians (Communication)

[Figure: distribution of physician (MD) mean Communication scores on a 0–100 scale, showing the number of doctors at each score level and bands labeled substantially below average, below average, average, above average, and substantially above average. Percentiles of the MD score distribution: 10th = 52.9, 25th = 58.5, 50th = 64.6, 75th = 70.8, 90th = 76.3. Annotations reference measure reliability (αMD) levels of 0.6 to 0.9.]

Variability Across Practice Sites (Communication)

[Figure: group and practice-site mean Communication scores, with the 25th–75th percentile range of group and site scores, shown for the Eastern, Central, and Western regions.]

Variability Among Physicians within Sites (Communication)

[Figure: for individual practice sites (Sites A-1 through A-4), the site mean Communication score, each physician's (MD) mean score, and the 25th–75th percentile range of MD scores within the site.]

Allocation of Explainable Variance: Doctor-Patient Interactions

[Figure: for each doctor-patient interaction measure (communication, interpersonal treatment, patient trust, whole-person orientation, health promotion), the share of explainable variance attributable to the individual doctor versus the practice site, network, and plan; the doctor accounts for the largest share on these measures.]

Allocation of Explainable Variance: Organizational/Structural Features of Care

[Figure: for each organizational/structural measure (organizational access, visit-based continuity, integration), the share of explainable variance attributable to the doctor, practice site, network, and plan; doctor and site together account for most of the explainable variance.]
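The presentation does not describe the statistical model behind these variance allocations. A conventional approach is a nested random-effects (variance-components) model with levels for plan, network, practice site, and doctor. The sketch below shows one way such a model could be fit with statsmodels, assuming a hypothetical patient-level file with columns plan, network, site, doctor, and score; all names are illustrative, not from ACES:

    # Sketch only: nested variance-components model for allocating score
    # variance across plan, network, practice site, and doctor. File and column
    # names are hypothetical; this is one conventional approach, not the
    # analysis reported in the slides.
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("aces_patient_level.csv")   # one row per patient response

    model = smf.mixedlm(
        "score ~ 1",                 # overall mean only; covariates could be added
        data=df,
        groups="plan",               # top level of the nesting
        re_formula="1",              # random intercept for plan
        vc_formula={                 # variance components within plan
            "network": "0 + C(network)",
            "site": "0 + C(site)",
            "doctor": "0 + C(doctor)",
        },
    )
    result = model.fit()
    print(result.summary())          # variance component estimates by level

    # Each level's share of explainable variance is its estimated variance
    # component divided by the sum of the plan, network, site, and doctor
    # components; residual variance reflects patient-to-patient variation.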
Summary and Implications

• Feasibility of obtaining highly reliable measures of patients’ experiences with individual physicians and practices has been demonstrated.
• With sample sizes of 45 patients per physician, most survey-based measures achieved physician-level reliability of 0.7-0.85.
• With a 3-level reporting framework, risk of misclassification is low, except at the boundaries, where risk is high irrespective of measurement reliability.

Summary and Implications (cont’d)

• Individual physicians and practice sites accounted for the majority of system-related variance on all measures.
• Within sites, variability among physicians was substantial.
• The merits and value of moving quality measurement beyond health plans and network organizations are clear.
• By adding these aspects of care to our nation’s portfolio of quality measures, we may reverse declines in interpersonal quality of care.
Certainty and Uncertainty in Classification: Multiple Cutpoints

[Figure: a four-tier reporting framework (bottom, 2nd, 3rd, and top tier) with cutpoints at the 10th percentile (53), 50th percentile (65), and 90th percentile (76) of MD mean scores on a 0–100 scale. Areas of uncertainty with half-widths of about 6.3, 4.9, and 3.26 points (αMD = 0.7, 0.8, and 0.9) surround each cutpoint.]

Dana Gelb Safran, Sc.D.
The Health Institute, Tufts-New England Medical Center, Boston, MA
Phone: 617-636-8611
dsafran@tufts-nemc.org