Measuring and Reporting
Patients’ Experiences with
Their Doctors
Process, Politics and Public
Reports in Massachusetts
Melinda Karp
MHQP Director of Programs
June 26, 2006
Today’s Objectives
• Provide brief background on MHQP as important context for measurement and reporting efforts
• Describe the evolution of MHQP's agenda for measuring and reporting patient experiences: key methods questions in moving from research to large-scale implementation
• Describe stakeholder perspectives and decision points around key reporting issues
Stakeholders at the MHQP “Table”
• Provider Organizations
– MA Hospital Association
– MA Medical Society
– 2 MHQP Physician Council representatives
• Government Agencies
– MA EOHHS
– CMS Region 1
• Employers
– Analog Devices
• Health Plans
– Blue Cross Blue Shield of Massachusetts
– Fallon Community Health Plan
– Harvard Pilgrim Health Care
– Health New England
– Tufts Health Plan
• Consumers
– Exec. Director, Health Care for All
– Exec. Director, NE Serve
• Academic
– Harris Berman, MD, Board Chair
The Evolution of MHQP’s Patient
Experience Measurement Agenda
2002-2003: Demonstration project in partnership with The Health Institute (funded by the Commonwealth and RWJF)
2004-2005: Development of a viable business model for implementing a statewide patient experience survey
2005-2006: Fielding and reporting of the statewide survey
“1st Generation” Questions: Moving
MD-Level Measurement into Practice
• Is there enough performance variability to justify measurement?
• What sample size is needed for a highly reliable estimate of patients' experiences with a physician?
• How much of the measurement variance is accounted for by physicians, as opposed to other elements of the system (practice site, network organization, plan)?
• What is the risk of misclassification under varying reporting frameworks?
Sample Size Requirements for Varying
Physician-Level Reliability Thresholds
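The sample sizes behind thresholds like these follow from the Spearman-Brown projection, which links the number of completed surveys per physician to the reliability of that physician's mean score. A minimal sketch in Python; the single-survey ICC is an illustrative assumption, not an MHQP estimate:

```python
# Spearman-Brown projection: reliability of a physician's mean score
# computed from n patient surveys. The ICC (single-survey reliability)
# is an illustrative assumption, not an MHQP estimate.
import math

def md_reliability(n_patients: int, icc: float) -> float:
    """Physician-level reliability of the mean of n patient ratings."""
    return n_patients * icc / (1 + (n_patients - 1) * icc)

def patients_needed(target: float, icc: float) -> int:
    """Smallest n achieving the target physician-level reliability."""
    return math.ceil(target * (1 - icc) / (icc * (1 - target)))

icc = 0.05  # illustrative single-survey ICC
for target in (0.70, 0.80, 0.90):
    print(f"reliability {target:.2f}: n = {patients_needed(target, icc)}")
```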
Allocation of Explainable Variance:
Doctor-Patient Interactions
[Stacked bar chart: for measures of doctor-patient interaction, the individual physician accounts for most of the explainable variance (62%, 74%, 77%, 70%, and 84% across the measures shown), with the remainder (16-38%) spread across practice site, network, and plan.]
Source: Safran et al. JGIM 2006; 21:13-21.
Allocation of Explainable Variance:
Organizational/Structural Features of Care
[Stacked bar chart: for organizational/structural features of care (organizational access, visit-based continuity, integration), the practice site accounts for the largest share of explainable variance (45%, 56%, and 77%, respectively), the physician for 39%, 36%, and 23%, and network and plan for the small remainder.]
Source: Safran et al. JGIM 2006; 21:13-21.
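Both allocations can be illustrated with a small simulation in which each doctor's true score is the sum of independent random effects at the plan, network, site, and doctor levels. The component sizes here are invented to produce a doctor-dominated split like the first chart; they are not the published estimates:

```python
import numpy as np

# Simulate nested random effects (plan > network > site > doctor) and
# report each level's share of the explainable (between-doctor)
# variance. Component sizes are illustrative, not the MHQP estimates.
rng = np.random.default_rng(42)
n_plans, nets, sites, docs = 4, 5, 6, 8  # units per parent level
sd = {"plan": 0.15, "network": 0.17, "site": 0.39, "doctor": 0.89}

plan = rng.normal(0, sd["plan"], n_plans)
net = rng.normal(0, sd["network"], (n_plans, nets))
site = rng.normal(0, sd["site"], (n_plans, nets, sites))
doc = rng.normal(0, sd["doctor"], (n_plans, nets, sites, docs))

# Each doctor's true score sums the effects above it; patient-level
# noise is omitted because the charts show only explainable variance.
true_scores = (plan[:, None, None, None] + net[:, :, None, None]
               + site[:, :, :, None] + doc)

explainable = sum(s ** 2 for s in sd.values())
for level, s in sd.items():
    print(f"{level:>7}: {100 * s ** 2 / explainable:4.1f}% of explainable variance")
print(f"simulated between-doctor variance: {true_scores.var():.3f}"
      f" (theoretical: {explainable:.3f})")
```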
Risk of Misclassification
[Chart: with three reporting categories (substantially below average, average, substantially above average) and cutpoints near the 10th and 90th percentile scores (52.9 and 76.3 on a 0-100 scale), misclassification risk stays small at practical MD-level measure reliabilities: roughly 0.01% at 0.9, about 0.5-0.6% at 0.8, and about 2.4% at 0.7.]
Source: Safran et al. JGIM 2006; 21:13-21.
Risk of Misclassification
[Chart: with five reporting categories (substantially below average, below average, average, above average, substantially above average) and cutpoints at the 10th, 25th, 50th, 75th, and 90th percentile scores (52.9, 58.5, 64.6, 70.8, and 76.3), misclassification risk rises sharply: on the order of 2-20% at a measure reliability of 0.9, and up to roughly 39% at 0.5.]
Source: Safran et al. JGIM 2006; 21:13-21.
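The contrast between these two slides can be approximated in closed form: treat true physician scores as standard normal, add normal measurement error sized so that var(true) / var(observed) equals the measure reliability, and compute the chance that the observed score lands outside the true score's percentile category. The normality assumptions and cutpoints are illustrative, so the numbers will not exactly match the published figures:

```python
from statistics import NormalDist

# Misclassification risk for a physician whose true score sits at a
# given percentile. Normality of true scores and errors is an
# illustrative assumption; the published analysis used the actual
# score distributions, so exact numbers differ.
STD = NormalDist()  # standardized distribution of true scores

def misclass_risk(reliability: float, true_ptile: float,
                  cut_ptiles: tuple) -> float:
    err_sd = (1 / reliability - 1) ** 0.5  # var(true)/var(obs) = reliability
    z = STD.inv_cdf(true_ptile)
    cuts = [STD.inv_cdf(p) for p in cut_ptiles]
    true_cat = sum(z > c for c in cuts)    # category of the true score
    observed = NormalDist(mu=z, sigma=err_sd)
    edges = [float("-inf")] + cuts + [float("inf")]
    in_cat = observed.cdf(edges[true_cat + 1]) - observed.cdf(edges[true_cat])
    return 1 - in_cat                      # P(observed lands elsewhere)

THREE = (0.10, 0.90)              # substantially below / average / above
FIVE = (0.10, 0.25, 0.75, 0.90)   # adds below- and above-average bands
for r in (0.9, 0.8, 0.7):
    print(f"reliability {r}: median MD misclassified "
          f"{misclass_risk(r, 0.50, THREE):.2%} with 3 categories, "
          f"{misclass_risk(r, 0.50, FIVE):.2%} with 5")
```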
MHQP 2005 Statewide Survey
• Physician-level survey format
• Site-level sampling to support site-level reporting
• Estimated samples required to achieve > 0.70 site-level reliability
Site Reliability Chart: Integration
Sample sizes needed to achieve site
reliability for integration domain.
Practice Size        Site-Level Reliability (patients sampled per site)
(Number of       A          B             C              D
Doctors)         (> 0.7)    (0.5-0.69)    (0.34-0.49)    (< 0.34)
3                ≥ 38       16-37         8-15           ≤ 7
4                ≥ 48       20-47         10-19          ≤ 9
5                ≥ 57       24-56         12-23          ≤ 11
6                ≥ 65       28-64         14-27          ≤ 13
7                ≥ 72       31-71         16-30          ≤ 15
8                ≥ 79       33-78         17-32          ≤ 16
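The pattern in the A column (larger practices need more surveys) has a simple reading: if a site's score averages its own doctors, the doctor-to-doctor component of the between-site signal shrinks as the roster grows, leaving less signal relative to patient-level noise. A sketch with illustrative variance components tuned to roughly reproduce the A column; these are not MHQP's estimates:

```python
# Site-level reliability when the site score averages its own doctors.
# var_site, var_doc, and var_patient are illustrative values tuned to
# roughly reproduce the A column above, not the MHQP estimates.
def site_reliability(d: int, n: int, var_site: float = 0.05,
                     var_doc: float = 0.90, var_patient: float = 5.6) -> float:
    signal = var_site + var_doc / d  # variance of true site means
    noise = var_patient / n          # patient-sampling error in the site mean
    return signal / (signal + noise)

def surveys_needed(d: int, target: float = 0.70) -> int:
    """Smallest per-site sample reaching the target site reliability."""
    n = 1
    while site_reliability(d, n) < target:
        n += 1
    return n

for d in range(3, 9):
    print(f"{d} doctors: {surveys_needed(d)} surveys per site")
```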
Setting the Stage for Public Reporting:
Key Issues for Physicians
• What measures get reported
• How measures get reported
Percent of Sites with A-Level Reliability by
Measure and Survey-Type
Measure                                      Adult PCP %   Pediatric %
MD-Patient Interactions
  Communication                              98            97
  Knowledge of patient                       91            86
  Health Promotion                           46            97
Organizational/Structural Features of Care
  Integration of care                        79            61
  Access                                     99            100
  Visit-based continuity                     100           100
  Office Staff                               95            99
  Clinical Team                              37            86
Willingness To Recommend                     62            59
Framework for Public Reporting
[Chart: distribution of Integration of Care scores with reporting cutpoints at the 15th (79.6), 50th (85.2), and 85th (88.8) percentile scores; each cutpoint is surrounded by a buffer zone of uncertainty (78.7-80.5, 84.3-86.1, and 87.9-89.7, respectively).]
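One plausible implementation of this framework: bin each score against the percentile cutpoints, but require a score to clear a cutpoint by the buffer half-width before it earns the higher category. The cutpoints and buffer are read off the Integration of Care chart; the fall-to-lower-category rule and the labels are illustrative assumptions, not a confirmed MHQP rule:

```python
# Cutpoint-plus-buffer category assignment. Cutpoints and buffer are
# read off the Integration of Care chart; the rule that a score inside
# a "zone of uncertainty" stays in the lower category, and the labels,
# are illustrative assumptions.
CUTPOINTS = (79.6, 85.2, 88.8)  # 15th, 50th, 85th percentile scores
BUFFER = 0.9                    # half-width of each zone of uncertainty
LABELS = ("lowest", "below median", "above median", "highest")

def category(score: float) -> str:
    """Count only the cutpoints the score clearly clears."""
    return LABELS[sum(score >= cut + BUFFER for cut in CUTPOINTS)]

for s in (78.0, 80.0, 86.5, 90.0):
    print(f"score {s}: {category(s)}")
```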
Summary and Implications
• With sufficient sample sizes, the C/G CAHPS approach yields data with MD- and site-level reliability > 0.70
• For site-level reliability, the number of MDs per site influences required sample sizes
• Risk of misclassification can be held to < 5% by
– Limiting the number of performance categories
– Creating buffers ("zones of uncertainty") around performance cutpoints
• Trade-offs are likely around data quality standards (e.g., acceptable "risk") vs. data completeness
The Continuing Evolution…
2006-2007
• Engagement around QI activities
– Participation in a Commonwealth Fund grant to study the highest-performing practices
– Grant proposal to the Physician Foundation to develop and pilot an integrated clinical-patient experience QI curriculum
• Determining achievable benchmarks
• Fielding of the Specialist Care Survey in 2006/2007
• Repeat of the Primary Care Survey in 2007
For more information …
Melinda Karp, Director of Programs
mkarp@mhqp.org
617-972-9056
www.mhqp.org