Rewarding Provider Performance: Key Concepts, Available Evidence, Special Situations, and Future Directions

R. Adams Dudley, MD, MBA
Institute for Health Policy Studies
University of California, San Francisco
Support: Agency for Healthcare Research and Quality, California HealthCare Foundation, Robert Wood Johnson Foundation
Outline of Talk
• Review of obstacles to using incentives
(using the example of public reporting)
• Summary of available data
• Addressing the tough decisions
• If we have time, consider the value of
outcomes reports
Project Overview
• Goals:
  – Describe employer hospital report cards
  – Explore what issues determine success
• Qualitative study:
  – 11 communities
  – 37 semi-structured interviews with hospital and employer coalition representatives
• Coding and analysis using NVivo software
• See Mehrotra A, et al. Health Affairs 2003;22(2):60.
11 Communities
Seattle, Maine, S Central Wisconsin, Buffalo, Detroit, Cleveland, Indianapolis, Memphis, E Tennessee, N Alabama, Orlando
Summary of Report Cards
• Only 3 report cards begun before 1998
• Majority use mortality and LOS outcomes; patient surveys also common
• Majority use billing data
• 4 of 11 communities: no public release
4 Issues Determining Success
1. Ambiguity of goals
2. Uncertainty on how to measure quality
3. Lack of consensus on how to use data
4. Relationships between local stakeholders
Ambiguity of Goals
Hospitals Skeptical of
Employer Goals
Hospitals don’t trust employers; they suspect their primary interest is still cost
“An organization that has been a negotiator of cost, first and
foremost, that then declares it’s now focused on quality, is
a hard sell.”
“Ultimately, you’re going to contract with me or not contract
with me on the basis of cost. Wholly. End of story.”
Uncertainty on How to Measure Quality
Process vs. Outcome Debate
• Clinicians: Process measures more useful
  – “We should have moved from outcomes to process measures. Process measures are much more useful to hospitals who want to improve.”
• Employers: Outcomes better, process measures unnecessary
  – “People want longer-lasting batteries. Duracell doesn’t stand there with their hands on their hips and say, ‘Tell us how to make longer-lasting batteries.’ That’s the job of Duracell.”
Uncertainty on How to Measure Quality
The Case-Mix Adjustment Controversy
• Clinicians: Forever skeptical that case-mix adjustment is good enough:
  “[The case-mix adjustment] still explained less than 30 percent of the differences that we saw…”
• Employers: We cannot wait for perfect case-mix adjustment:
  “My usual answer to that is ‘OK, let’s make you guys in charge of perfect, I’m in charge of progress. We have to move on with what we have today. When you find perfect, come back, and we’ll change immediately.’ ”
Lack of Consensus on How to Use Quality Data
Low Level of Public Interest: A Positive Trend?
• Low levels of consumer interest, at least initially
• One interviewee felt slow growth is better:
  “Food labeling is the right metaphor. You want some model which gets to one and a half to three percent of the people to pay attention. This gives hospitals time to fix their problems without horrible penalties.… But if they ignore it for five years all of a sudden you’re looking at a three or four percent market share shift.”
Relationships Between Local Stakeholders
Market Factors
“Market power does not remain constant. Sometimes
purchasers are in the ascendancy and at other times,
providers are in the ascendancy, like when hospitals
consolidate. And that can vary from community to community
at a point in time, too.”
Key Elements of
an Incentive Program
• Measures acceptable to both clinicians and the
stakeholders creating the incentives
• Data available in a timely manner at reasonable
cost
• Reliable methods to collect and analyze the
data
• Incentives that matter to providers
CHART:
California Hospital Assessment
and Reporting Task Force
A collaboration between California
hospitals, clinicians, patients, health
plans, and purchasers
Supported by the
California HealthCare Foundation
Participants in CHART
• All the stakeholders:
– Hospitals: e.g., HASC, hospital systems, individual
hospitals
– Physicians: e.g., California Medical Association
– Consumers/Labor: e.g., Consumers Union/California
Labor Federation
– Employers: e.g., PBGH, CalPERS
– Health Plans: e.g., Blue Shield, WellPoint, Kaiser
– Regulators: e.g., JCAHO, OSHPD, NQF
– Government Programs: CMS, Medi-Cal
How CHART Might Play Out
[Flow diagram: Clinical measures come from admin data OR specialized clinical data collection; patient experience and satisfaction measures come from HCAHPS “Plus” scores; IT or other structural measures come from surveys with audits. All streams feed a data aggregator, which produces one set of scores per hospital. Reports go to hospitals, to health plans and purchasers, and to the public.]
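To make the aggregation step concrete, here is a minimal sketch, assuming (hypothetically) that each measure domain is normalized to a 0-100 score and combined with fixed weights; the domain names and weights are illustrative, not CHART's actual scoring method.

```python
from typing import Dict

# Hypothetical weights for the three measure streams in the diagram;
# these are assumptions for illustration, not CHART's method.
WEIGHTS = {"clinical": 0.5, "patient_experience": 0.3, "structural": 0.2}

def composite_score(domain_scores: Dict[str, float]) -> float:
    """Produce one set of scores per hospital: a weighted average
    of normalized (0-100) domain scores."""
    return sum(WEIGHTS[domain] * score for domain, score in domain_scores.items())

# Example hospital with scores from the three measure streams
hospital = {"clinical": 82.0, "patient_experience": 74.0, "structural": 60.0}
print(f"Composite score: {composite_score(hospital):.1f}")  # -> 75.2
```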
CHART Measures
• For public reporting in 2005-6:
– JCAHO core measures for MI, CHF, pneumonia,
surgical infection from chart abstraction
– Maternity measures from administrative data
– Leapfrog data
– Mortality rates for MI, pneumonia, and CABG
CHART Measures
• For piloting in 2005-6:
– ICU processes (e.g., stress ulcer prophylaxis),
mortality, and LOS by chart abstraction
– ICU nosocomial infection rates by infection
control personnel
– Decubitus ulcer rates and falls by annual survey
Tough Decisions: General Ideas and
Our Experience in CHART
• Offered not because we’ve done it correctly in CHART, but as a basis for discussion
Tough Decision #1:
Collaboration vs. Competition?
• Among health plans
• Among providers
• With legislators and regulators
Tough Decision #2:
Same Incentives for Everyone?
• Does it make sense to set up incentive
programs that are the same for every
provider?
– This would be the norm in other industries if providers were your employees, but unusual if you were contracting with suppliers.
Tough Decision #2:
Same Incentives for Everyone?
• But providers differ in important ways
– Baseline performance/potential to become top
provider
– Preferred rewards (more patients vs. more $)
– Monopolies and safety net providers
• But do you want the complexity?
Tough Decision #3:
Encourage Investment?
• Much of the difficulty we face in starting public reporting or
P4P comes from the lack of flexible IT that can cheaply
generate performance data.
• Similarly, much QI is best achieved by creating new team
approaches to care.
• Should we explicitly pay for these changes, or make the value
of these investments an implicit factor in our incentive
programs?
• Can be achieved by pay-for-participation, for instance.
Tough Decision #4:
Moving Beyond HEDIS/JCAHO
• No other measure sets routinely collected and
audited as current cost of doing business
• If you want public reporting or P4P of new
measures, must balance data collection and
auditing costs vs. information gained
– Admin data involves less data collection cost, equal or more auditing
costs
– Chart abstraction much more expensive data collection, equal or less
auditing
Tough Decision #4:
Moving Beyond HEDIS/JCAHO
• If purchasers/policymakers drive the
introduction of new quality measurement
costs, who pays and how?
• So, who picks the measures?
Tough Decision #5: Use Only
National Measures or Local?
• Well, this is easy: national, right?
• Hmmm. Have you ever tried this? Is there any “there” there? Are there agreed-upon, non-proprietary data definitions and benchmarks? Even with the National Quality Forum?
• Maybe local initiatives should be leading national??
An Example of Collaboration:
C-Section Rates in CHART
• Initial measure: total C-section rate (NQF)
• Collaborate/advocate within CHART:
– Some OB-GYNs convinced the group to develop
an alternative: the C-section rate among
nulliparous women with singleton, vertex, term
(NSVT) presentations
• Collaborate with hospital:
– NSVT not traditionally coded: need to train
Medical Records personnel
Tough Decision #6:
Use Outcomes Data?
• Especially important issue as sample sizes get
small…that is, when you try to move from groups to
individual providers in “second generation”
incentive programs.
• If we can’t fix the sample size issue, we’ll be forced
to use general measures only (e.g., patient
experience measures).
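To see why small samples are a problem, here is a minimal sketch (the 12% mortality rate and the case counts are assumed for illustration, not drawn from the talk) of how the 95% confidence interval around an observed mortality rate widens as the number of cases per provider shrinks:

```python
import math

TRUE_RATE = 0.12   # assumed underlying mortality rate (illustrative)
Z = 1.96           # two-sided 95% critical value

# Normal-approximation confidence interval for a proportion:
# p +/- z * sqrt(p * (1 - p) / n)
for n in (800, 400, 200, 100, 50, 25):
    se = math.sqrt(TRUE_RATE * (1 - TRUE_RATE) / n)
    print(f"n={n:4d}: 95% CI roughly {TRUE_RATE - Z*se:6.1%} to {TRUE_RATE + Z*se:6.1%}")
```

At n=800 the interval spans roughly 9.7% to 14.3%; at n=25 it spans roughly -0.7% to 24.7%, too wide to distinguish providers.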
Outcome Reports
Some providers are concerned about
random events causing variation in
reported outcomes that could:
• Ruin reputations (if there is public
reporting)
• Cause financial harm (if direct financial
incentives are based on outcomes)
An Analysis of MI Outcomes
and Hospital “Grades”
• From California hospital-level risk-adjusted MI
mortality data:
 Fairly consistent pattern over 8 years: 10% of hospitals
labeled “worse than expected”, 10% “better”, 80% “as
expected”
 Processes of care for MI worse among those with
higher mortality, better among those with lower
mortality
• From these data, calculate mortality rates for
“worse”, “better”, and “as expected” groups
[Figure: Probability distribution of risk-adjusted mortality rate for the mean hospital in each sub-group. Scenario #3: 200 patients per hospital; trim points calculated using a normal distribution around the population mean, 2 tails, each with 2.5% of the distribution beyond the trim points. Curves shown for poor-, good-, and superior-quality hospitals and for all hospitals in the model; low and high trim points at 7.6% and 16.6%; group means 8.6% (superior), 12.2% (good), and 17.1% (poor). X-axis: risk-adjusted mortality rate, 0-30%.]
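A minimal simulation in the spirit of Scenario #3 (the group means echo the figure; the simple binomial draw stands in for the risk-adjustment model): draw 200 patients per hospital, set trim points from a normal approximation around the population mean, and see how often each group is labeled “better”, “as expected”, or “worse”:

```python
import math, random
from collections import Counter

random.seed(0)
N, Z = 200, 1.96                    # 200 patients; 2.5% beyond each trim point
POP_MEAN = 0.122                    # population mean mortality (from the figure)
SE = math.sqrt(POP_MEAN * (1 - POP_MEAN) / N)
LOW, HIGH = POP_MEAN - Z * SE, POP_MEAN + Z * SE   # ~7.7% and ~16.7%, matching the figure

def observed_rate(true_rate: float) -> float:
    """One hospital-year: simulated deaths / patients."""
    return sum(random.random() < true_rate for _ in range(N)) / N

for name, true_rate in [("superior", 0.086), ("good", 0.122), ("poor", 0.171)]:
    labels = Counter()
    for _ in range(10_000):
        r = observed_rate(true_rate)
        labels["better" if r < LOW else "worse" if r > HIGH else "as expected"] += 1
    print(name, {k: v / 10_000 for k, v in labels.items()})
```

Even superior hospitals are usually labeled “as expected” in a single year, which is the random-variation problem the next figure addresses with repeated measurement.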
3 Groups of Hospitals with Repeated Measurements (3 Years)
[Figure: Predictive values of 3-year star scores, Scenario #3. Distribution of hospital star scores (3 to 9) for superior-, expected-, and poor-quality hospitals; y-axis: proportion of total hospitals, 0-80%.]
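To illustrate how repeating measures reduces the impact of chance, here is a sketch extending the same simulation to 3-year star scores; the scoring rule (3 stars per year for “better than expected”, 2 for “as expected”, 1 for “worse”, summed over three years to give the 3-9 range on the figure’s x-axis) is an assumption inferred from the figure:

```python
import math, random
from collections import Counter

random.seed(1)
N, Z = 200, 1.96
POP_MEAN = 0.122
SE = math.sqrt(POP_MEAN * (1 - POP_MEAN) / N)
LOW, HIGH = POP_MEAN - Z * SE, POP_MEAN + Z * SE

def year_stars(true_rate: float) -> int:
    """One year's grade: 3 = 'better than expected', 1 = 'worse', 2 = 'as expected'."""
    rate = sum(random.random() < true_rate for _ in range(N)) / N
    return 3 if rate < LOW else 1 if rate > HIGH else 2

# Sum three annual grades to get a 3-year star score in the 3..9 range
for name, true_rate in [("superior", 0.086), ("expected", 0.122), ("poor", 0.171)]:
    scores = Counter(sum(year_stars(true_rate) for _ in range(3)) for _ in range(10_000))
    print(name, {s: scores[s] / 10_000 for s in range(3, 10)})
```

The three groups’ score distributions separate much more cleanly than any single year’s label, which is the point of the conclusions that follow.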
Outcomes Reports and Random
Variation: Conclusions
• Random variation can have an important impact on
any single measurement
• Repeating measures reduces the impact of chance
• Provider performance is more likely to align along a spectrum than to be lumped into two groups whose outcomes are quite similar
• Providers on the superior end of the performance
spectrum will almost never be labeled poor
Conclusions
• Many tough decisions ahead
• Nonetheless, paralysis is undesirable
• Collaborate on the choice of measures
• Everyone frustrated with limited (JCAHO and HEDIS) measures…need to figure out how to fund collecting and auditing new measures
• Consider varying incentives across providers