A049

Service evaluation and optimization: worthwhile endeavor or zombie-like?
Jonathan Karnon, Glenis Crane, Andrew Partington, Clarabelle Pham
School of Population Health, University of Adelaide
Introduction
There is growing awareness of unwarranted variation in the provision of
healthcare, based on observed differences in the process of care (e.g. adherence
to clinical guidelines), costs of care, and patient outcomes.[1,2] Despite
increasing empirical evidence on the dollar values associated with unwarranted
variation,[3] the amount spent on quality improvement remains very small in
relation to funds allocated to new technologies. In Australia, spending by the
Pharmaceutical Benefits Scheme rose by $500 million in 2010/11. The annual
budget for the Australian Commission on Safety and Quality in Health Care is less
than $15 million. In the UK, NHS Improving Quality (NHS IQ) commenced in April
2013, with the aim of implementing effective improvement programmes and
building improvement capacity. The NHS IQ budget appears to be around £66
million[4], some way short of the £400 million that is spent on new
pharmaceuticals each year.
The title reference to zombies concerns a statement made by Jonathan
Oberlander and colleagues that “the appeal for transparency in medical decision-making is like a zombie, an idea that refuses to die despite its limited utility”.
Drawing on the experience of the Oregon Experiment, the authors propose a
“political paradox of rationing”, which implies that public discourse regarding
the rationing of healthcare is unlikely to produce actionable responses of the
magnitude required to control budgetary pressures.
Analogous to the desire to create a sustainable healthcare system via consensus-driven resource allocation, quality improvement is intuitively a good thing – who
does not want to improve quality? However, quality improvement involves the
detailed analysis of underperforming areas of clinical practice, and the
implementation of changes to existing clinical pathways, which may not
necessarily be embraced by all stakeholders. Compared to the complex process
of change management, it is a lot easier to approve the funding of a new
technology.
However, there has been an apparent increase in the application of quality improvement in healthcare: a PubMed search for papers with “quality improvement” or “service improvement” in the title shows an increase from 155 papers in 2003 to 542 papers in 2013 (one way to reproduce this search is sketched after the list below). Given increased interest in quality improvement, a low funding base, and the potential
increased interest in quality improvement, a low funding base, and the potential
complexity of the change management process, it seems reasonable to propose
that appropriate methods be used to:
• identify priority areas for the application of quality improvement, and
• assess the value of alternative forms of quality improvement.
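For reference, the sketch below shows one way the title-word search described above might be reproduced against PubMed’s E-utilities esearch endpoint. The exact query syntax behind the cited counts is an assumption, and the requests library is assumed to be available.

```python
# Sketch: reproducing the title-word PubMed counts cited above via the
# NCBI E-utilities esearch endpoint. The authors' exact query is not
# reported; this is one plausible formulation.
import requests

EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def title_count(year: int) -> int:
    term = '("quality improvement"[Title] OR "service improvement"[Title])'
    params = {
        "db": "pubmed",
        "term": term,
        "datetype": "pdat",   # filter on publication date
        "mindate": str(year),
        "maxdate": str(year),
        "retmode": "json",
    }
    resp = requests.get(EUTILS, params=params, timeout=30)
    resp.raise_for_status()
    return int(resp.json()["esearchresult"]["count"])

for year in (2003, 2013):
    print(year, title_count(year))
```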
This paper reports on three applied quality improvement initiatives, which used
different methods to address different quality improvement issues. The objective
is to draw lessons with respect to the application and evaluation of quality improvement in existing healthcare services.
Case study 1: A cross-hospital analysis of the cost-effectiveness of services
for Emergency Department presentations with chest pain
The aim of this case study was to identify benchmark performers with respect to
costs and outcomes, supplemented with comparative analyses of processes at
the comparator hospitals to aid healthcare providers in prioritising and targeting
areas for health services improvement.
Detailed clinical and administrative data (including pathology test results) were
linked for 7,950 patients presenting with chest pain at one of four emergency
departments (EDs) over a one-year period. The data were extracted from a
statewide administrative dataset and a multi-hospital clinical data system.
Multiple regression, adjusting for a wide range of clinical and sociodemographic
covariates, was applied to identify variation in costs, outcomes, and processes
across hospitals. Hospital interaction terms were tested to identify patient subgroups that might be driving the variation observed at an aggregate hospital
level.
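As a concrete illustration, a casemix-adjusted comparison with hospital interaction terms might be set up as in the sketch below. The dataset path and covariate names (cost, hospital, age, sex, charlson, out_of_hours) are hypothetical placeholders; the study’s full covariate set and estimation choices are not reproduced here.

```python
# Sketch of a cross-hospital variation analysis using statsmodels.
# Column names are hypothetical placeholders for the linked dataset.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("chest_pain_linked.csv")  # hypothetical linked dataset

# Casemix-adjusted cost model with hospital fixed effects: the hospital
# coefficients estimate adjusted mean cost differences vs the reference site.
base = smf.ols("cost ~ C(hospital) + age + C(sex) + charlson + C(out_of_hours)",
               data=df).fit(cov_type="HC1")
print(base.summary())

# Add a hospital x out-of-hours interaction to test whether aggregate
# variation is driven by patients presenting out of hours.
inter = smf.ols("cost ~ C(hospital) * C(out_of_hours) + age + C(sex) + charlson",
                data=df).fit(cov_type="HC1")
print(inter.compare_f_test(base))  # joint F-test of the interaction terms
```

An analogous logistic model (with exponentiated hospital coefficients) would yield the casemix-adjusted odds ratios for the 30-day and 12-month event outcomes.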
Statistically significant, casemix-adjusted differences were observed in mean inpatient costs (up to $672 per admitted patient), and in 30-day and 12-month cardiovascular or mortality event rates (odds ratios up to 2.42 and 1.64, respectively), across providers. Larger mean differences between providers were observed for patients presenting to the ED out-of-hours and with existing circulatory conditions with respect to inpatient costs (up to $1,513 per patient), and in younger patients with respect to outcomes (odds ratios up to 3.28).
Casemix-adjusted analyses of process indicators identified significant differences in admission rates, particularly in older female patients, and patients with existing circulatory conditions or a positive pathology (troponin) test. Variation in the selection of patients for invasive diagnosis (angiography) was significantly larger at weekends. Differences across providers for time to admission and for inpatient length of stay were greatest for patients with a positive troponin test.
The presented analyses identify clinically and statistically significant differences in casemix-adjusted costs and outcomes across alternative providers, for patients
presenting at an ED with chest pain. These results indicate the potential value of
engaging stakeholders to identify and implement service improvements at one
or more of the non-benchmark hospitals. The analyses of variation in processes
provide complementary evidence to support the validity of the reported
differences in costs and outcomes. The process data may also usefully inform
subsequent stakeholder engagement through the identification of patient groups
and process components for which variation across providers is greatest, and at
which improvement efforts might be focused.
Such comparative analyses of costs, outcomes, and processes across healthcare providers can inform the potential value of improving existing services (e.g. relative to allocating scarce resources to new technologies), and can prioritise and guide service improvement efforts. The initial research plan described a
process of data collection, analysis, and feedback to clinical and management
stakeholders with a view to acting on the analytic outputs. The research had buy-in from the Statewide Cardiology Network.
However, the central limitation of this applied study concerned the time
required to access and assemble the final dataset. The project commenced in
January 2012, with the aim of finalizing the linked dataset for eligible patients
presenting up to July 2011 by mid-2012. These timelines were clearly not ideal
(i.e. a gap of one year to extract and link the data), but they were realistic in
relation to the processing of the administrative data. In the event, delays at
various stages of the extraction and linkage process meant the final dataset was
not available until late 2013. Hence, the research became an academic exercise
with little scope for translation to policy and practice (even if the predicted
timelines had been met, the relevance of the research to policy and practice may
still have been constrained by the fixed timelines for the processing of the
administrative data). However, analysis of the currently available data may
inform two intermediate objectives:
• the trade-off between contemporary and comprehensive data with respect to the relevance of the data, and
• interaction with stakeholders to demonstrate the potential value of comparative performance data as a policy lever.
The latter will inform broader and deeper stakeholder engagement, to provide a
common understanding of the purpose and potential role of the research
methods to inform improved health service performance. Engagement with
managers and policymakers is important to promote research translation, but
may also facilitate improved data access via direct influence on State health
department priorities.
Case study 2: Evaluating pre-operative assessment clinics for high-risk
surgical patients
In response to rising rates of post-operative complications, a new physician-led
clinic for the pre-operative assessment of high-risk elective surgery patients was
set up at the Queen Elizabeth Hospital and the Royal Adelaide Hospital, in South Australia, in 2008. Unlike traditional pre-operative assessment, which is undertaken by anesthetists in the period close to the intended date of surgery, the new clinics aimed to see patients with significant medical co-morbidities around the time at which the decision to operate is made. Thus, the physician
could advise on suitability for surgery, and manage patients prior to surgery to
optimize any existing conditions, and so reduce the likelihood of cancellation and
post-operative complications.
The objective of the quality improvement project was to evaluate the service to
identify sub-groups of the surgical population for whom the clinic was cost-effective, to inform a formal referral process to the clinic (referral is currently on
an ad hoc basis), as well as to define the optimal capacity of the clinic.
The study design was a retrospective cohort study comparing costs and a range
of outcomes (e.g. cancellations, length of stay, post-operative complications, and
readmissions and mortality at 12 months) using a comprehensive linked dataset
on 943 elective surgery patients. Propensity score-weighted regression analyses
were undertaken to estimate the clinic effect for each outcome. The propensity
score reflected the conditional probability of receiving a referral to the clinic,
given adjustment for a wide range of clinical and administrative data, including
pathology data and patient responses to a detailed pre-operative patient
questionnaire.
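A minimal sketch of such a propensity score-weighted analysis, assuming a patient-level dataset with hypothetical column names, might look as follows; inverse-probability-of-treatment weighting is one common implementation, and the study’s exact weighting scheme is not detailed here.

```python
# Sketch: propensity score-weighted estimation of the clinic effect.
# Column names are hypothetical placeholders for the linked data above.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("elective_surgery_linked.csv")  # hypothetical dataset

# 1. Propensity model: P(referred to clinic | covariates)
ps_model = smf.logit("clinic ~ age + C(sex) + charlson + C(procedure)",
                     data=df).fit()
df["ps"] = ps_model.predict(df)

# 2. Stabilized inverse-probability-of-treatment weights
p_treat = df["clinic"].mean()
df["w"] = df["clinic"] * p_treat / df["ps"] + \
          (1 - df["clinic"]) * (1 - p_treat) / (1 - df["ps"])

# 3. Weighted outcome model, e.g. length of stay; the 'clinic' coefficient
# estimates the average clinic effect, assuming no unmeasured confounding.
outcome = smf.wls("length_of_stay ~ clinic", data=df,
                  weights=df["w"]).fit(cov_type="HC1")
print(outcome.params["clinic"], outcome.conf_int().loc["clinic"])
```

Note that the estimated clinic effect is unbiased only under the assumption that all confounders are observed; as described below, that assumption proved doubtful in this evaluation.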
As presented in Table 2, overall, the propensity score-weighted results suggested
that patients referred to the clinic had a longer length of stay, more
complications, and higher 12-month mortality. Looking at the results by
intended surgical procedure, clinic effects were particularly poor for patients
receiving surgery for cancer, for whom waiting times before surgery were
shortest.
Clearly, these were unexpected results that required further investigation.
Following discussion with the clinical steering group, the likelihood of
unobserved confounding remained high despite the range of objective and patient-reported data analysed. In particular, differences between clinic and control patients were suspected with respect to frailty, a condition that is hard to define but is marked by lessened resilience and negative changes in physiology, making the frail individual weak, slow, and vulnerable to stressor
events, such as surgery. These suspicions were supported by prospectively
collected survey data on a more recent cohort of patients listed for surgery,
which identified significant differences between clinic and control patients in the
attribute levels of the health domains of the EQ-5D.
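For illustration, such a comparison of EQ-5D domain levels between clinic and control patients might be run as below; the survey file and domain column names are hypothetical placeholders.

```python
# Sketch: testing whether clinic and control patients differ in the
# distribution of EQ-5D domain levels (e.g. levels 1-3 for EQ-5D-3L).
import pandas as pd
from scipy.stats import chi2_contingency

df = pd.read_csv("preop_survey.csv")  # hypothetical prospective survey data
domains = ["mobility", "self_care", "usual_activities", "pain", "anxiety"]

for d in domains:
    table = pd.crosstab(df["clinic"], df[d])  # group x level counts
    chi2, p, dof, _ = chi2_contingency(table)
    print(f"{d}: chi2={chi2:.2f}, dof={dof}, p={p:.3f}")
```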
In addition to prospective data collection, qualitative interviews with surgeons
are currently being undertaken to understand their perceptions of high-risk
surgical patients, the value of the clinic, and alternative approaches to handling
high-risk surgical patients. Preliminary results indicate that surgeons value the
clinic as a complementary and accessible service that is clinically intuitive, i.e.
expected to improve the management and outcomes of their patients. Referral
decisions appear to be based on a combination of the objective presence of
comorbidities and clinical instinct (referred to in one instance as a subjective
measure of ‘crookness’).
In setting up the new model of care, surgeons were informed of the availability of
the clinic, and provided with limited guidance regarding referrals to the clinic,
such that the referral criteria for pre-operative assessment were left to the clinical
judgment of individual surgeons. Could clinic performance be improved by the
implementation of a more formal set of referral criteria?
Here, we enter the territory of the zombies. Referrals from one doctor to another are most often based on the clinical judgment of the referring doctor; for example, most GP referrals to specialists are based not on a protocol but on clinical judgment. How good are doctors at referring? What are their sensitivity and specificity? What are the costs of inappropriate referrals and non-referrals?
In addition to wasted healthcare resources and increased waiting times for clinic
appointments, might inappropriately referred patients be more likely to miss
appropriate referrals in the future, with attendant consequences for downstream
costs and outcomes (as for inappropriate non-referrals)?
On the flip side, what are the effects of implementing formal referral criteria?
Resource costs of dissemination, monitoring, and feedback? Adverse effects on
doctor morale? Reduced costs of inappropriate referrals and non-referrals?
Opportunities for substitution within the health workforce?
To mix metaphors, should we open the Pandora’s box of clinical autonomy with
respect to referrals, or let sleeping referrers lie?
The impetus for the evaluation of the high-risk clinic was the belief that surgeons
were not referring all patients who could benefit from pre-operative assessment.
In the event, further research is required to demonstrate the value of the clinic,
and there is a reasonable likelihood that the clinic has minimal effect on patients
undergoing elective surgery with short waiting times. At this stage, it would
appear to be negligent to keep calm and carry on.
With hindsight, a prospective evaluation alongside the introduction of the high-risk clinics, with adequate piloting to minimize the possibility of confounding
(resorting to randomization if necessary), may have informed a more efficient
clinic. Alternatively, the surgeons may have reacted poorly to the imposition of a
formal referral pathway to the clinic, which may have limited the uptake of the
facility. Perhaps systems change in the surgical outpatients department could
reduce the burden on the surgeons. An involved process might be required to
develop an evidence-based risk tool to identify patients for whom referral to
clinic is likely to provide cost-effective benefits. As an intervention to be
evaluated, the tool might be completed by patients as they wait for their
appointment, with the surgeon signing off referral to the clinic on the basis of
patients’ risk scores.
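To make the proposal concrete, such a risk tool might reduce to a short logistic score over patient-completed items, as in the sketch below; the items, weights, and threshold are entirely hypothetical and would need to be derived from data and validated before any clinical use.

```python
# Illustrative sketch of a referral risk tool: a logistic score over
# patient-completed yes/no items, with referral recommended above a
# threshold. All items, weights, and the cut-off are hypothetical.
import math

WEIGHTS = {"intercept": -4.0, "age_over_75": 1.2, "diabetes": 0.8,
           "heart_failure": 1.5, "poor_mobility": 1.0}
THRESHOLD = 0.15  # hypothetical cut-off on predicted post-operative risk

def referral_recommended(answers: dict) -> bool:
    z = WEIGHTS["intercept"] + sum(
        w for item, w in WEIGHTS.items()
        if item != "intercept" and answers.get(item, False))
    risk = 1 / (1 + math.exp(-z))  # logistic transform to predicted risk
    return risk >= THRESHOLD

# Example: an older patient with heart failure crosses the threshold.
print(referral_recommended({"age_over_75": True, "heart_failure": True}))
```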
Such a development and evaluation process requires time and resources.
However, the cost implications of implementing and maintaining a new hospital
clinic are not trivial. The inconclusive evaluation of the high-risk clinic provides a
rationale for the prospective evaluation of ‘service improvements’ that have non-trivial costs. In addition, if a formal evaluation concludes that a new service is
cost-effective, the evidence can support the dissemination of that service to other
health service providers.
If a formal evaluation concludes that a service improvement is not cost-effective,
there may be some difficulties in ‘disinvesting’ from the service at the hospital(s)
in which the ‘improvement’ was evaluated, though the evidence may inform
iterative improvements to the service. In addition, the assembled evidence can
prevent the inappropriate dissemination of the service to other providers.
Case study 3: Improving hospital-based glaucoma services at the Royal
Adelaide Hospital, South Australia
Glaucoma is an eye disease in which the optic nerve at the back of the eye is
slowly eroded, usually due to increased intraocular pressure. Patients with glaucoma, or at high risk of glaucoma, are often referred to an
ophthalmologist as the management pathway is complex with many alternative
therapeutic approaches, and accompanying side effects.
As a chronic condition, referred patients require monitoring until death or
blindness (at which point patients are referred to an alternative ophthalmic
service). In the period prior to study commencement, demand for glaucoma
services at the study hospital had increased such that patients were not able
to book repeat appointments within the timeframe specified by the treating
ophthalmologist. The objective of the quality improvement project was to
develop a simulation model to test the cost-effectiveness of alternative
approaches to the delivery and organization of glaucoma services.
A complex simulation model represented the parallel processes of disease
progression and pathways of care for patients treated at the study hospital over
a five-year time horizon. For each patient, vision was modeled as deteriorating
over time, as a function of the extent to which intraocular pressure is controlled
by treatment. At each clinic appointment, patients’ glaucoma status is reviewed
and clinical decisions are made regarding treatment (i.e. to change treatment if
pressure is uncontrolled, or treatment side effects are too severe) and the
intended time to patients’ next clinic visit.
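The following is a highly simplified sketch of these parallel disease and service processes; all rates, intervals, and decision rules are illustrative placeholders rather than the study model’s calibrated inputs.

```python
# Sketch: parallel disease progression and care pathway for one patient.
# All parameters below are illustrative placeholders.
import random

def simulate_patient(horizon_days=5 * 365):
    t, vision, controlled, treatment = 0, 1.0, True, "drops"
    while t < horizon_days and vision > 0:
        # Disease process: vision deteriorates faster between visits when
        # intraocular pressure is uncontrolled (placeholder daily rates).
        days_to_visit = 180 if controlled else 90
        rate = 0.0001 if controlled else 0.0005
        vision -= rate * days_to_visit
        t += days_to_visit
        # Clinic process: review status, possibly change treatment, and set
        # the intended time to the next appointment.
        controlled = random.random() < (0.8 if treatment == "laser" else 0.6)
        if not controlled and treatment == "drops":
            treatment = "laser"  # escalate when pressure is uncontrolled
    return max(vision, 0.0)

cohort = [simulate_patient() for _ in range(10_000)]
print("mean final vision:", sum(cohort) / len(cohort))
```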
The model of current service delivery was subject to a detailed model calibration
process to ensure that the model predicted the observed progression of the
patient cohort over the five-year time horizon. The calibrated model was then
used to assess the costs and benefits of a range of alternative service scenarios,
including changes to the frequency of follow-up at alternative disease stages (e.g.
subsequent to disease progression, and for patients with stable glaucoma), the
length of the booking cycle, and the earlier use of more invasive
interventions that may require less frequent surveillance (e.g. laser therapy
instead of medical therapy).
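A calibration step of this kind can be framed as an optimization problem: search for parameter values that minimize the distance between simulated and observed cohort outcomes. The sketch below illustrates the idea with an entirely hypothetical calibration target and a toy progression model.

```python
# Sketch: calibrating a progression parameter so simulated five-year
# progression matches an observed target. Target and model are toys.
import numpy as np
from scipy.optimize import minimize_scalar

OBSERVED_PROGRESSION = 0.25  # hypothetical: share progressing over 5 years

def simulated_progression(rate: float, n: int = 5_000) -> float:
    rng = np.random.default_rng(0)  # fixed seed keeps the objective smooth
    # Toy model: a patient progresses if cumulative exposure over the
    # five-year horizon exceeds a fixed threshold.
    exposure = rate * 5 * rng.uniform(0.5, 1.5, size=n)
    return float((exposure > 0.5).mean())

def loss(rate: float) -> float:
    return (simulated_progression(rate) - OBSERVED_PROGRESSION) ** 2

fit = minimize_scalar(loss, bounds=(0.01, 1.0), method="bounded")
print("calibrated rate:", fit.x)
```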
The results showed that extending the review time for stable patients resulted in
a small loss of QALYs due to increased disease progression, whilst varying
follow-up times following a change in treatment had little impact on either costs
or QALY gains. However, by increasing the likelihood that more urgent patients
will return, and at a more appropriate time, the extension of the booking cycle to
6 months improved outcomes at virtually no additional cost. An alternative
management algorithm, involving laser therapy prior to pharmaceutical
intervention, resulted in QALY gains at lower costs (i.e. it was a dominant
strategy). Sensitivity analyses indicated a very high level of certainty that the 6-month booking cycle, with first-line use of laser therapy, would be the most cost-effective scenario.
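For illustration, comparing service scenarios on costs and QALYs and flagging dominance reduces to a simple computation of incremental costs and effects; the figures below are placeholders, not the study’s results.

```python
# Sketch: flagging dominance and computing ICERs across scenarios.
# Costs and QALYs per patient are illustrative placeholders.
scenarios = {
    "current service":       {"cost": 1_000.0, "qalys": 3.50},
    "6-month booking cycle": {"cost": 1_002.0, "qalys": 3.58},
    "first-line laser":      {"cost":   950.0, "qalys": 3.60},
}

base = scenarios["current service"]
for name, s in scenarios.items():
    if name == "current service":
        continue
    d_cost = s["cost"] - base["cost"]
    d_qaly = s["qalys"] - base["qalys"]
    if d_cost <= 0 and d_qaly >= 0:
        verdict = "dominant (cheaper and more effective)"
    else:
        verdict = f"ICER = {d_cost / d_qaly:,.0f} per QALY"
    print(f"{name}: {verdict}")
```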
As presented to the lead ophthalmologist, the model demonstrated the impact of
trying to provide care that is in line with recommendations, but without having
the required resources. In the absence of additional resources it emphasized the
need to adapt the services currently provided. To that end, a decision was made
to establish an additional clinic that would be targeted at patients with
established stable disease (low progression risk patients). The clinic would be
run by ophthalmic nurses, who could refer on to an ophthalmologist in the event
of observed disease progression.
The lessons from this study included that simulation is a potentially powerful tool for demonstrating the unanticipated consequences of existing service pathways. That the implemented response was not one of the scenarios tested in the model highlights the need for closer integration of the service stakeholders in the model development process.
From a cost perspective, the development of a complex simulation model to
evaluate service provision at a single hospital is unlikely to be a cost-effective
use of quality improvement resources. To provide value for money, such
modeling-based research must be adaptable to multiple institutions to generate
value through generalizability.
Discussion
Based on growing evidence regarding the effects of unwarranted variation in
healthcare delivery on patient outcomes and costs, there are many untapped
areas with significant potential for improving healthcare services. This paper has described the conduct and impact of three case studies, using alternative
methodologies, but with a common aim of improving the provision of healthcare
in a hospital setting.
The first – chest pain – case study described a methodology for identifying areas
in which individual hospitals have significant scope to improve service delivery
with respect to costs and patient outcomes, as well as identifying elements of
clinical pathways on which improvement activities might focus.
In Australia, cross-hospital comparisons focus on variation in costs, as
incentivized by a casemix funding system. In England, there is increasing interest
in comparing outcomes (e.g. the patient reported outcome measures (PROMs) programme), though evidence to date shows little effect of the public
reporting of the PROMs data [5]. As suggested by Appleby et al [6], more formal
use of such data may be required to motivate improvement in non-benchmark
providers, e.g. mandating a response to evidence of unwarranted variation that
results in poorer performance.
More contemporary, complete, and comprehensive data, to inform robust
casemix adjustment and robust measures of costs, outcomes, and processes, will
improve the impact and acceptance of comparative performance data as a policy
lever for improved health service performance. At that point, the zombie issue
will need to be tackled – is the exertion of external pressure to improve
institutional performance likely to overcome the political economy of clinical
autonomy?
The second – high-risk clinic – case study illustrates the potential value of
robust evaluation of improvement initiatives in the hospital setting. Whilst not
proposing that every improvement activity undergo a formal development and
evaluation process – routine auditing of the effects of low-cost, low-risk initiatives should be sufficient to confirm intended outcomes [7] – for higher-cost
initiatives, one of the following decisions should be reached:
a) proceed with the initiative on the basis of existing evidence, e.g. evidence
generated from the implementation of the initiative by an alternative
health service provider,
b) proceed on the basis of a formal development and evaluation process, or
c) allocate the required resources to alternative activities.
If option b) is selected, systems should be in place to support engagement
between health service staff and health service researchers, who can advise on and support the development and evaluation process in a timely manner. As
recommended recently [8], the embedding of improvement science in
professional training is important. Collaboration between motivated clinicians
with on-the-ground experience of the barriers and facilitators to improved
service delivery, and independent external researchers with relevant experience
and expertise will further enhance the improvement process.
In most cases, the initial development and evaluation costs fall on the innovating health service provider, for whom such costs are substantial and may not be recouped in the form of improved efficiency at a single provider.
Thus, the case is made for a dedicated and substantial national fund to support
the formal development, evaluation, and dissemination of service improvement
or innovation initiatives. In England, a five-year £220 million innovation fund
was established in 2009, though the fund appears to have been discontinued. In
its place, National Health Service Improving Quality (NHS IQ) was launched in
2013, as the “driving force for improvement across the NHS”. Detail on how
improvement will be driven is currently lacking, though it does not appear to
involve the direct funding of service improvement initiatives.
In Australia, the National Health and Medical Research Council (NHMRC) has established the Partnerships for Better Health initiative to fund research partnerships between “decision makers, policy makers, managers, clinicians and researchers”. This funding stream is targeted at health service providers and
policymakers, but there has been limited uptake to date. This may be partly due
to the requirement for health service partners to contribute monetary funds and
in-kind support to the research, and the lag time between the grant application
and funding outcome. Other contributory factors may include limited
engagement between health service providers and health service researchers,
and a corresponding lack of awareness of the potential benefits of integrating
health service provision and research to inform better use of the resources
allocated to healthcare.
Historically, health services research to improve the use and application of
health technologies has received very little funding, relative to the resources
allocated to the discovery of new health technologies. Despite recent increases in
the availability of funds to support health services research, the culture of quality
improvement as an in-house reactive device may be holding us back from
realising the full potential of health services research to improve healthcare
services and patient outcomes.
The third – glaucoma services – case study was able to influence practice, perhaps
because the research was focused on a specific service (one outpatient clinic),
and the researchers had a close working relationship with the consultant leading
the clinic. We met with the consultant on a regular basis, and he was able to
make unilateral decisions regarding changes to the existing service. The
application to a single outpatient clinic at a single institution could be expanded
into a service, with researchers working with ophthalmologists at other
institutions, adapting the developed simulation model to inform service
improvements specific to each glaucoma service. The key element is the
connection with stakeholders who can make and implement decisions to change
a service.
Conclusions
In the age of electronic medical records, access to robust routine clinical data to
inform research-driven service improvement has improved, though there is still
scope for further improvement. In the meantime, there is a need to shift the
culture of service improvement to a point at which collaboration between
practicing clinicians, health service managers, and researchers is the norm for
identifying, and acting on areas of service delivery with significant potential for
improvement.
References
1. Runciman WB, Hunt TD, Hannaford NA, Hibbert PD, Westbrook JI, Coiera EW, et al. CareTrack: assessing the appropriateness of health care delivery in Australia. Med J Aust 2012;197(2):100-105.
2. Kennedy PJ, Leathley CM, Hughes CF. Clinical practice variation. Med J Aust 2010;193(8 Suppl):S97-99.
3. Karnon J, Caffrey O, Pham C, Grieve R, Ben-Tovim D, Hakendorf P, Crotty M. Applying Risk Adjusted Cost-Effectiveness (RAC-E) analysis to hospitals: estimating the costs and consequences of variation in clinical practice. Health Economics 2012;22(6):631-642.
4. http://www.england.nhs.uk/wp-content/uploads/2013/04/ppf-1314-1516.pdf
5. Varagunam M, et al. Impact on hospital performance of introducing routine patient reported outcome measures in surgery. J Health Serv Res Policy 2014;19(2):77-84.
6. Appleby J, Raleigh V, Frosini F, Bevan G, Gao H, Lyscom T. Variations in health care: the good, the bad and the inexplicable. The King’s Fund, 2011. ISBN 978 1 85717 614 8.
7. Warcel et al. Service improvement system to enhance the safety of patients admitted on long term warfarin. BMJ Quality Improvement Reports 2014.
8. National Advisory Group on the Safety of Patients in England. A promise to learn – a commitment to act: improving the safety of patients in England. August 2013. https://www.gov.uk/government/uploads/system/uploads/attachment_data/file/226703/Berwick_Report.pdf