
International Journal for Quality in Health Care 2003; Volume 15, Number 6: pp. 455–456
doi: 10.1093/intqhc/mzg092
Editorial
Evaluating accreditation

A global study for the World Health Organization in 2000 identified 36 nationwide health care accreditation programmes [1]. Starting with the Joint Commission on Accreditation of Healthcare Organizations (JCAHO) in the United States in 1951, the number of programmes around the world has doubled every five years since 1990. Development has been especially marked in Europe, which now accounts for half of all programmes.

The traditional model of voluntary, independent accreditation is being rapidly adapted towards a government-sponsored or even statutory tool for control and public accountability. Many countries, especially in Eastern Europe, are beginning to use accreditation as an extension of statutory licensing for institutions.

In this context of growth and innovation, the National Agency for Healthcare Accreditation and Evaluation (ANAES) in France, described by Daucourt and Michel in their paper ‘Results of the first 100 accreditation procedures in France’ [2] in this issue of the International Journal for Quality in Health Care, should provide useful lessons to accreditors and governments world-wide. It is expansively ambitious and innovative, but also governmentally slow and expensive; in 2001 its accreditation programme accounted for three-quarters of the €16.5 million combined expenditure of ten national programmes in Europe. Other governments, at least those which would contemplate a statutory programme, should watch Paris for signs of the value of the investment, in terms of its impact on the public/private health system.

The problem is that, in an increasingly evidence-based world, we have aggregated very little hard data about:
• The uptake or market share of individual accreditation programmes at national level, and their impact on the health system;
• The consistency, compatibility and validity of programmes as a basis for comparing health care providers, such as across Europe; and
• The costs and benefits of individual programmes to health care providers.
‘Considering the amount of time and money spent on organisational assessment, and the significance of the issue to governments, it is surprising that there is no research into the cost-effectiveness of these schemes’ [3]. Most established programmes have been subjected to internal [4,5] or external [6–8] evaluation of their impact, but few of these evaluations have used comparable methods to permit synthesis. There is ample evidence that hospitals rapidly increase compliance with the published standards in the months prior to external assessment, and that they improve organizational processes; but there is less evidence that this brings benefits in terms of clinical process and outcome (K. Timmons, 2001; S. Whittaker, 2001, unpublished data).
The ALPHA principles and standards of the International Society for Quality in Health Care have codified the experience of many established accreditation programmes into empirical guidance on how health care standards may be developed, defined and independently assessed [9]. Another ISQua working group has set out to identify, and to gather evidence on, the environmental factors (such as culture, incentives and regulation) that are associated with the success or failure of the general model of accreditation in a given country [10]. But the process–outcome link, which neatly legitimizes the measurement of clinical practice against evidence-based guidelines as a proxy for clinical outcome, has not been established between institutional accreditation and performance.
Several factors make accreditation more difficult to evaluate than a clinical technology:
• The ‘endpoints’ of accreditation are hard to define, and vary according to the expectations of users and observers; there are many potential ‘products’ (e.g. institutional control, organizational development, professional regulation, financial allocation, public accountability);
• Individual programmes vary around a common model, e.g. with respect to scope, standards, assessment, packaging and pricing; they are not a homogeneous population;
• ‘Accreditation’ is not a single technology but a cluster of activities which interact to produce documented processes and organizational changes; but process–outcome links may be demonstrated for component interventions, and summated as a proxy for overall impact; and
• Case-control studies of institutional accreditation require a large, supportive but uncontaminated universe to sample, compare and monitor over many months; few countries offer this opportunity.
The relative priorities of national accreditation programmes are influenced by local social, political, economic and historical factors. In developed countries the common emphasis is on the evaluation and improvement of safety, clinical effectiveness, consumer information, staff development, purchaser intelligence and accountability, and on reduction in variation. In developing countries the emphasis is on establishing basic facilities and information, and on improving access, in an environment where there may be no established culture of professional responsibility and only very limited resources available for staffing, equipment and buildings.
Some programmes in North America accredit entire health networks, and are beginning to shift from individual health care towards population health; some recent governmental programmes in Europe address public health priorities (such as cardiac health and cancer services) by assessing
local performance of preventive to tertiary services against national service frameworks. In such programmes, measures should include the application of evidence-based medicine (process) and population health gain (outcome), but many health determinants (e.g. housing, education, poverty) remain outside the scope of health care accreditation programmes.
Given these uncertainties and variations, any objective
measures of individual accreditation programmes should be
welcomed in the public domain. Daucourt and Michel [2]
have used the summary reports of 100 hospitals, published
on the Internet, to quantify the common concerns of
ANAES surveys (which are very similar to those in other
countries). They refer in passing to how long the programme has taken to develop, to long turnaround times for reports (nine months), and to inconsistencies between teams in report writing, which may account for some of the variation in the numbers of recommendations and reservations. These are realities faced by any programme, especially a new one.
The study does not bridge the research chasm but it may
encourage others to examine new and established programmes systematically in order to document and share
empirical evidence on the impact of external assessment
against standards.
Charles D. Shaw
Roedean House, Brighton, Sussex, United Kingdom

References
1. International Society for Quality in Health Care. Global review of initiatives to improve quality in health care. Geneva: WHO (in press).
2. Daucourt V, Michel P. Results of the first 100 accreditation procedures in France. Int J Qual Health Care 2003; 15: 463–471.
3. Øvretveit J. Quality evaluation and indicator comparison in health care. Int J Health Plann Manage 2001; 16: 229–241.
4. Clinical Standards Board for Scotland. Developing the review process: an internal evaluation report. Edinburgh: CSBS, 2002.
5. Shaw CD, Collins CD. Health service accreditation: report of a pilot programme for community hospitals. BMJ 1995; 310: 781–784.
6. Bukonda N, Abdallah H, Tembo J, Jay K. Setting up a national hospital accreditation program: the Zambian experience. Bethesda: Quality Assurance Project, 2000.
7. Duckett SJ, Coombs EM. The impact of an evaluation of hospital accreditation. Health Policy Q 1982; 2: 199–208.
8. Scrivens E. Accreditation: protecting the professional or the consumer? Oxford: OUP, 1995.
9. Heidemann E. Moving to global standards for accreditation processes. Int J Qual Health Care 2000; 12: 227–230.
10. International Society for Quality in Health Care. Toolkit for accreditation programmes: some issues in the design and redesign of external assessment and improvement systems. Melbourne: ISQua (in press).