“Computers can land people on Mars, why can’t they get them to work in a
hospital?” Implementation of an Electronic Patient Record System in a UK
Hospital.
M.R. Jones
Judge Institute of Management Studies
University of Cambridge
Trumpington Street
Cambridge CB2 1AG
UK
Email: m.jones@jims.cam.ac.uk
Tel: +44 (0)1223 338188/339700
Fax: +44 (0)1223 339701
Abstract
Ambitious government targets for the use of Information Technology in the UK National
Health Service sit alongside a history of notable project failures. The decision by a UK
hospital to install an advanced, integrated Electronic Patient Record system therefore faced
competing demands and expectations. Describing the process of implementation of this
system over more than 3 years, this paper suggests that its simple categorisation as either a
success or failure is problematic. Rather, the paper explores the differing viewpoints that led some clinicians to express “disappointment” with its performance, while others described its features as “tremendous” and managers suggested that the system had become “taken for granted”. A number of broader phenomena relating to the organisational processes
surrounding information systems implementation are also identified.
1. Introduction
The UK National Health Service (NHS) has been criticised for its low level of expenditure on
information technology relative to other economic sectors and government departments [1],
and the UK government has set ambitious targets for NHS computerisation. Its most recent
initiative “Information for Health” [2], for example, includes calls for the establishment of a
lifelong electronic health record for all people in the country, to which end all acute hospitals
are expected to have implemented systems that will provide “round-the-clock on-line access
to patient records and information about best clinical practice, for all NHS clinicians” by
2005.
The record of the NHS in IT implementation, however, has been, to say the least, somewhat
chequered. In particular, three high-profile failures [3, 4] have attracted public attention and
that of Government bodies (the National Audit Office and Public Accounts Committee)
responsible for the proper expenditure of public funds. The London Ambulance Service
despatch system was scrapped within 48 hours of going live; the Wessex Health Authority
regional information system was cancelled after more than £20mn had been spent; and the
national Hospital Information System pilot projects cost more than £32mn, but achieved less
than 8% of their projected savings after more than a decade of development, requiring one
site to purchase a stopgap system to enable it to continue operation.
Facing such competing demands and experiences, IT implementation in the NHS is thus a
more than usually sensitive area. While there have been a number of post-mortems on the more notorious project failures, there is a relative scarcity of detailed contemporaneous studies of projects that could provide insight into the pressures shaping implementation in
practice. This paper presents a case study of the implementation of an Electronic Patient
Record (EPR) system in a UK hospital in the period 1998-2001. The account is based on the
findings of an evaluation exercise [5], involving two rounds of semi-structured interviews
with a cross section of hospital staff associated with the EPR project (including senior and
junior clinicians, nurses, managers and IT staff) before and after system implementation.
2. The Electronic Patient Record System at Blue Hospital
2.1 Background to the Case
The UK National Health Service (the NHS) is a state-funded system providing care “free at
the point of delivery”. In consequence, levels of service demand are high and resources
constrained. In the past 20 years its status has been politically sensitive, as successive governments have promoted private sector management. Although they have stopped short of outright privatisation, the NHS has been subject to a series of “reforms”, including: the introduction of
a cadre of professional managers; administrative restructuring to separate the “purchasers” of
health services from the units providing them; and the encouragement of private financing of
capital investments (in order to avoid public sector borrowing limits). In addition to their
direct effects, the frequent changes in policy priorities have created a climate of constant
reorganisation and uncertainty.
2.2 The EPR project
The decision to implement an electronic patient record system at Blue (a pseudonym), a 600-bed teaching hospital with 2000 staff and an annual budget of £150mn, was originally taken in 1993. The hospital had recently been created from the merger of two previous hospitals and occupied a state-of-the-art new building, and there was a feeling that the fragmented departmental IT systems, a number of which were becoming seriously out-dated, should be replaced with an integrated system. A request for proposals was issued, but an intervening administrative reorganisation meant that it was not until December 1996 that a Private Finance Initiative (PFI) contract was finally signed.
A team of managers and clinicians was involved in choosing the supplier. Due to a perceived
lack of commercial British systems of suitable sophistication, the search was extended to the
USA and a system used in several leading US teaching hospitals was eventually selected. An
attraction of this system was seen to be that it was clinician-, rather than administration-, oriented. In particular, it included an expert system module for clinical decision support (so that “a consultant can be confident that a Junior Doctor will do right thing at 2am”, as a senior manager put it). Clinical guidelines would also be provided on the hospital Intranet.
Facilities management was to be outsourced.
The system was seen as providing Blue with an advanced, hospital-wide electronic patient
record that would increase efficiency, provide data for research and form the basis for a
future district-wide (and eventually national?) health record. Its expert system, guidelines
and integrated care pathways would support clinical governance (the subject of another
government initiative in 1998 aimed at “making individuals accountable for setting,
maintaining and monitoring performance standards”) and improvements in clinical practice.
Perhaps more significantly, it would enable “internal savings” to cover its costs (estimated at
£8.4mn over 8 years for hardware, software & facilities management, and £4mn internal costs
for development and training). With “Information for Health” requiring all hospitals to adopt
EPR systems, Blue would be at the forefront of NHS practice and would provide a reference
site for the US supplier’s entry to the UK market.
Given the difficulty in creating an integrated system piecemeal, it was decided to go for a
“big-bang” implementation, with the whole hospital switching over to the new system on one
day. This was originally set for June 1998, but, due to delays in delivery of the software,
especially around “anglicising” it for UK conditions, it proved necessary to postpone go-live
twice, and the system was eventually implemented in February 1999. In preparation, 1800
staff were trained on the new system in 8 weeks, including more than 300 senior and 200
junior clinicians. As the hospital’s Chief Executive put it, “after 3 February 1999 there were
no paper records”.
While it had been anticipated that go-live would be difficult, and a 24-hour support service
was put in place by the IT department for the first week, it was clearly a traumatic period for
all involved. An IT manager described it as “the worst experience of my 27 years in the
NHS”. Another manager described it as “a nightmare … the worst time of my working life”.
The effect on everyday working practices was also dramatic. “It was very slow to start with.
We had queues out [of the door]” reported one nurse, while a pharmacist observed that, “the
situation after go live was absolutely terrible. We usually close at 6pm, but had to stay to
9pm to sort things out”. A senior clinician suggested that, “the hospital was close to
collapse”.
Many of the problems eased after the first few months as staff became more familiar with the
system, and a year later a senior IT manager could note that: “the system is in and settled in,
the software is working reasonably well, all intended areas are in, all expected users are using
it, the hardware is working fine and users have broadly accepted the system”. It is now
“taken for granted”, another senior manager claimed. Clinicians also commented that the
availability of the system was “impressive” and that being able to look at past results was
“tremendous”. Specific benefits were also identified. For example its use in pre-admission
screening was said to have reduced the number of operations cancelled because the patient
was not suitable/fit for surgery from 4.2% to 1.8%. The use of the system’s nursing care
plans was also seen as improving patient care.
Not all users were so positive, however. A number of senior clinicians described the system
as “a disappointment”, one suggesting that it was “not the system those involved early on
thought they were getting”, rating it “3 on a scale of 1-10”. As one manager observed, “we
can say it is in, but there is no enthusiasm, except in a few pockets”, while a senior clinician
commented, “we’re familiar with the face, but haven’t grown to love it”. From a technical
standpoint, the interface was considered “slow”, “cumbersome” and “unintuitive”. In
particular, there were complaints about the number of keystrokes required to carry out even
simple tasks and the need to repeatedly enter the same data. The system was described as
“grey, ugly, DOS-based” and users compared it unfavourably with their home PCs and with
their expectations. “You wouldn’t tolerate the amount of repetition and user interface on a
computer at home, so why won’t it work on an £8mn system?” a senior clinician commented.
“Computers can land people on Mars, why can’t they get them to work in a hospital?”
another noted.
Given that the system was operational and in constant use throughout the hospital, it would
be difficult to consider it a failure. At the same time, the dissatisfaction of some clinicians
suggests that neither was it a clear success. It would therefore seem desirable to explore in
more detail the possible reasons for the divergence of opinion.
3. Case Analysis
3.1 Differing Perspectives
A first reason may reflect differing interests. Thus it was recognised that the IT department had an incentive to see the EPR system as a success. Some clinicians claimed that this distorted their views: “the [EPR] team are always very positive … sweep things under the
carpet”. The personal reputation of senior managers was also seen to be at stake: in their
decision to purchase the system, in managing its implementation and in ensuring that benefits
were achieved to cover its costs. Clinicians, on the other hand, were seen as focusing on
problems. “Doctors bank what they have got and focus on two or three things going wrong”
a senior manager commented. Despite purchasing a nominally clinician-oriented system and seeking to involve clinicians at all stages, the EPR project was also seen as having promoted an “us and them” attitude between clinicians and management and the IT department. “EPR is not a care element, it's admin”, a senior doctor observed. “It still comes across as an IT
project regardless of Steering Groups and Site Activation Teams”, noted another. Rather
than enabling clinicians to see IT as serving their needs, the EPR system appeared to have
entrenched divisions.
There were also differences between medical staff in their attitudes. One of the most striking
was the contrast between users for whom the EPR had replaced an ageing legacy system and
those for whom it had replaced an effective departmental specialist system. The legacy
system users were generally positive: “it’s an inevitable improvement over the old system”.
Those who had lost their successful specialist systems, however, were vociferous in their
criticisms, describing the system as “a lot of a disaster”, “a huge white elephant”.
Another characteristic contributing to divergence in users’ views was the throughput of their department. Clinicians in low-throughput departments commented that “reliability is good” and that it was “very helpful” to have patient demographics and results available at any terminal. One clinician in a high-throughput specialty, on the other hand, where the “long-winded” interface had significantly slowed consultations, was described as “apoplectic”, and others had instituted a “50% reduction in clinic [capacity] because of the time getting the computer to work”.
Attitudes also appeared to relate to status. A senior clinician, for example, explained his
reluctance to use the system: “doctors can’t type, only some nurses can, so the patient
thinks you are an idiot”. Senior clinicians also felt that, despite the efforts of the IT
department, “they failed to involve senior medical staff” and that training had not
acknowledged their colleagues’ sensitivities - “they should have got oldest, grumpiest senior
clinicians and paid them at private rate on a Sunday to try the system out”. Perceptions of
insufficient attention to these status concerns were important because, although only a small
proportion of users, senior clinicians could have a significant influence on attitudes towards
the system, both with junior colleagues and through their involvement in major hospital
bodies.
Junior clinicians were felt to have fewer problems with the EPR system. “Junior staff are
more sympathetic … by and large it worked for them in 2-3 months” a senior clinician
commented, although this was seen to depend on their prior experience (as was generally
confirmed by a survey of doctors in training). They could also be expected to cover for their
senior colleagues. “Junior doctors earned their salaries” at go-live, as one manager noted.
Finally, clinicians and managers argued that “nursing and clerical staff will do as they are
advised”, “nurses will just get on with it”. Nurses did not disagree. “Nurses will cope with
change”, one observed. For them the EPR system was “alright”, “all the information is in
there, in one place” and staff were “generally happy”. The less choice people had about
using the system, therefore, the less they appeared to criticise it. Not that this necessarily
meant that they had no complaints, but that they felt that they would be expected to “get on
with it”, whatever the difficulties they faced.
3.2 Broader Influences
While it is possible to gain some insights on the divergence of views on the system in terms
of different interests, experiences and expectations in relation to its use, the case may also be
seen as illustrating a number of broader phenomena relating to the organisational processes
surrounding information systems implementation.
3.2.1 Universal standards?
An assumption of a unified EPR system, like that of other large infrastructural IS such as
Enterprise Resource Planning Systems [6], is that sufficiently standard processes may be
identified across organisations such that they can be adapted to a common model, or the
common model customised to local variation at reasonable cost. The Blue case may be seen
to illustrate this assumption at two levels. Firstly, with respect to a common healthcare
model between the US and UK, and secondly, regarding the needs of different hospital
specialties.
As was noted earlier, the implementation of Blue’s EPR system was delayed due to
unanticipated difficulties in anglicising the US software. A senior clinician involved in the
system selection commented, “when we visited [sites in the US], they were positive … we
didn’t twig to the difference”. This was seen as coming down to the very different principles
of US and UK healthcare: the one focused on monitoring costs of treatment and the other on
numbers treated. It was not that cost control was irrelevant in the UK, but the need to tie all
activities to specific chargeable “spells” and “episodes” was largely absent. Indeed some common UK practices, such as the forward ordering of tests (necessary because resource constraints mean that they cannot be carried out immediately), were particularly difficult to accommodate in the US system’s model. US doctors were also considered to have fewer patients and more
administrative support, meaning that the inconveniences of the system were not so visible to
them. In the UK, in contrast, a senior clinician observed, “the burden falls on people who are
very expensive as terminal operators [and this is] not a proper use of their time”.
The assumption of common data processes across all hospital departments also created
problems. Apart from the differences in throughput already mentioned, some departments
claimed that their distinctive requirements could not be accommodated by a common system.
“There is nowhere in the country where [my specialism] is a good part of a hospital-wide [EPR system]”, a senior clinician argued. The only solution, it was suggested, would be to
provide a specific specialist module with an interface to the common system. “Sadly, both
my colleagues would put up with the upheaval [rather than continue with the EPR system]”,
it was claimed.
3.2.2 Changing workplace relationships?
Although described as clinician-oriented, the EPR system could also be seen as an opportunity for changes in workplace relationships that were not necessarily wholly advantageous to clinicians. For example, the introduction of clinical guidelines, expert
systems for decision support, and integrated care pathways may be seen as providing external
influence on previously autonomous work practices. More subtle effects were felt to result
from the way in which the system organised order communications, making the requesting of
unusual tests more difficult, and facilitating standard procedures. A senior clinician
commented that the EPR system “leads to cheaper, dumber medicine” and that consequently
“some colleagues feel they are being controlled”.
In addition to the clinical governance aspects of the project that had been specifically
identified as one of the reasons for its adoption, and for which there appeared to be general
acceptance, the EPR system may be seen as “informating” [7] the work of clinicians,
providing new mechanisms for surveillance of their work. An important principle of the EPR
system, for example, heavily emphasised in training, was that doctors were responsible for
everything that was done with their usernames. Since these allowed access to all patient
records (except for those of the HIV/GUM clinic), it was explained that an audit could be carried out if any breaches of patient privacy were suspected. On a more practical level, a
permanent record of all diagnoses, tests, and treatments made by a doctor was now visible to
colleagues and management. This was seen to have advantages in tracking the treatment
process, but was also recognised as providing traceability should errors occur or complaints
be made. Of course, this had been possible before with paper records, but the electronic
record was seen to be considerably more legible, comprehensive, detailed and accessible.
Although junior doctors were quick to pick up on this surveillance capability during training,
it was not identified as a major worry by clinicians generally. This lack of concern may have
reflected scepticism about the quality of the electronic record. For example, a senior clinician
claimed that the EPR system was “not useful for audit purposes, and if the hospital uses it for
audit they will get unreliable results”. One reason for this was that, as several clinicians
admitted, the usability problems meant that they tended to enter the minimum data possible.
There were also felt to be many errors in the system, due to users’ inexperience and the difficulty of making corrections. As a result, a senior clinician claimed, there was “very little clinical information in [the EPR system]”. The same clinician, however, also lamented that there was a “huge amount of useful information there, but it's locked up”. This reflected a more general problem: because of the shortage of funds and of recruitment and retention difficulties, there were too few programmers to write the code needed to extract data from the data
warehouse. This affected not just access to research data, which had been one of the original
justifications for the system, but also management information that would have allowed the
effectiveness of the system to be assessed. Having invested heavily in the purchase and
implementation of an information system, the hospital lacked the resources to make best use
of it (or even to know whether it was doing so).
3.2.3 The wider context
While the EPR implementation at Blue may be seen to reflect specific local conditions, it also
needs to be understood within the wider context of the NHS. Thus, in a resource-constrained
health system, funding associated with national policy initiatives may significantly shape
local practice. Continuing commitment to a unified, national service would also seem likely
to encourage local efforts to pursue national priorities. For example, the initial PFI proposal
may be seen as matching the prevailing policy climate. That, by luck or judgement, it could
also later be presented as supporting government initiatives on clinical governance and
“Information for Health” was probably helpful, too, in sustaining project momentum.
The national context, however, was not just influential in providing legitimacy for the project,
but also offered incentives for its successful completion. For example, installing one of the
most advanced EPR systems in the country would demonstrate the hospital’s status as a
national leader, and bring prestige and recognition to the individuals involved. Conscious of
the NHS’s less than glorious record with IT projects, there may also have been a desire to
show that it was possible to get things right (for once!). Certainly, in the efforts devoted to
user involvement and training there was an awareness of lessons from previous experiences.
That this was not ultimately sufficient to ensure clinicians’ ‘ownership’ of the system, or to dispel their “cynicism” (as was noted earlier in the comments about it “still coming across as an IT project”), did not appear to be for want of trying on the part of the IT team.
3.2.4 Decision-making process
Another way of understanding the project’s momentum may be in terms of decision
escalation [8]: that is, after a certain point a project may acquire an aura of inevitability and irreversibility that makes it very difficult to alter course. Thus, as an IT manager noted, “The [hospital] has made a huge investment, we can’t drop it now”, while another manager commented “it's too expensive to write off early”. Moreover, as has already been
noted, it was not just the financial sunk costs that were at stake, but also personal and
institutional reputations. Thus, as critical clinicians, aware of the fallout from past NHS IT
project failures, observed, “nobody would be prepared to scrap it, because it would go to
Public Accounts Committee” and “after Wessex, the National Audit Office is on the alert”.
A sense of irreversibility about the EPR project may also be interpreted positively, however,
as indicating that the system was indeed becoming “taken for granted”. Support for this may
be suggested by a senior clinician’s assessment that the hospital had undergone a “typical
change process of initial reluctance, then acceptance” and was now “reaching the dependence
stage”. Looking forward, other clinicians also felt that, despite the difficulties, it had been
right to go ahead with the EPR system. “My gut feeling is that this sort of system will be in
every hospital eventually … it’s the right thing to do” one noted. “Given the way technology
is going, this is the way to go. People are being dragged kicking and screaming, due to
under-funding, but we have to press on”, commented another.
4. Concluding Remarks
While users’ concerns undoubtedly highlighted important problems with the system, whose resolution has continued to demand considerable resources, it may not be so surprising that implementing an information system all at once across a whole hospital, thereby introducing IT into almost every staff member’s work practices, provoked some
strong reactions. That some senior clinicians, it is reported, are now envisaging the
achievement of a totally paperless EPR would also suggest that, despite initial reservations,
the system is becoming accepted, in some quarters at least. At the same time, it would seem
important to recognise the significance of users’ concerns, the possible reasons for their
raising them, and the process of change that has been undergone. In this, as this paper has
sought to illustrate, an understanding of the complex interplay of both social (for example,
status concerns) and technical (for example, standardised infrastructures) aspects of
information systems may make an important contribution.
References
1. Computer Weekly. Efficiency in the health service. Computer Weekly, 31 May 2001 (accessed via http://www.cw360.com).
2. Burns, F. Information for Health. Leeds: NHS Executive, 1998.
3. Beynon-Davies, P. Information systems ‘failure’: the case of the London Ambulance Service’s Computer Aided Despatch project. European Journal of Information Systems 1995; 4: 171-184.
4. National Audit Office. The Hospital Information Support Systems Initiative. HC332. London: HMSO, 1996.
5. Jones, M.R. An interpretive method for the formative evaluation of an Electronic Patient Record System. In: Proceedings of the 8th European Conference on IT Evaluation, Oxford, September 16-17, 2001.
6. Hanseth, O. & Braa, K. Technology as traitor: SAP infrastructures in global organizations. In: Hirschheim, R., Newman, M. and DeGross, J.I. (eds.) Proceedings of the 19th International Conference on Information Systems, Helsinki, Finland, December 13-16, 1998, pp. 188-196.
7. Zuboff, S. In the Age of the Smart Machine. Oxford: Heinemann Educational, 1988.
8. Drummond, H. Escalation in Decision Making. Oxford: Oxford University Press, 1996.