The Evolution of a Framework for Assessing
Hospital Information Systems in South Africa
Vincent Shaw (a+b), Edoardo Jacucci (b), Jørn Braa (b)
(a) Health Information Systems Programme
School of Public Health
University of the Western Cape
Cape Town, South Africa
(b) Department of Informatics
University of Oslo
PB 1080, Blindern
N-0316 Oslo, Norway
vshaw@wol.co.za, edoardo@ifi.uio.no, jbraa@ifi.uio.no
Abstract:
This paper provides an insight into the process of an ex-ante or exploratory assessment of Information
Systems (IS) in hospitals in developing countries. Based on a case study of an assessment process
conducted in 13 hospitals in the Eastern Cape Province of South Africa, the paper makes two
contributions. Firstly, it supports the claim that prior-to-implementation assessment of IS in developing
countries is of vital importance in reducing the risk of failure. Secondly, elaborating on the
findings from the case, it contributes to the ongoing epistemological discussion around positivist vs.
interpretivist approaches in IS evaluation. We submit that a balanced approach is necessary, where the
balance is determined by contingent factors of the context of work.
1. INTRODUCTION
This paper provides an insight into the process of an ex-ante or exploratory assessment of Information
Systems (IS) in hospitals in developing countries. Drawing on the existing body of literature on IS
evaluation, the paper positions itself inside the discussion around positivist vs. interpretivist evaluation
approaches (Hirschheim & Smithson 1999). The research presented here is motivated by the recognition
that a proper exploratory assessment of the existing situation is particularly important when preparing IS
implementation projects in developing countries (Forster & Cornford 1995).
The aim of the paper is thus to address the challenges and opportunities implicit in an assessment prior to
implementation, and to elaborate the findings within the above-mentioned epistemological discussion in the
IS evaluation community. The contribution of the paper is a case-study-based insight into the importance of
the assessment phase, in support of Forster and Cornford’s view, and the conclusion that a balance is needed
between positivist and interpretivist approaches, and that this balance depends on the contingent
characteristics of the context of work.
The paper is structured as follows. First we will provide a review of the relevant literature in IS evaluation
in general and in the specific case of developing countries. Secondly, we will describe the methodological
approach. Thirdly, we will present the case. Then we will summarize our findings in the discussion section.
Finally, we will draw the conclusion of our research and suggest topics for further research.
2. LITERATURE REVIEW
In this section we review the literature on IS evaluation. In particular, we will position our
paper within the discussion around ‘exploratory’ or ‘ex-ante’ evaluation (Smithson & Tsiavos 2004).
Referring to an ongoing discussion on the future trends and directions of research in the IS evaluation field,
we will also raise some issues based on the case presented here which will be addressed in the discussion
section. We will then highlight the main challenges of IS projects in developing countries and the crucial
role of pre-implementation IS assessment in addressing these challenges.
2.1 IS Assessment and IS Evaluation
This paper investigates the nature and the role of a pre-implementation assessment phase of an information
system. Broadly speaking, an assessment of an information system falls into the more general theme of IS
evaluation. Smithson and Tsiavos define IS evaluation as “[…] an act of categorizing, classifying, and
reducing the world to a series of numbers – or constructed classifications – to enable this representation to
be compared and managed […], thus to be able to function as the basis for rational decision making”
(Smithson & Tsiavos 2004). By creating representations of reality, the evaluation provides a “basic
feedback function” to managers and it constitutes a “fundamental component of the organizational learning
process”. It is hence “essential for problem diagnosis, planning and the reduction of uncertainty” (Smithson
& Hirschheim 1998). (This is also discussed in Angell & Smithson 1991; Symons & Walsham 1991;
Hawgood & Land 1988).
However, IS evaluation is a process which may concern the whole life-cycle of the system (Hirschheim &
Smithson 1999). It is thus worthwhile to differentiate, for example, the nature and role of an evaluation
prior to implementation from one after implementation. In particular, this paper deals with IS
assessment as a form of ex-ante or ‘exploratory’ evaluation. The purpose of a pre-implementation IS
assessment is to provide a picture of the past and present situation in order to inform future decisions. That
is, it is “the construction of a possible future” by inscribing it into the present and future decisions of the
organizations (Smithson & Tsiavos 2004).
While the review presented above helps to understand the purpose of a pre-implementation evaluation or
assessment, it remains to be clarified ‘how’ the assessment can be done. It is here important to underline
that an evaluation of an IS is in fact an evaluation of a ‘social’ system (Walsham et al. 1990; Hirschheim &
Smithson 1999). This observation has practical implications in at least three directions:
(1) no matter how detailed and structured the set of criteria or categories of the evaluation, their
mere measurement and quantification can hardly provide an objective representation (Mason &
Swanson 1981);
(2) as social systems evolve over time and space, their evaluation at any point of time cannot have
a long term validity;
(3) social systems are also political systems which make the evaluation process itself a political
activity (Smithson & Hirschheim 1998; Wilson & Howcroft 2000).
These challenges have so far guided the research on IS evaluation to explore different epistemologies and
ontologies to investigate its nature. As a result, IS evaluation approaches stretch along a continuum ranging
from highly objective and rational approaches to very subjective and political ones. Accordingly, the former
type of approach tends to build on positivist investigation, while the latter encounters and represents
reality through more phenomenological and interpretive explorations (Hirschheim & Smithson 1999).
This epistemological question is still open and rests on the evidence that an excessive concentration on
tools and techniques to objectively categorize, measure and quantify reality finds its limits when applied to
social systems. The validity of the positivist stance (which has so far received most of the attention of the
research community) is hence questioned in favour of the introduction of a phenomenological one
(Hirschheim & Smithson 1999).
We wish here to position this paper within this discussion and raise some issues which we will address in
the discussion section. The case presented here provides a rich insight into the motivation and learning
process of a pre-implementation IS assessment. The evidence will show that an initially positivist approach
needed to be changed into a mixed positivist and phenomenological one which could integrate indisputable
hard facts with social, political, and behavioral aspects of the IS. We will reflect on this learning process,
trying to contribute to the ongoing discussion with insights into the real-world practice and needs of an IS
assessment. Specifically we will try to understand which approach better serves the particular needs of an
assessment of health IS in developing countries.
In the next section we will review some of the literature related to IS evaluation and assessment in
developing countries.
2.2 IS Assessment in Developing Countries
IT/IS implementation projects in developing countries are often seen as integral parts of development
policies. Yet, the particular environmental and infrastructural conditions found in developing countries set
considerable challenges to the successful completion of such projects. Physical infrastructure is often
lacking, local human capacity is deficient, staff turnover is high, and socio-economic conditions are poor.
Last but not least, there are also different cultural backgrounds which need to be taken into consideration
(Bell & Wood-Harper 1990). As a result, any IS implementation must carefully take into account the social
environment in which it is pursued (Walsham et al. 1990; Wilson and Gouws 2003; Lippeveld et al. 2000;
Littlejohns et al. 2003).
In this setting, the evaluation of an IS can be seen as a way of reducing the uncertainty, acquiring local
knowledge, and thus increasing the likelihood of success of the implementation. Understanding as much as
possible before implementation is initiated is important to ensure that implementation strategies are
appropriate and take the socio-economic realities into account. Should failure result, wasting scarce
resources carries a far greater cost than failure would in a well-resourced setting. Forster and
Cornford (1995) underline how “[…] evaluation of prior systems development work and of other projects
in the domain is thus vital to inform the developer and to permit enhanced planning and management
functions”.
While some authors do provide specific frameworks for conducting evaluations in these settings (Forster
and Cornford 1995; LaFond & Fields 2003; Lippeveld et al. 2000), we will not focus on those
specifically. Rather, we will focus on the purpose and content of the evaluation, and explore in more
depth and detail ‘how’ it can increase the likelihood of success of health IS implementation in
developing countries.
3. METHODOLOGY
The research described in this paper falls within the interpretive research tradition described by Walsham
(1995). The research describes the development of a tool for assessing hospital information systems.
The case study was conducted over a period of four months and involved assessments of information systems
at 13 hospitals. It also draws on the experience of one of the authors (Shaw) in assessing hospital
information systems in two central hospitals in Malawi (this work spanned a nine month period and
involved about 10 weeks spent on site in these hospitals).
The data was collected by directly observing or participating in the process. The process was an iterative
one, utilising participatory research methods (small group discussions, focus group discussions, reflection
and analysis) to build on the experience of a wide variety of individuals (both information systems
consultants and health workers and managers) from differing backgrounds and professions.
Data sources include:
• Notes written by one of the authors (VS) as part of the process of conducting evaluations;
• Mini-disc recordings of meetings and discussions held during the evaluation process;
• Written reports following assessments, and notes, documents and copies of reports obtained
during the assessments;
• Observation of health workers in their daily activities;
• Photographs taken during the hospital assessments.
4. CASE STUDY
In this section we describe the context in which the case study is situated, and give a brief overview of the
circumstances that led to the initiation of the project. The main focus, however, is on the development of an
assessment format, described in three parts: the first assessment format; the experiences of
applying it, and how these pointed to the need for a more open format; and the development of the
subsequent format. The end of the section summarises the learning from this process and suggests an
approach that might be adopted in future.
4.1 The context of the case study
The development of the assessment format took place in the Eastern Cape Province (ECP) of South Africa.
This is a predominantly rural province, with health indices similar to those in other developing contexts:
the infant mortality rate (IMR) is around 61.2 deaths per 1,000 live births, the highest in South Africa,
which has an average IMR of 45.4. In this province there is a 48% unemployment rate, only 31% of
people have access to electricity, and 24% of homes have water in the dwelling (SADHS, 1998).
Hospitals are graded according to the level of care that they provide. District Hospitals are the first level of
hospital care, and provide care to patients referred to them from the primary health care facilities. District
hospitals refer patients to regional hospitals, which are the second level of care provided in the system.
Regional hospitals generally employ specialists to provide more sophisticated services. Specialised
hospitals provide specialised care to a select group of patients (psychiatric and tuberculosis hospitals are
examples of these types of hospitals). The development of the assessment format took place as part of a
twelve-month project designed to improve access to information, and its use by managers, in thirteen
hospitals – nine of these were regional hospitals, two were specialised hospitals, and two were district
hospitals.
4.2 Hospital information systems used in the ECP
Hospitals in this province have rudimentary information systems. None of them uses electronic patient
information systems. Patient records are paper-based (patient files hold the records of treatment and care
provided), and the information system as a whole is also largely paper-based. Essentially, the steps in the
process of generating information in these hospitals consist of the following (a short illustrative sketch
follows the list):
1. Use of registers to record certain pieces of information from a patient interaction with health
service providers. This takes place at most points of service delivery. The service delivery points
are termed “reporting units” – being sources of raw data for the information system. The
information that is recorded in the register is usually information about the patient (age, sex,
residence, etc., as well as clinical information like diagnosis, outcome and types of procedures
performed);
2. The generation of a monthly summary report from the register. This contains anonymous,
aggregated patient data (total numbers of patients seen, numbers of patients seen with certain
diagnoses, or in certain age groups), and is sent to the management level;
3. Collation of the information into a hospital report format. The report format contains raw data
collated from the reporting units, and aggregated to hospital level;
4. Submission of this information to the provincial office where it is entered into a database for
analysis and use.
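To make the first half of this flow concrete, the following is a minimal sketch in Python of how steps 1 and 2 might be modelled. It is an illustration only: the class, field and function names (RegisterEntry, diagnosis, monthly_summary) are our own assumptions and are not part of the systems described in the case.

```python
# Illustrative sketch of steps 1-2: register entries recorded at a reporting
# unit (step 1) are aggregated into an anonymous monthly summary (step 2).
# All names and fields here are assumptions made for illustration.
from collections import Counter
from dataclasses import dataclass

@dataclass
class RegisterEntry:
    """One line in a ward register."""
    age: int
    sex: str
    diagnosis: str
    outcome: str

def monthly_summary(entries):
    """Aggregate register entries into anonymous, aggregated totals."""
    return {
        "total_patients": len(entries),
        "by_diagnosis": dict(Counter(e.diagnosis for e in entries)),
        "under_5": sum(1 for e in entries if e.age < 5),
    }

if __name__ == "__main__":
    ward_register = [
        RegisterEntry(3, "F", "pneumonia", "discharged"),
        RegisterEntry(34, "M", "tuberculosis", "transferred"),
    ]
    print(monthly_summary(ward_register))
```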
There is little feedback, and very little use of information at either reporting unit level or management level.
The flow of information is generally unco-ordinated and often results in incomplete data at the management
level.
The project was designed to address these problems. It was envisaged that the paper based system of
collecting information from patient interactions would continue, but that the monthly summary report
would be entered into the database at a central point in the hospital. This would enable the raw data to be
processed at the hospital, and would enable the production of reports both to the reporting units (as
feedback) and to management.
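The following is a minimal sketch, again in Python and again under our own assumed names, of the hospital-level step envisaged by the project: monthly summaries from reporting units are captured in one central store, collated to hospital totals, and fed back to the reporting units and to management.

```python
# Illustrative sketch of the envisaged hospital-level processing: capture
# monthly summaries centrally, collate to hospital level, and produce feedback.
# Function and variable names are assumptions made for illustration.
from collections import defaultdict

hospital_db = []  # central store of (reporting_unit, data_element, value) rows

def capture_summary(reporting_unit, summary):
    """Enter a reporting unit's monthly summary at the central point."""
    for element, value in summary.items():
        hospital_db.append((reporting_unit, element, value))

def hospital_totals():
    """Collate raw data across reporting units to hospital level."""
    totals = defaultdict(int)
    for _, element, value in hospital_db:
        totals[element] += value
    return dict(totals)

def feedback_report(reporting_unit):
    """Feedback to a unit: its own figures alongside the hospital totals."""
    own = {e: v for u, e, v in hospital_db if u == reporting_unit}
    return {"unit": own, "hospital": hospital_totals()}

if __name__ == "__main__":
    capture_summary("maternity ward", {"admissions": 120, "deaths": 2})
    capture_summary("male medical ward", {"admissions": 95, "deaths": 4})
    print(feedback_report("maternity ward"))
```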
The project design included three phases (see table 1).
Phase 1:
1. Assessment of the existing information systems in the hospitals;
2. Define an Essential Dataset (* see endnote) for the hospital services in the province, and for each hospital;
3. Proposing a data flow policy, including the roles and responsibilities of staff in the information processing cycle;
4. Refinement of data collection tools for the reporting units;
5. Customisation of the District Health Information System (HISP) software to accommodate all the reporting units in the hospital – this with the aim of enabling the hospital to collate and analyse its own data;
6. Developing a training plan.

Phase 2:
1. Installing the customised database in the hospitals;
2. Provide training to selected groups on the use of information;
3. Overseeing the ongoing customisation of the database.

Phase 3:
1. Provide ongoing support for the development of the information system in the hospitals;
2. Conduct an assessment of the implementation process to highlight lessons learnt from the project.

Table 1 Phases designed in the project
The case study description focuses on the assessment of the existing information system in the hospitals
(phase 1) and briefly draws on lessons learnt from the assessment of the implementation process (phase 3)
that are relevant to conducting an assessment at the outset of such a project.
4.3 The first assessment format
In preparation for the assessment, a few members of the team prepared an assessment format. Based on the
ultimate goal of the project (achieving the regular production of complete and reliable data reports from the
hospitals) the team focused on the three objectively essential factors that needed to be in place. Firstly, all
the necessary data (the ones defined in the National EDS) needed to be collected. Secondly, there had to be
some form or tool to collect them. Finally, there needed to be one or more persons appointed to take care of
the process and its outcome. The need for these three preconditions was reflected in the first version of the
assessment tool. In this first stage, the assessment process included a preliminary meeting with the
management. During the meeting the project was supposed to be introduced and a complete map of all
existing reporting units (that is wards where data is collected) was created. The next step was to meet with
the information officer, if there was one appointed in that hospital. Subsequently, the assessment team
would visit the reporting units one by one to assess if data from the EDS were collected and how. The
assessment tool allowed reporting this by providing a list of the data elements on the left and a space for
ticking on the right. The general assumption behind this assessment approach was that it had to be a hard
fact: either the data was collected or it was not.
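As an illustration only (the project’s actual format is not reproduced in the paper), the first format can be thought of as a fixed tick-list over the EDS data elements, recorded per reporting unit; the element names used below are hypothetical.

```python
# Illustrative sketch of the first assessment format: a fixed list of EDS
# elements on the left and a tick (collected: yes/no) on the right.
# The element names are hypothetical examples, not the actual National EDS.
EDS_ELEMENTS = [
    "inpatient admissions",
    "inpatient deaths",
    "outpatient headcount",
    "caesarean sections",
]

def assess_reporting_unit(unit_name, collected_elements):
    """Return a checklist mapping each EDS element to collected / not collected."""
    return {
        "reporting_unit": unit_name,
        "checklist": {e: (e in collected_elements) for e in EDS_ELEMENTS},
    }

if __name__ == "__main__":
    result = assess_reporting_unit(
        "maternity ward", {"inpatient admissions", "caesarean sections"})
    for element, ticked in result["checklist"].items():
        print(f"{'[x]' if ticked else '[ ]'} {element}")
```

The limitation discussed in the next section is already visible in this sketch: the format records only the yes/no outcome, with no room for noting why an element is or is not collected.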
The first pilot assessment was conducted at two regional hospitals.
4.4 Experiences during the pilot assessment
After conducting the assessment, the project team met and discussed the usefulness of the tool and the
appropriateness of the approach. Overall, the team found the assessment tool limited. The remarks
underlined how the tool was too focussed on the reporting of data, and did not make allowance for
recording innovation, human interactions, and the impact of different processes. It became
apparent that, while assessing the hard fact of whether a certain data element was collected or not was
certainly relevant, it was even more important for the success of the project to understand why it was not
collected, or why sometimes a clear answer could not be given. Here are some examples of observations
made by the team regarding the situations they had been assessing:
• Data collection was “too dependant on individuals….when this person is absent, data does not get
collected”.
• Doctors and nurses had different needs for information, and often the needs of doctors were imposed
on those of nurses, without taking the nurses’ needs into consideration (an example was cited where an
orthopaedic doctor had introduced a new data collection system to address his specific needs).
• Requests for information came from different sources (district management, provincial management,
national management) and, because there was no co-ordination of these requests, health care workers
were left to find ways of collecting information that often resulted in duplication of data (one ward
submitted three different formats of the same “mid-night census” form to different people).
All these snapshots provided useful insights which could together give a “flavour” or a “gut feeling” of
what was going on in the hospital. The assessment format at this stage did not allow this rich
information to be reported. Hence a format was required which would make it possible both to assess the
more objective aspects of the existing information system and to describe the processes behind them, by
documenting the explorative understanding of the assessor.
In summary, the following comments were used as a basis for redesigning the assessment tool and
approach:
• It was more important to start from the perspective of “what data is being collected” by the reporting
units, than “what data should be collected by the reporting units”. This reflects a significant shift from
being focussed on the needs of the system (reporting according to the EDS) to being focussed on what
is happening at the service delivery point. In many instances the reporting units were collecting most
of the data required in terms of the National EDS, or they could adapt their systems easily enough in
order to accommodate the requirements of the National EDS.
• The above point also reflected a shift in the team’s thinking. The initial thinking was to ensure that the
correct data was collected – this reflected an implementation centred approach that was initially
adopted by the team. However, as the interactions occurred between the team and the health workers,
the team found itself naturally moving into the role of trying to understand why data was or was not
collected by the health workers, and how they used the data that they collected. This reflected a more
health worker centred approach – an approach that should be developed in order to understand the
health workers, so that the recommendations that resulted were appropriate to their needs. The team
realised that the assessment process was not an isolated step in the implementation process, but in itself
it contributed to the initiation of the implementation process as well as informing the implementation
process.
• Finally, during the assessment some interesting local innovations were discovered. For the success of
the project, building on such innovations and disseminating them among the other hospitals was seen as
highly relevant. The assessment had to allow for such things to be documented.
4.5 The second assessment format
Based on the experiences during the pilot assessment, a second format for assessing the information system
was developed. It was much more generic, and could accommodate peculiarities and innovations found in
the reporting units. It assessed two main aspects for each reporting unit:
• Data collected and the tools associated with the data collection process;
• Reporting process and the forms used for reports.
In order to allow assessors to add, when necessary, their interpretations of how the above was happening,
the new format also made it possible to attach comments to the two more objective aspects listed above.
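A minimal sketch of what a record in this second, more open format might look like is given below; the field names and example values are our own illustrative assumptions, not the project’s actual form.

```python
# Illustrative sketch of the second assessment format: for each reporting unit,
# the data collected and the associated tools, the reporting process and forms,
# and free-text comments capturing the assessor's interpretation.
from dataclasses import dataclass, field
from typing import List

@dataclass
class ReportingUnitAssessment:
    unit: str
    data_collected: List[str]        # what is actually collected
    collection_tools: List[str]      # registers, tally sheets, ad hoc forms
    reporting_forms: List[str]       # forms used for monthly reports
    comments: List[str] = field(default_factory=list)  # interpretive observations

if __name__ == "__main__":
    ward = ReportingUnitAssessment(
        unit="example ward",
        data_collected=["admissions", "procedures performed"],
        collection_tools=["admission register", "doctor's own procedure book"],
        reporting_forms=["monthly summary form"],
        comments=[
            "Collection depends on one clerk; little is recorded when absent.",
            "Procedure book introduced by a doctor for his own needs.",
        ],
    )
    print(ward)
```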
The revised format was applied in each of the remaining hospitals. The same process was followed in most
cases (introductory meeting with management, assessment of various reporting units in the hospital,
compilation of brief report and feedback to management).
As a final step, the assessment team prepared a report and discussed the situation assessed in the hospital
with the senior management. In these discussions, additional information from these groups was obtained
on:
• the information received – usually confirmation of what had been found at reporting unit level;
• the actual use of information;
• staffing issues related to the processing of information – roles and responsibilities of staff (e.g. the use
of information by supervisors).
4.6 Summary
In summary, we can see that the assessment team moved from having a rigid format for assessing an
information system that was focussed on information requirements at a national (and provincial) level, to
one which was more focussed on the tools used by reporting unit staff, and which allowed a more in-depth
exploration of the reasons why they did what they did. In addition, discussions with supervisors, senior
managers, and information staff complemented the information gathered from reporting units to allow the
generation of a report. This report provided both a “live” representation of the situation in the hospital, and
a sense of direction to the forthcoming implementation and improvement. Table 2 shows the outline of the
report which was generated. The structure of the assessment report already reflects the lines of action of the
forthcoming implementation (AF refers to Assessment Format for data collection tools or reporting tools).
Report content | Source of information from reporting unit | Source of information (other)
1. Data flow policy | AF: Data Collection | Discussion: information unit
2. Essential dataset | AF: Data Reports | Discussion: supervisors, senior managers
3. Steps in the information cycle:
3a. Data collection | AF: Data Collection |
3b. Data collation and analysis | AF: Data Collection | Discussion: supervisors, senior managers, information unit
3c. Data processing and presentation | AF: Data Reports | Discussion: supervisors, senior managers, information unit
3d. Use of information | AF: Data Reports | Discussion: supervisors, senior managers, information unit
4. Computers and use of computers | Visit to ward | AF: Computers

Table 2 Outline of the report generated after the assessment
It was concluded that the process of conducting the assessments is as important as the data obtained;
namely, the introductory discussions with senior management to explore the process that will be followed,
the discussions with staff in the reporting units, and the presentation of the report to management
afterwards. The experience gained during this assessment showed that the assessment phase is intricately
linked to the process of improving the system, and can both inform and contribute to the subsequent
implementation and improvement process.
5. DISCUSSION
In this section we will explore the relevance of an ex-ante assessment, and how it contributes to the
implementation process. We also explore the need to balance the assessment between positivist and
interpretivist styles.
5.1 Importance of Pre-implementation HIS Assessments
In the literature review we noted that the implementation of information systems in developing countries
faces “considerable challenges”. We suggested that through the
process of conducting an assessment, a deeper understanding of local knowledge is obtained, and this
assists in informing the implementation process. It also reduces “uncertainty” or the lack of knowledge
about a system. This was our experience in this case study. In the case we describe the importance of
getting alongside staff to understand what they do and why they do it that way. In doing this the team was
able to:
• Identify best practices that worked in the context of that hospital, and which might be able to
be shared and applied in other similar settings;
• Understand the dynamics around aspects of the information system that were not working, so
that the intervention could address some of the core reasons for their failure, rather than the
symptoms.
In fact, while the data to be collected and the processes around collecting it can often be decided
remotely, the successful implementation of the data collection processes requires a deep understanding of
the flow of patients through the wards, the positioning of staff at these points, and their ability to record
information. In addition, an understanding of the culture within the unit and their commitment or lack of it
to collect the information is helpful (e.g. does the supervisor look at the information, does she provide
feedback on the information). This emphasises the overwhelming feeling that the team experienced during
the assessment process, namely that the data and the systems to collect the data are a product of a complex
social system. They are an external manifestation of a complex system of interaction between humans
(health care workers and managers and their patients) and their environment.
Through conducting the assessment, we were able to gain a better understanding of the context in which the
information system operated, and this not only informed our implementation process, but also changed the
implementation methodology. We believe these changes were appropriate and contributed to a more
successful implementation process.
5.2 Balancing Positivism and Interpretivism in IS Assessments
In the literature review we provided some background to the different types of information system
assessments. We described the continuum along which information system assessments are stretched, from
the positivist (often quantitative) to the more subjective and political interpretivist approaches. The case
study describes how the assessment approach was felt to be lacking when it focussed more on the positivist end
of the spectrum (the first version of the assessment tool), and how the format changed to accommodate a
richer understanding of the context in which the assessment was conducted.
In this case study we find, in fact, a need to balance both ends of the spectrum in a single process. Both
positivist and interpretivist approaches help to gain a deeper understanding of the system than either could
do alone. Thus the quantification of the data collected, its completeness, and accuracy, the numbers of staff
available to perform certain functions, and the size of the hospital, are useful pieces of information.
However, this alone is insufficient. More information is needed about the functioning and the systems
behind these numbers, and much of that information emerges naturally through the process of human
interaction during the assessment.
If the positivist approach is utilised exclusively, one would record that ward X does not collect information
Y. However, when exploring this in depth, one finds that there are many reasons why this information is
not readily available, for example:
• The senior staff (in the case of the hospital, the doctors) may not regard this as important and
so do not support the use of resources to collect that information;
• It may appear to be “not collected” because the person responsible is absent that day;
• Health workers (particularly nurses, and ward clerks) are subject to direct demands from
many different people (patients, their immediate supervisors, their colleagues with whom
they work – other members of the health care team who may be senior to them) and indirect
demands from distant managers at district, regional or national levels, and the public in
general. What information gets collected is a complex reflection of these demands, and their
own needs for information.
All these explanations turn out to be relevant pieces of information when implementing the improvements.
In conclusion, the evolution of the assessment format reflected:
• A change towards a more interpretivist approach (although the format included a more
positivist assessment of the data collected), in that the tool allowed a more open-ended
assessment of what was happening, and thus greater exploration of the context in which the
information was collected;
• A change towards a more “health worker centred approach” – looking at what tools were in
use, and therefore what information was collected, rather than concentrating on what was or
was not collected without exploring the reasons behind the action.
We see therefore that the assessment process should be sensitive to, and reflect, the context in which the
information system is developed. The process itself can result in change and can play a part in
improving the information system – this by virtue of the interaction that takes place through the assessment
process. This of course is then intricately linked to the recommendations for implementation, which are
more likely to be appropriate and to succeed.
6. CONCLUSIONS AND FURTHER RESEARCH
This paper provides a rich description of the process of developing an ex-ante or exploratory assessment
format for information systems in hospitals in developing countries. We provide insights into the
importance of conducting such an assessment in a hospital context, and argue the need for a balance
between positivist and interpretivist approaches. We reflect on these aspects, and this is both the
contribution of this paper and its limitation. We would suggest that areas for further research might include
an exploration of ways in which an information systems assessment can be structured so that it contributes
significantly to the successful implementation of information systems projects in developing countries, and
the factors that would shift assessment formats along the continuum between the positivist and
interpretivist approaches.
ENDNOTES
(*) An Essential Data Set (EDS) may be defined as a set of the most important data elements that should be
reported on by health service providers on a routine basis, with the aim of being able to generate indicators
that monitor the provision of health services in an integrated manner. It is usually determined by the
National level of the Health Department, in consultation with the service providers. Each province should
determine an EDS that includes the National EDS and additional data elements that are important for the
province. Each Hospital should develop an EDS that includes the Provincial dataset and additional data
elements that are relevant to the hospital management.
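As an illustration of this nesting, and assuming hypothetical data element names, the composition of the three EDS levels can be sketched as follows:

```python
# Illustrative sketch of the nested EDS levels: the provincial EDS contains the
# national EDS, and the hospital EDS contains the provincial EDS plus
# hospital-specific elements. Element names are hypothetical.
NATIONAL_EDS = {"inpatient admissions", "inpatient deaths", "outpatient headcount"}
PROVINCIAL_EXTRA = {"TB admissions"}
HOSPITAL_EXTRA = {"theatre cases"}

PROVINCIAL_EDS = NATIONAL_EDS | PROVINCIAL_EXTRA
HOSPITAL_EDS = PROVINCIAL_EDS | HOSPITAL_EXTRA

# Each level includes the one above it.
assert NATIONAL_EDS <= PROVINCIAL_EDS <= HOSPITAL_EDS
print(sorted(HOSPITAL_EDS))
```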
REFERENCES
Angell I., Smithson S. (1991). Information Systems Management: Opportunities and Risks, Macmillan,
Basingstoke.
Bell S., Wood-Harper T. (1990). “Cultural aspects of information systems development”, Bhatnagar S.C.
and N. Bjorn-Andersen (eds.), Information Technology in developing countries, North Holland, pp.
23-40.
Forster D., Cornford T. (1995). “Evaluation of Health Information Systems: Issues, Models and Case
Studies”, S.C. Bhatnagar and Odedra M. (eds.), Social Implications of Computers in Developing
Countries, McGraw-Hill.
Hawgood J., Land F. (1988). “A multivalent approach to information systems assessment”, in Information
Systems Assessment: Issues and Challenges, N.B. Andersen and G.B. Davis (eds.), pp. 103-124,
North Holland, Amsterdam.
Hirschheim R., Smithson S. (1999). “Evaluation of Information Systems: a Critical Assessment”, in
Beyond the IT Productivity Paradox, L.P. Willcocks and S. Lester (eds.), John Wiley & Sons.
LaFond A., Fields R. (2003). “The PRISM: Introducing an Analytical Framework for Understanding
Performance of Routine Health Information Systems in Developing Countries”, in Proceedings of
the 2nd RHINO Workshop, South Africa.
Lippeveld T., Sauerborn R., Bodart C. (2000). Design and implementation of health information systems,
World Health Organization, Geneva.
Littlejohns P., Wyatt J.C., Garvican L. (2003). “Evaluating Computerized Health Information Systems:
Hard Lessons Still to Be Learnt”, British Medical Journal, Vol. 326, April 2003, pp. 860-863.
Mason R., Swanson E. (1981). Measurement for Management Decision, Addison-Wesley, Reading, MA.
South African Demographic and Health Survey (SADHS) 1998. National Department of Health, Medical
Research Council, Macro International, Pretoria, Republic of South Africa.
Smithson S., Hirschheim R. (1998). “Analysing information systems evaluation: another look at an old
problem”, European Journal of Information Systems, Vol. 7, pp. 158-174.
Smithson S., Tsiavos P. (2004). “Re-constructing information systems evaluation”, in C. Avgerou, C.
Ciborra and F. Land (eds.), The Social Study of Information and Communication Technology,
Oxford University Press.
Symons V.J., Walsham G. (1991). “The evaluation of information systems: a critique”, in The economics of
Information Systems and Software, R. Veryard (ed.), pp. 71-88, Butterworth-Heinemann.
Walsham G., Symons V., Waema T. (1990). “Information system as a social system: implications for
developing countries”, Bhatnagar S.C. and N. Bjorn-Andersen (eds.), Information Technology in
developing countries, North Holland, pp.51-62.
Walsham G. (1995). “Interpretive case studies in IS research: nature and method”, European Journal of
Information Systems, Vol. 4, pp. 74-81.
Wilson R., Gouws M. (2003). “Assessing a Health Information System in Developing Countries”, in
Proceedings of the 2nd RHINO Workshop, South Africa.
Wilson M., Howcroft D. (2000). “The Politics of IS evaluation: A social shaping perspective”, in
Proceedings of 21st ICIS, pp. 94-103.