Healthcare Enterprise Process Development and
Integration
Kemafor Anyanwu, Amit Sheth, John Miller, Krys Kochut, Ketan Bhukhanwala
Abstract
Healthcare enterprises consist of many groups and organizations. Many of their activities
involve complex work processes that span groups, organizations, and enterprises. These
processes involve humans and automated activities that are carried out over or supported by
heterogeneous hardware and platforms, and involve a variety of software applications and
information systems. Management and automation of these processes play an increasingly
important role in improving the efficiency of the healthcare enterprise.
In this paper we look at four very different healthcare and medical applications that involve
investigative, clinical and administrative functions. Using these applications, we investigate
requirements for developing enterprise applications that involve the coordination of a variety of
tasks performed by humans, information systems and databases, and new and legacy
applications. This study leads to the need for two software technologies: enterprise application
integration and workflow management.
We also present the METEOR System that leverages the Java, CORBA and Web technologies to
provide the ability to develop enterprise applications requiring integration and workflow
management. Our success in prototyping these applications is also due in part to extensive
collaboration with our healthcare industry partners.
1. Introduction
The recent push for healthcare reform has caused healthcare organizations to focus on ways to
streamline their processes and deliver high quality care at low cost. This has precipitated the
reviewing and upgrading of clinical and administrative protocols and the increased use of
computer systems to improve efficiency of these processes. As a result, a wide variety of
computer software systems are currently being used in healthcare organizations such as
Application Systems, Document Systems and Workflow Management Systems. This paper
discusses the use of the METEOR System for managing a broad variety of healthcare processes.
Healthcare processes are very complex, involving both clinical and administrative tasks, large
volumes of data, and a large number of people, both patients and personnel. The care process for a
single patient may involve different types of processes that interact at points that are not always
predetermined. An out-patient clinic visit may involve administrative tasks performed by an
administrative assistant (like obtaining updated patient information, e.g., address, and checking patient
eligibility) and clinical tasks performed by a doctor or nurse (like obtaining blood pressure and
temperature, or administering a clinical procedure, e.g., an injection or examination). These processes also
include finance-related tasks, like patient billing, which will typically be performed by a
different group, such as the accounting department. For an in-patient hospital visit, this scenario may
involve many more tasks and a long-running process that lasts at least as long as the duration of
patient hospitalization. These processes are also very dynamic. For example, a care pathway for a
patient with disease condition ‘A’ may evolve to a more complex work process if another disease
condition ‘B’ is discovered in the course of treatment of ‘A’, because both problems may have
different care pathways but may still have to be treated concurrently.
Many workflow products adequately support relatively simple processes involving document and
forms processing, imaging, ad-hoc processes with synchronous and asynchronous activities, etc.
[Fischer 95, Silver 95]. However, they fall short in meeting the requirements of complex, large-scale, and mission-critical organizational processes [Sheth 96, Sheth and Joosten 96] such as
healthcare processes. Such processes span multiple organizations (e.g., the child immunization
tracking discussed in this paper; also see [Sheth et al. 96]), run over long periods of time [Dayal
et al. 91] (e.g., the clinical trial process), require more support for dynamic situations (e.g., the clinical
pathway process), or require a highly scalable system to support large numbers of workflow instances
(e.g., supporting automated processes in a genetics lab [Bonner et al. 96]). Many of the workflow
applications that support real-world processes also need to be integrated with (or reuse) existing
(legacy) information systems, need to run over distributed, autonomous, and heterogeneous
computing environments [Georgakopoulos et al. 95], require support for transactional features
(especially recovery) and error handling [Worah et al. 97], and need varying degrees of security
(including authorization, role-based access privileges, data privacy, and communication security).
Emerging and maturing infrastructure technologies for communication and distributed object
management (e.g., CORBA/IIOP, DCOM/ActiveX, Java, Notes, and Web) are making it feasible
to develop standards-based large scale, distributed application systems. These are, however,
exploited in a very limited way in the current generation of products.
Workflow management and enterprise application integration techniques developed in the METEOR
(Managing End-To-End OpeRations) project at the Large Scale Distributed Information Systems Lab,
University of Georgia (UGA-LSDIS) are intended to reliably support large-scale, complex workflow
applications in real-world, multi-enterprise heterogeneous computing environments. METEOR supports
advanced workflow management capabilities and provides integration capabilities needed to
deploy solutions in highly heterogeneous environments. An important aspect of this project is that the
technology and system development effort at UGA-LSDIS has occurred in close collaboration with its
industry partners. Key healthcare partners have been the Connecticut Healthcare Research and Education
Foundation (CHREF), Medical College of Georgia (MCG), and Advanced Technology Institute. The
collaborations have involved a detailed study of healthcare workflow application requirements, prototyping
of significant healthcare workflow applications with a follow-on trial, and evaluation of METEOR's
technology at the partners' locations, resulting in improvements that led to a commercial product
from Infocosm, Inc. (see: http://infocosm.com).
In section 2, we discuss current generation software support for healthcare processes and
highlight some shortcomings of these systems. Section 3 describes the METEOR System, and
section 4 discusses four healthcare workflow applications prototyped using the METEOR System
and their requirements. Section 5 summarizes the benefits of the METEOR approach; section 6
reviews future work and concludes the paper.
2. Supporting Healthcare Processes With Current Generation
Workflow Systems
Traditionally, healthcare processes have been managed using some limited forms of workflow.
Examples of these are clinical protocols like critical pathways and administrative protocols like
those for providing clinical referrals. However, these "protocols have remained limited in their
usefulness in part because developers have rarely incorporated both clinical and administrative
activities into one comprehensive care protocol. This lack of integration hinders the delivery of
care, as the effectiveness of protocols is often dependent on many administrative tasks being
properly executed at the correct time" [Chaiken 97]. Consequently, many healthcare
organizations are now turning to workflow management techniques to help with improving the
efficiency of their work processes.
This trend to computerize business processes has led to a large number of commercially available
workflow systems, some of which specifically target the healthcare sector. These systems offer
various levels of process support, functionality, and robustness. At one end of the spectrum, we
have customized workflow application systems that support human-oriented, vertical group
processes like patient registration. These processes typically involve relatively few tasks
which are executed in a predefined sequence, and require few roles within a single group of an
organization. In these types of applications, the process model is embedded in the application, and
customers may need to configure the application in order to tailor it to their specific process.
Example products are Intrabase's managER, with applications to help with patient
assessments, patient tracking, and guideline-driven patient prioritization in the emergency room,
and Tess Products's Tess System for managing patient billing, filing insurance claims, etc. [Intrabase].
These are not general-purpose systems; for example, they do not provide modeling tools for specifying your
own process. Another class of applications at this end of the spectrum focuses on supporting patient data
management functions. These applications are usually built on top of Document Management
Systems which are designed to capture, store, retrieve and manage unstructured information
objects such as text, spreadsheets, audio clips, images, video, files, multimedia, and compound
documents which may contain multiple object types. They are used for things like storage of
medical records and patient billing documents. They may also provide email-based routing of
such documents.
At the other end of the process support spectrum, we have workflow management systems, which
are more general-purpose systems that provide tools for process definition, workflow enactment,
and administration and monitoring of workflow processes. The process definition tool allows an
organization to explicitly model its specific business process, and the workflow enactment
system supports the automation of this process by coordinating both the manual tasks (performed by a
human) and the automated tasks (performed by software applications), and the data flow between the
tasks. Scheduling and routing of tasks and data to the appropriate person or resource at the right
time is done by the enactment system. Some of the workflow system vendors are FileNet, IBM,
InConcert, Software Ley, StaffWare, TeamWare Group, and W4.
Current generation workflow systems adequately support administrative and production
workflows, but are less adequate for some of the more horizontal healthcare processes which
have more complex requirements. These types of processes are dynamic; they involve different types
of tasks, including human-oriented tasks, legacy applications, and database transactions; and they cross
functional and organizational boundaries where the different groups have distributed and heterogeneous
computing environments. Support for such processes by these systems is limited, because many
systems have centralized client-server architectures, execute in single-platform environments, and
support only static processes. They also lack support for features like exception modeling and
handling.
Another very important requirement that these systems seldom provide is an integration
environment. It is clear that the different functional groups of a healthcare organization may
require different types of applications to support their processes. For example, integrating Picture
Archiving and Communication Systems (PACS) with hospital or radiology information systems will allow
radiologists to be presented with collateral patient information, like patient history, clinical
information, symptoms, reasons for presentation, and previous examination history, instead of just
a number, name, or image. Such information will greatly aid in the interpretation of images
[DeJesus 98]. In addition, many legacy applications and workflow systems may already be in use
in these organizations. This lack of interoperability and integration support
is due to the fact that many current generation workflow systems are based on closed,
proprietary architectures, and do not support interoperability and integration standards such as
CORBA. Some products do provide APIs for integrating their products with existing systems;
however, this requires those existing systems to incorporate these API calls in their programs.
This, of course, is difficult to do in some cases, such as with legacy systems, where the programs
would have to be modified to incorporate these API calls.
The METEOR system was developed specifically to support enterprise-wide processes involving
workflow support and enterprise integration. Supporting mission-critical, enterprise-wide process
requirements was exactly the basis for the METEOR architecture.
3. The METEOR System
The METEOR (Managing End to End OpeRations) system is intended to support complex
workflow applications which involve heterogeneous and legacy information systems and
hardware/software environments, span multiple organizations, or which require support for more
dynamic processes, error handling and recovery. Collaboration with healthcare industry partners
is integral to our work, and involves real world application prototyping and METEOR technology
evaluation as part of trials conducted by the healthcare and industry partners. METEOR includes
all of the necessary components to design, build, deploy, run and monitor workflow applications.
METEOR provides four main services:

- METEOR Builder Service: allows users to graphically specify their workflow process;

- METEOR Enactment Services: provides two enactment services that support the execution
of workflows, ORBWork and WebWork. Both ORBWork and WebWork use a fully
distributed open architecture, but WebWork [Miller et al.] is a comparatively lightweight
implementation that is well suited for traditional workflow, help-desk, and data exchange
applications. ORBWork [Kochut et al.] is better suited for more demanding, mission-critical
enterprise applications requiring high scalability, robustness, and dynamic modifications;

- METEOR Repository Service: provides tools for storage and retrieval of workflow process
definitions that may be used in the Builder tool for rapid application development, and is also
available to the enactment services to provide the necessary information about a workflow
application to be enacted;

- METEOR Manager Services: support monitoring and administering workflow instances as
well as configuration and installation of the enactment services.
3.1 METEOR Architecture:
The main components in the execution environment are the Workflow Scheduler, Task
Managers and Tasks (Figure 1). To establish global control as well as facilitate recovery and
monitoring, the task managers communicate with a scheduler. It is possible for the scheduler to
be either centralized or distributed, or even some hybrid between the two [Wan 95, Miller et al.
96].
[Figure: design-time components (Designer, Workflow Model, Repository, Monitor) feed automatic code generation, which produces the runtime task managers and tasks communicating over Web/CORBA.]
Figure 1: METEOR System Architecture
To facilitate monitoring and control of tasks, each task is modeled using a well-defined task
structure at design time [Attie et al. 93, RS95, Krishnakumar et al. 95]. A task structure
simply identifies a set of states and the permissible transitions between those states. Several
different task structures have been developed: transactional, non-transactional, compound, and two-phase commit (2PC) [Krishnakumar et al. 95, Wan 95].
To more effectively manage complexity, workflows may be composed hierarchically. The top
level is the overall workflow, below this are sub-workflows (or compound tasks), which
themselves may be made up of sub-workflows or simple tasks. Coordination between tasks is
accomplished by specifying intertask dependencies using enable clauses. An enable clause may
have any number of predicates. Each predicate is either a task-state vector or a boolean
expression. A task-state vector contains the state of all the tasks that are incident on the current
task. When one task leaves a given state it may enable another task to enter its next state.
The next sections describe the METEOR components in detail.
3.1.1 METEOR Builder Service:
The METEOR Builder service is used for process modeling and workflow design. The process
modeling mode focuses on understanding the structure of the business process, and does not
consider any details of implementing the process. At the workflow design stage, details that will
enable the implementation of the process are added to the process model.
The METEOR Builder Service consists of a number of components that are used to graphically
design and specify a workflow. Its three main components are used to specify, respectively, the
entire map of the workflow, which expresses the ordering of tasks in the workflow and the
dependencies among them; the data objects manipulated by the workflow; and the details of task
invocation. The task design component provides interfaces to external task development tools
(e.g., Microsoft's FrontPage to design the interface of a user task, or a rapid application
development tool). This service allows modeling of complex workflows at a level that abstracts
the user from the underlying details of the infrastructure or the runtime environment, while
imposing very few restrictions on the workflow specification. The workflow specification created
using this service is stored by the METEOR Repository service and includes all the
predecessor-successor dependencies between the tasks, the data objects that are passed among the
different tasks, the definitions of those data objects, and the details of task invocation. The
specification may be formatted to be compliant with the Workflow Process Definition Language
(WPDL) of the Workflow Management Coalition [WfMC]. This service assumes no particular
implementation of the workflow enactment service (runtime system). Its independence from the
runtime supports separating the workflow definition from the enactment service on which it will
ultimately be installed and used. Detailed information concerning this service (earlier referred to
as METEOR Designer, or MTDes) is given in [Lin 97, Zheng 97]. The data design component
allows the user to specify the data objects that are used in the workflow, as well as the
relationships (e.g., inheritance, aggregation) between them.
A new version of the METEOR Builder service is currently being implemented using Java. This
will allow users to interact with process definitions at runtime and specify dynamic changes to the
specification if needed. The new Builder service also outputs an XML-based representation of
process definitions. It will also have some additional features, like support for exception
modeling.
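As a rough illustration of what such an XML representation might look like, consider the fragment below. This is our own sketch; the element and attribute names are hypothetical and not the Builder's actual output schema:

```xml
<!-- Hypothetical sketch of an XML process definition; element and
     attribute names are illustrative, not the actual Builder output. -->
<workflow name="OutPatientVisit">
  <task name="RegisterPatient" type="human" role="AdminAssistant"/>
  <task name="CheckEligibility" type="automated"
        application="EligibilityService"/>
  <task name="Examine" type="human" role="Physician"/>
  <transition from="RegisterPatient" to="CheckEligibility"/>
  <transition from="CheckEligibility" to="Examine"
              condition="eligible == true"/>
  <dataObject name="PatientInfo"
              passedBetween="RegisterPatient,Examine"/>
</workflow>
```

Such a representation would capture the same predecessor-successor dependencies and data objects that the current specification format records.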
3.1.2 METEOR Enactment Services
The task of the enactment service is to provide an execution environment for processing
workflow instances. At present, METEOR provides two different enactment services: ORBWork
and WebWork. Each of the two enactment services has a suitable code generator that can be used
to build workflow applications from the workflow specifications generated by the builder
service or those stored in the repository. In the case of ORBWork, the code generator outputs
specifications for task schedulers, including task routing information, task invocation details, data
object access information, user interface templates, and other necessary data. The code generator
also outputs the code necessary to maintain and manipulate data objects created by the data
designer. The task invocation details are used to create the corresponding “wrapper” code for
incorporating legacy applications with relative ease. Details of code generation for WebWork are
presented in [Miller et al. 98].
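The "wrapper" idea can be sketched as follows: the generated code adapts a legacy application to a uniform task interface that the enactment service can invoke. The interface, class, and method names below are our own illustration, not the actual generated code, and the legacy call is stubbed:

```java
import java.util.Map;

// Sketch of the "wrapper" idea: generated code adapts a legacy
// application to a uniform task interface that the enactment service
// can invoke. All names here are illustrative, not generated code.
public class WrapperSketch {

    // Uniform interface every task presents to its task manager.
    interface WorkflowTask {
        Map<String, String> execute(Map<String, String> inputs) throws Exception;
    }

    // A generated wrapper: translates workflow data objects into the
    // invocation the legacy application expects, and parses the
    // legacy output back into workflow data objects.
    static class LegacyBillingWrapper implements WorkflowTask {
        @Override
        public Map<String, String> execute(Map<String, String> inputs) {
            // A real wrapper might build a command line or a
            // terminal-emulation session for the legacy system.
            String legacyRequest = "BILL " + inputs.get("patientId")
                                 + " " + inputs.get("procedureCode");
            String legacyReply = invokeLegacySystem(legacyRequest);
            return Map.of("invoiceId", legacyReply.replace("OK ", ""));
        }

        // Stubbed here; a real wrapper would call the external program.
        String invokeLegacySystem(String request) {
            return "OK INV-" + Math.abs(request.hashCode() % 1000);
        }
    }

    public static void main(String[] args) throws Exception {
        WorkflowTask task = new LegacyBillingWrapper();
        Map<String, String> out =
            task.execute(Map.of("patientId", "P-001", "procedureCode", "81025"));
        System.out.println(out.get("invoiceId"));
    }
}
```

Because the task manager sees only the uniform interface, the legacy program itself needs no modification, which is the point of wrapper-based integration.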
ORBWork addresses a variety of shortcomings found in today's workflow systems by
supporting the following capabilities:

- provides an enactment system capable of supporting dynamic workflows;

- allows significant scalability of the enactment service;

- supports execution over distributed and heterogeneous computing environments within and
across enterprises;

- provides the capability of utilizing or integrating with new and legacy enterprise applications
and databases [1] in the context of processes;

- utilizes open standards, such as CORBA, due to its emergence as an infrastructure of choice
for developing distributed, object-based, interoperable software;

- utilizes Java for portability and, with HTTP, for network accessibility;

- supports existing and emerging workflow interoperability standards, such as JFLOW
[JFLOW] and SWAP [SWAP]; and

- provides standard Web browser based user interfaces, both for the workflow end-users/participants and for administrators of the enactment service and workflows.
One of the design goals of ORBWork has been to provide an enactment framework suitable for
supporting dynamic and adaptive workflows. Currently, ORBWork does not concern itself with
the issues of correctness of the dynamically introduced changes, as those are handled outside of
the enactment system. However, the architecture of the enactment system has been designed to
easily support dynamic changes and to serve as a platform for conducting research in the areas of
dynamic and collaborative workflows. The use of a fully distributed scheduler in ORBWork
requires that the scheduling information be provided to all participating task schedulers at
runtime. The scheduling information includes details of the transitions leading into and out of the
schedulers and also the list of data objects to be created.

[1] Data integration capability is supported by integrating METEOR's enactment services with I-Kinetics's
DataBroker/OpenJDBC, a CORBA and Java based middleware for accessing heterogeneous and legacy
data sources.
ORBWork’s scheduler is composed of a number of small schedulers, each of which is responsible
for controlling the flow of workflow instances through a single task. The individual schedulers
are called task schedulers. In this way, ORBWork implements a fully distributed scheduler, in that
all of the scheduling functions are spread among the participating, cooperating task schedulers.
Each task scheduler controls the scheduling of the associated task for all of the workflow
instances “flowing” through the task. Each task scheduler maintains the necessary task routing
information and task invocation details.
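A fully distributed scheduler of this kind can be sketched as a set of peer task schedulers, each holding only its own routing information. The following is a simplified, single-process illustration; the class and method names are ours, not ORBWork's, and real task schedulers would communicate over CORBA rather than by direct method calls:

```java
import java.util.ArrayList;
import java.util.List;

// Simplified, single-process sketch of a fully distributed scheduler:
// each task scheduler knows only its own routing information and
// forwards workflow instances to its successors. No central scheduler
// is involved. Names are illustrative, not ORBWork's API.
public class DistributedSchedulerSketch {

    static class TaskScheduler {
        final String taskName;
        final List<TaskScheduler> successors = new ArrayList<>();
        final List<String> processedInstances = new ArrayList<>();

        TaskScheduler(String taskName) { this.taskName = taskName; }

        // Accept a workflow instance "flowing" through this task,
        // perform the task, then route the instance onward.
        void accept(String instanceId) {
            processedInstances.add(instanceId);
            for (TaskScheduler next : successors) {
                next.accept(instanceId);
            }
        }
    }

    public static void main(String[] args) {
        TaskScheduler register = new TaskScheduler("RegisterPatient");
        TaskScheduler examine  = new TaskScheduler("Examine");
        TaskScheduler bill     = new TaskScheduler("BillPatient");
        // Each scheduler holds only its own piece of the routing map.
        register.successors.add(examine);
        examine.successors.add(bill);

        register.accept("wf-instance-42");
        System.out.println(bill.processedInstances);
    }
}
```

Because scheduling knowledge is partitioned this way, each scheduler can be placed on a different host, which is what gives the architecture its scalability and its lack of a single point of failure.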
At startup each task scheduler requests scheduling data and upon receiving it configures itself
accordingly. Furthermore, the configuration of an already deployed workflow is not fixed and can
be changed dynamically. At any given time, a workflow designer, or in some permitted cases an
end-user, may decide to alter the workflow. The introduced modifications are then converted into
the corresponding changes in the specification files and stored in the repository. A “reload
specification” signal is then sent to the affected task schedulers. As a result, the schedulers reload
their specifications and update their configurations accordingly, effectively implementing the
change to the existing workflow.
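The "reload specification" mechanism can be sketched like this, with the repository and the signal reduced to plain method calls. All names are hypothetical:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the "reload specification" signal: a task scheduler
// re-reads its routing specification from the repository and
// reconfigures itself while deployed. Names are illustrative only.
public class ReloadSketch {

    // Stand-in for the repository service: task name -> successor task.
    static class Repository {
        final Map<String, String> successorOf = new HashMap<>();
    }

    static class TaskScheduler {
        final String taskName;
        final Repository repository;
        String successor;   // current routing configuration

        TaskScheduler(String taskName, Repository repository) {
            this.taskName = taskName;
            this.repository = repository;
            reloadSpecification();   // request scheduling data at startup
        }

        // Invoked at startup and whenever a "reload specification"
        // signal arrives after the definition has been edited.
        void reloadSpecification() {
            successor = repository.successorOf.get(taskName);
        }
    }

    public static void main(String[] args) {
        Repository repo = new Repository();
        repo.successorOf.put("CheckEligibility", "Examine");
        TaskScheduler sched = new TaskScheduler("CheckEligibility", repo);
        System.out.println(sched.successor);

        // A designer alters the workflow: route through triage first.
        repo.successorOf.put("CheckEligibility", "Triage");
        sched.reloadSpecification();   // the signal arrives
        System.out.println(sched.successor);
    }
}
```

Only the affected scheduler reconfigures; workflow instances elsewhere in the system continue to run, which is what makes the change dynamic.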
The fully distributed architecture of ORBWork yields significant benefits in the area of scalability
already mentioned: all of the workflow components of a designed and deployed workflow may be
distributed to different hosts. In practice, however, it may be sufficient to deploy groups of less
frequently used task scheduler/manager programs on the same host. At the same time, heavily
utilized tasks may be spread out across a number of available workflow hosts, allowing for
greater load sharing.
The features of ORBWork designed to handle dynamic workflows are also very useful in
supporting scalability. As load increases, an ORBWork administrator may elect to move a
portion of the currently running workflow to a host (or hosts) that becomes available for use in the
workflow. The migration can be performed while the deployed workflow is running: the workflow
administrator may simply suspend and shut down a given task scheduler and transfer it to a
new host. Because of the way task schedulers locate their successors, the predecessors of the
moved task scheduler will not notice the changed location of the task.
The distributed design of ORBWork offers no single point of failure for an ongoing workflow
instance. Since the individual task schedulers cooperate in the scheduling of workflow instances,
a failure of a single scheduler does not bring the whole system down, and other existing workflow
instances may continue execution.
The error handling and recovery framework for ORBWork (whose initial design is described in
[Worah et al. 97]) has also been defined in a scalable manner: all errors are organized into error
class hierarchies, the recovery mechanism is partitioned across local hosts, errors and failures are
encapsulated and handled as close to the point of origination as possible, and dependence on
low-level, operating-system-specific functionality of the local processing entities is minimized.
Details of ORBWork can be found in [Kochut et al. 98].
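The error-class-hierarchy idea can be sketched with ordinary Java exceptions, handled as close to their point of origination as possible. This is a sketch under our own names, not the actual ORBWork framework:

```java
// Sketch of organizing workflow errors into a class hierarchy so that
// recovery can be partitioned and errors handled close to their point
// of origination. Exception and method names are illustrative only.
public class ErrorHierarchySketch {

    static class WorkflowError extends Exception {
        WorkflowError(String msg) { super(msg); }
    }
    // Task-level errors: recoverable near the task itself.
    static class TaskError extends WorkflowError {
        TaskError(String msg) { super(msg); }
    }
    // Host-level errors would escalate to a host-wide handler instead
    // of being handled by an individual task manager.
    static class HostError extends WorkflowError {
        HostError(String msg) { super(msg); }
    }

    // A task manager catches task-level errors locally (e.g. by
    // retrying); errors it cannot classify would escalate upward.
    static String runWithLocalRecovery(boolean taskFails) {
        try {
            if (taskFails) throw new TaskError("legacy application timed out");
            return "completed";
        } catch (TaskError e) {
            return "retried locally: " + e.getMessage();
        }
    }

    public static void main(String[] args) {
        System.out.println(runWithLocalRecovery(false));
        System.out.println(runWithLocalRecovery(true));
    }
}
```

Handling TaskError where it arises keeps recovery local to the host running the task, while the shared hierarchy still lets host-level handlers treat all workflow errors uniformly when escalation is needed.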
3.1.3 METEOR’s Repository Service:
The METEOR Repository Service is responsible for maintaining information about workflow
definitions and associated workflow applications. The graphical tools in the workflow builder
service communicate with the repository service and retrieve, update, and store workflow
definitions. The tools are capable of browsing the contents of the repository and incorporating
fragments (either sub-workflows or individual tasks) of existing workflow definitions into the one
being currently created. The repository service is also available to the enactment service (see
section 3.1.2) and provides the necessary information about a workflow application to be started.
The current implementation of the repository service implements the Interface 1 API, as specified
by WfMC [WfMC]. A detailed description of the first design and implementation of this service
is presented in [Yong 98].
3.1.4 METEOR’s Management Services:
The tools provided by these services are used for administering and monitoring workflows. The
METEOR Monitor provides a tool for querying for and viewing the state of workflow instances.
The Administration service is used by the workflow administrator to perform management functions
like installing and configuring workflow instances, load balancing, modifying a workflow
process while it is still executing, etc.
4. Significant Healthcare Applications Prototyped Using
METEOR
The healthcare sector has a number of different types of organizations. By healthcare sector we
mean both hospital-based and non-hospital-based organizations. Examples of non-hospital-based
organizations are pharmaceutical companies, laboratories, etc. All these organizations have
different types of processes and different requirements. Table 1 gives a summary of the different
types of processes, the applications that support them, and their requirements.
Hospital Based, Clinical
  Example Applications: Charting, Scheduling, Discharge Summaries, Reports, Ordering Systems
  Requirements: Integration with patient data management software; Management of human and
  automated activities; Exception handling; Ease of use; Support for dynamic changes; Security;
  Role-based authorization

Hospital Based, Non-Clinical (Administrative and Financial)
  Example Applications: Patient Management (billing, accounts receivable, claims filing)
  Requirements: Data management and integration; Application integration; Support for
  heterogeneous and distributed environments; Security; Support for standards (EDI, HL7);
  Exception handling

Non-Hospital Based (radiology, pharmacy), Laboratory
  Example Applications: Laboratory Information Systems
  Requirements: Scalability; Exception handling; Management of complex data types;
  Transactional workflows; Integration with other systems; Support for heterogeneous and
  distributed (HAD) environments

Non-Hospital Based, Pharmaceutical Industry
  Example Applications: Clinical Drug Trial Management
  Requirements: Distributed environment; Scalability; Exception handling

Table 1: Healthcare Processes and Applications
The rest of this section describes four applications that we have prototyped using METEOR.
These applications support different types of processes, varying in scale (i.e., number of tasks and
roles) and in requirements, ranging from a single server to multiple distributed web servers, workflow
execution across different workflow system installations, and integration of legacy applications
and databases. The first three applications, Neonatal Clinical Pathways, GeneFlow, and Eligibility
Referral, are briefly sketched, highlighting requirements and implementation strategies.
The fourth application, Immunization Tracking, is a more comprehensive application and is
discussed in more detail. (The figures in this section are screen shots from the METEOR Builder
Service.)
4.1 Neonatal Clinical Pathways
A low birth-weight baby with underdeveloped organs is normally considered to be at risk for a
number of medical problems. To monitor their development, the babies are taken through several
clinical pathways. Three of the major pathways are the Head Ultrasound, Metabolic, and
Immunization pathways. When a human-dependent approach is used for tracking patients, errors
occur and some patients fall through the cracks because the necessary tests are not performed on
time. To automate the scheduling of procedures at appropriate times and eliminate such errors, a
METEOR workflow application was developed for the Neonatal Intensive Care Unit (NICU) at
the Medical College of Georgia.
Figure 2 shows the graphical representation of the Head Ultrasound pathway. Here, an initial
ultrasound is done when the baby arrives at the NICU, and is repeated at specified intervals over a
period of weeks. The duration depends on whether test results indicate an improvement in the
baby’s condition. The application issues reminders for scheduling tests, retrieving test results, and
updating patient records to the nurse responsible for tracking this data.
Figure 2: Head Ultrasound Pathway
Requirements
The workflow process involves a single organization, three roles, and a single database. Some of
the requirements for this process, like timing or temporal constraints, are not supported by
current generation workflow products. Advanced features like support for heterogeneous and
distributed environments and integration of legacy applications were not requirements here.
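The temporal-constraint requirement can be sketched as interval-based reminder scheduling: given the date of the initial test, compute when repeat tests are due so reminders can be issued. The 7-day interval and the number of repeats below are illustrative, not the actual clinical protocol:

```java
import java.time.LocalDate;
import java.util.ArrayList;
import java.util.List;

// Sketch of the temporal-constraint requirement: given an initial
// ultrasound date, compute the dates on which repeat tests are due
// so that reminders can be issued. The 7-day interval and 4 repeats
// are illustrative values, not the actual clinical protocol.
public class PathwayReminderSketch {

    static List<LocalDate> repeatTestDates(LocalDate firstTest,
                                           int intervalDays,
                                           int repeats) {
        List<LocalDate> due = new ArrayList<>();
        LocalDate next = firstTest;
        for (int i = 0; i < repeats; i++) {
            next = next.plusDays(intervalDays);
            due.add(next);
        }
        return due;
    }

    public static void main(String[] args) {
        List<LocalDate> schedule =
            repeatTestDates(LocalDate.of(2024, 1, 1), 7, 4);
        System.out.println(schedule);
        // [2024-01-08, 2024-01-15, 2024-01-22, 2024-01-29]
    }
}
```

In the deployed application, the workflow's timing constraints drive when the reminder task is enabled; the point of automating this is precisely that no nurse has to compute or remember these dates by hand.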
Implementation:
This application was developed using the WebWork enactment service of the METEOR system.
It is a web-enabled client-server application that uses a single web server and an Oracle database.
The communication infrastructure between the different tasks within the workflow uses the web,
specifically CGI scripts. The end-user interface is a web browser.
[Figure: nurse, nurse coordinator, and neonatologist clients connected to a single web server.]
Figure 3 : Implementation for NeoNatal Clinical Pathways
4.2 GeneFlow
GeneFlow is being developed specifically for the needs of the Fungal Genome Initiative. This is a
multi-institution consortium of research groups which is mapping and sequencing the genomes of
important fungal organisms. GeneFlow is a workflow application to handle the needs of data
analysis for genome sequencing. Raw "shotgun" DNA sequence data consists of short (< 1500
characters) overlapping DNA sequences. This data comes from automatic sequencing machines.
From this raw data the short overlapping shotgun sequences must be synthesized into larger
contiguous sequences of whole chromosomes. The sequence from an entire chromosome may be
many millions of characters long. These larger sequences are searched for probable genes and
other chromosomal features. Probable genes are then analyzed to determine a probable function
as well as to identify smaller scale features within the gene. These results then must be
electronically published. The end result of all this is a completely annotated genome that is public
domain.
Figure 4: Workflow Design for GeneFlow
METEOR tools are being used to wrap genome data analysis applications together in a "genome
data assembly line".
Requirements
In this application, multiple heterogeneous servers (SGI, Solaris) are used with a single
database and a single workflow system. The process requires many human and automated tasks,
support for legacy applications, and Web-based access for geographically distributed users.
Figure 5: System Architecture for GeneFlow (users in different labs worldwide accessing legacy applications on Solaris and SGI servers in Georgia)
Implementation:
This application was also developed using the WebWork enactment service. Legacy applications on
both the Solaris and SGI servers were integrated by writing C++ wrappers for the legacy tasks.
These wrappers were then easily integrated with the WebWork enactment service.
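As a rough illustration of such a wrapper (the actual GeneFlow wrappers were written in C++; this sketch uses Java for consistency with our other examples, and the class name is hypothetical), a legacy command-line tool can be turned into a workflow task by launching it, capturing its output, and mapping its exit status to a task outcome:

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;

// Illustrative sketch of a legacy-task wrapper: it launches an existing
// command-line analysis program, captures its output, and maps the exit
// status to a workflow task outcome.
public class LegacyTaskWrapper {
    public enum Outcome { DONE, FAILED }

    private final String[] command;

    public LegacyTaskWrapper(String... command) { this.command = command; }

    // Run the legacy program; return DONE on exit status 0, FAILED otherwise.
    public Outcome execute(StringBuilder capturedOutput) {
        try {
            Process p = new ProcessBuilder(command)
                    .redirectErrorStream(true).start();
            try (BufferedReader r = new BufferedReader(
                    new InputStreamReader(p.getInputStream()))) {
                String line;
                while ((line = r.readLine()) != null) {
                    capturedOutput.append(line).append('\n');
                }
            }
            return p.waitFor() == 0 ? Outcome.DONE : Outcome.FAILED;
        } catch (Exception e) {
            return Outcome.FAILED;
        }
    }
}
```

Because the wrapper exposes only inputs, outputs, and an outcome, the enactment service can schedule a wrapped legacy program exactly like any other task.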
4.3 Eligibility Referral
The Eligibility Referral Application was developed for the Connecticut Healthcare
Research and Education Foundation (CHREF) to support the process of transferring a patient
from one hospital to another. It involves three organizations: two hospitals and an insurance
company.
The design depicted in Figure 6 shows a consolidated workflow including both the sending and
the receiving hospitals.
Figure 6: Eligibility Referral Workflow
The workflow starts with the sending hospital determining the right placement for the patient
who needs to be transferred. Once this is done, the next few tasks involve determining the
eligibility information, obtaining the necessary payment information, and getting the
physician's signature for the specified patient. The final step in the sending hospital's
workflow is to send a request to, and receive an acknowledgement from, the receiving hospital,
indicating that the patient will be accepted there. Once this is done, the sending hospital can
update its database, and the receiving hospital takes over from there. The receiving hospital
also has its own workflow for processing transferred patients.
A workflow instance spans the two hospitals with interaction with the insurance company
through EDI transactions.
Figure 7: Implementation Testbed for Eligibility Referral (sending organization, receiving organization, and insurance company)
Requirements
This application requires support for distributed and heterogeneous environments, workflow
execution across multiple installations of the METEOR system, multiple databases and web
servers, and support for heterogeneous tasks (human, automated, and transactional tasks with
EDI transactions).
Implementation:
The sending and receiving hospitals have separate METEOR systems installed, and a single
workflow instance executes across both hospitals. Each hospital hosts its own web server and
database; for example, the sending hospital needs patient data to verify eligibility
information and to obtain the physician's signature. The same applies to the receiving hospital.
4.4 State-Wide Immunization Tracking Application:
The Immunization Tracking Application has the most advanced requirements of the four examples
discussed. With managed healthcare coming of age, performance monitoring such as compulsory
performance reporting, immunization tracking, childbirth reporting, etc., has become an
important rating criterion.
rating criteria. In fact, the first item listed under Quality of Care in the Health Plan Employer
Data and Information Set (HEDIS) [HEDIS] is childhood immunization rate. Healthcare
resources must be used efficiently to lower costs while improving the quality of care. To
accomplish these goals, processes in the managed healthcare industry need to be computerized
and automated.
Figure 8 shows the scope of the application in schematic form. The process
includes on-line interactions for the workflow application between CHREF (as the central
location), healthcare providers (Hospitals, Clinics, home healthcare providers) and user
organizations (State Department of Health (SDOH), Schools, Department of Social Services
(DSS)).
[Figure 8 depicts the clinical subsystem, which generates alerts to identify patients' needs, contraindications to caution providers, and reminders to parents, and the tracking subsystem, through which hospitals and clinics update central databases after encounters, health providers obtain up-to-date clinical and eligibility information, HMOs track performance and update patients' eligibility data, case workers reach out to the population, and health agencies use generated reports to track the population's needs, with SDOH and CHREF maintaining the databases and supporting EDI transactions.]
Figure 8: Schematic for Immunization Tracking Application
The system can best be explained in terms of three components: the databases, the Clinical
Subsystem, and the Tracking Subsystem.
Databases:
The workflow application system supports the maintenance of the following central databases:
• The Master Patient Index (MPI) records the personal information and medical history of patients.
• The Master Encounter Index (MEI) records brief information pertaining to each encounter of each patient at any hospital or clinic in the state.
• The Immunization Database (IMM) records information regarding each immunization performed for a person in the MPI.
• The Eligibility databases (ELG) provide an insurance company's patient eligibility data.
• The Detailed Encounter databases (ENC) provide detailed encounter information as well as relevant data for each patient served by a particular provider.
The current application design and implementation calls for CHREF to manage the MPI, MEI
and the IMM centralized databases for the state of Connecticut. The ELG database containing
data provided by insurance companies is used in this application for EDI based eligibility
verification using the ANSI X12 standard. In particular, Web-based access is used to submit
eligibility inquiries using ANSI 270, and responses are received using ANSI 271 “transactions”.
Each participating provider organization (hospital or clinic) has its own ENC database. When a
structured database is used at CHREF, we promote the use of an ODBC-compliant DBMS. The
DBMSs (Illustra and Oracle) used in this application are ODBC compliant.
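To give a flavor of the X12 mechanics used for eligibility verification (greatly simplified; a real 270 requires full ISA/GS/ST envelopes and many more segments, and this builder is our illustration, not part of the application), segments are built from '*'-separated elements and terminated by '~':

```java
import java.util.ArrayList;
import java.util.List;

// Greatly simplified sketch of ANSI X12 segment construction, illustrating
// the delimiter structure underlying 270 eligibility inquiries: elements
// separated by '*', segments terminated by '~'. This is NOT a complete or
// valid 270 transaction.
public class X12SegmentBuilder {
    private final List<String> segments = new ArrayList<>();

    // Append one segment built from its segment ID and data elements.
    public X12SegmentBuilder add(String... elements) {
        segments.add(String.join("*", elements));
        return this;
    }

    // Serialize all segments with the '~' segment terminator.
    public String build() {
        return String.join("~", segments) + "~";
    }
}
```

For example, `new X12SegmentBuilder().add("ST", "270", "0001").add("NM1", "IL", "1", "DOE", "JANE").build()` produces the string `ST*270*0001~NM1*IL*1*DOE*JANE~`.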
The Clinical SubSystem
The clinical subsystem has been designed to provide the following features:
• Roles for Admit Clerk, Triage Nurse, Nurse Practitioner and Doctor,
• Worklists for streamlining hospital and clinic operations,
• Automatic generation of Medical Alerts (e.g., delinquent immunizations) and Insurance Eligibility Verification by the Admit Clerk, and
• Generation of contraindications for patients visiting a hospital or clinic to caution medical personnel regarding procedures that may be performed on the patient.
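The delinquent-immunization alert mentioned above can be sketched as a comparison of a child's recorded shots against a vaccination schedule. The schedule format here is hypothetical and greatly simplified (real schedules involve multi-dose series and contraindications):

```java
import java.time.LocalDate;
import java.time.Period;
import java.util.List;

// Illustrative sketch of a "delinquent immunization" alert: compare a
// child's recorded shots against a simplified schedule and flag vaccines
// that are past due. Each vaccine here has a single due age in months.
public class ImmunizationAlerts {

    public record ScheduleEntry(String vaccine, int dueAgeMonths) {}

    // Vaccines whose due age has passed and that are absent from the record.
    public static List<String> delinquent(LocalDate birthDate, LocalDate today,
                                          List<ScheduleEntry> schedule,
                                          List<String> received) {
        int ageMonths = (int) Period.between(birthDate, today).toTotalMonths();
        return schedule.stream()
                .filter(e -> ageMonths >= e.dueAgeMonths())   // already due
                .map(ScheduleEntry::vaccine)
                .filter(v -> !received.contains(v))           // not on record
                .toList();
    }
}
```

In the actual system, such an alert would be generated for the Admit Clerk when the patient's MPI and IMM records are retrieved at check-in.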
The Tracking SubSystem
Health agencies can use the data available to generate reports (for submissions to authorities like
the DSS, State Government, etc.), and for determining the health needs of the state. More
importantly, immunization tracking involves reminding parents and guardians about shots that are
due or overdue and informing field workers about children who have not been receiving their
immunizations.
Figure 9: Provider's Interface for the Immunization Recommendation Task (showing the list of overdue vaccinations, a link to contraindication information obtained from the Internet, and a clinical update to "administer vaccination")
Requirements of the Application
The application was developed under constraints imposed by end-user and system requirements.
Some of the important requirements for this application, as determined by CHREF, include:
• Support for a distributed client/server-based architecture in a heterogeneous computing environment. At the level of any user of the system, this distribution should be transparent.
• Support for inter- and intra-enterprise coordination of tasks.
• Provision of a standard, user-friendly interface for all users of the system.
• Support for a variety of tasks: transactional, non-transactional, user, and application.
• Capability for using existing DBMS infrastructure across organizations.
• Low system cost for the providers and user organizations.
• Ease of modification (re-design), scalability, extensibility, and fast design-to-implementation.
• Use of standards, including EDI for interactions between autonomous organizations where possible.
• Security authorization for users and secure communication (required as patient data is typically confidential).
• Web-based user interfaces for healthcare personnel, such as that shown in Figure 9.
Figure 10: Implementation Testbed for the Immunization Tracking Application (CHREF and SDOH servers running Illustra and Oracle 7 DBMSs on SunSparc/Solaris and Pentium/Windows NT hosts holding the MPI, MEI, Immunization, Insurance Eligibility, and Encounter databases, connected via CORBA (ORBeline), HTTP, and EDI to hospital information systems and clinic office practice management systems, with roles for Administrator, Case Worker, Admit Clerk, Triage Nurse, Doctor/NP, and Maternity Ward)
Based on these requirements, we have created a system testbed on which the application is
implemented (Figure 10). This includes a heterogeneous communication infrastructure (multiple
Web servers and CORBA) and multiple databases. The workflow involves 13 tasks (e.g., tasks for
the admit clerk, triage nurse, eligibility check, etc.), heterogeneous computers in Georgia and
Connecticut (Web and DBMS servers on Sparc20/Solaris 2.4, Sparc5/Solaris 2.4, and personal
computers running Windows NT, as well as several PC/Windows 95 clients), and five databases
on two DBMSs (Illustra, Oracle).
5 Benefits of the METEOR Approach:
5.1 Advanced capabilities and unique features
Example Application | Capability | Benefit
All | Graphical building of complex applications | Ability to visualize all application components; reduced need for expert developers; rapid deployment
All except Clinical Pathways | Support for heterogeneous, distributed computing environment; open-systems architecture and use of standards | Seamless deployment over networked heterogeneous (Solaris and NT) server platforms; ease of integration of legacy/existing applications; appeal to customers preferring non-proprietary and multi-vendor solutions
All | Automatic code generation | Significantly reduced coding and corresponding savings in development cost; reduced need for expert developers; rapid deployment
All | Integration of human and automated activities | Natural modeling of complex business activities/processes
All except Clinical Pathways & GeneFlow | Fully distributed scheduling | High scalability and performance; minimal single point of failure
None | Dynamic changes | Rapidly adapt to changes in business processes
All | Traditional security | Support for roles, security on open internetworking
Eligibility Referral & IZT | Database middleware support (either JDBC or I-Kinetics' OpenJDBC) | Simplified access to a variety of heterogeneous relational databases on servers and mainframes
None | Workflow interoperability standards | Integration with other vendors' products; interoperability in multi-vendor and inter-enterprise applications such as e-commerce
Eligibility Referral & IZT | Transaction support, exception handling and automatic recovery, survivability | 7x24 operation, mission-critical processes
All | Different levels of security (roles, authorization, network) | Flexible support for a broad range of security policies
None | Component repository | XML-based reusable application components for rapid development of new applications
Table 2: Benefits of the METEOR Approach
METEOR offers unique solutions to realize the promise of recent advances in distributed
computing infrastructure, middleware, and Web technologies. By quickly integrating applications
and information systems to support complex and dynamic business process management,
METEOR provides technical advantages that distinguish it from other systems. These
distinctions are outlined in Table 2 for the example applications discussed in Section 4.
Additionally, the capabilities are described fully in the following sections.
Automatic Code Generation:
Once the workflow designer has designed the workflow using the METEOR Builder Service,
very few additional steps are required to implement the workflow. This is of utmost importance to
the designer. If the workflow designer has spent enough time studying the application and
designing the workflow, then his job in implementing the workflow reduces to a bare minimum
of writing application-specific tasks for the application at hand. This frees the designer from
having to worry about the modes of communication or data passing between existing tasks.
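To make the division of labor concrete, here is a hypothetical sketch (in Java, METEOR's implementation language for ORBWork) of generated glue and hand-written logic side by side; the class and method names are illustrative, not METEOR's actual generated API:

```java
import java.util.Map;
import java.util.function.Consumer;

// Hypothetical sketch of the division of labor behind automatic code
// generation: the builder emits scheduling and data-passing glue (this
// class), and the designer supplies only application logic by implementing
// a small task interface.
public class GeneratedTaskManager {

    /** The only piece the workflow designer writes by hand. */
    public interface TaskLogic {
        Map<String, Object> run(Map<String, Object> inputs);
    }

    private final TaskLogic logic;
    private final Consumer<Map<String, Object>> successor; // generated wiring

    public GeneratedTaskManager(TaskLogic logic,
                                Consumer<Map<String, Object>> successor) {
        this.logic = logic;
        this.successor = successor;
    }

    // Generated glue: receive inputs, invoke the hand-written logic,
    // and forward the outputs to the next task in the workflow design.
    public void activate(Map<String, Object> inputs) {
        Map<String, Object> outputs = logic.run(inputs);
        successor.accept(outputs);
    }
}
```

The designer only fills in a `TaskLogic`; everything around it (activation, data passing, the hop to the successor) is emitted by the builder.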
For example, the top part of Figure 11 shows the graphical design of part of the Immunization
Tracking workflow. The bottom part shows the components automatically generated from the top
design, which include CORBA objects for Task Managers, Task Schedulers, and Data Objects, as
well as HTML pages used as user interfaces.
Figure 11: Code Generation for the Immunization Tracking Application
• A fully distributed system:
One of the important requirements in almost any modern industry is the capability to support
machines that are spread across the entire organization. Both of the enactment systems of
METEOR have a fully distributed architecture. This provides support for load balancing among
all the participating host machines, and also eliminates the existence of a single point of failure
within the system.
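The idea can be sketched as follows: each task carries its own small scheduler that knows only its immediate successors, so completing a task activates the next schedulers directly, with no central engine to fail. The classes are illustrative, not METEOR's (in ORBWork the hop between schedulers would be a CORBA invocation, possibly to another host):

```java
import java.util.ArrayList;
import java.util.List;

// Conceptual sketch of fully distributed scheduling: there is no central
// engine. Each task's scheduler knows only its immediate successors and
// activates them directly when the task completes.
public class TaskScheduler {
    private final String taskName;
    private final List<TaskScheduler> successors = new ArrayList<>();
    private final List<String> log; // shared trace, stands in for real work

    public TaskScheduler(String taskName, List<String> log) {
        this.taskName = taskName;
        this.log = log;
    }

    // Wire this task to a successor; returns the successor for chaining.
    public TaskScheduler then(TaskScheduler next) {
        successors.add(next);
        return next;
    }

    // Run this task, then hand control directly to each successor's own
    // scheduler (in ORBWork this hop would cross machine boundaries).
    public void activate() {
        log.add(taskName);
        for (TaskScheduler s : successors) s.activate();
    }
}
```

Because control passes peer to peer, any single scheduler (or its host) can fail without taking down a central coordinator, and work is naturally spread across the participating machines.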
• Use of standards and the latest distributed computing infrastructures for heterogeneous environments:
The METEOR system closely follows the standards set by bodies such as the WfMC and the
Object Management Group (OMG). For example, the workflow specification created using the
METEOR Builder service is XML-based and may easily be formatted to be compliant with the
Workflow Process Definition Language (WPDL) of the WfMC. METEOR also supports workflow
interoperability standards such as JFLOW [JFLOW] and SWAP [SWAP]. In addition, METEOR
utilizes open standards such as CORBA, due to its emergence as an infrastructure of choice for
developing distributed, object-based, interoperable software.
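As a sketch of what an XML-based specification buys (the element names below are hypothetical, not the WPDL or METEOR vocabulary), standard tooling can parse, validate, and transform a workflow description:

```java
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.List;

// Sketch of reading an XML-based workflow specification with the standard
// JDK XML tooling. The <workflow>/<task> vocabulary here is hypothetical.
public class WorkflowSpecReader {

    // Extract the names of all <task> elements from a specification string.
    public static List<String> taskNames(String xml) {
        try {
            Document doc = DocumentBuilderFactory.newInstance()
                    .newDocumentBuilder()
                    .parse(new ByteArrayInputStream(
                            xml.getBytes(StandardCharsets.UTF_8)));
            NodeList tasks = doc.getElementsByTagName("task");
            List<String> names = new ArrayList<>();
            for (int i = 0; i < tasks.getLength(); i++) {
                names.add(((Element) tasks.item(i)).getAttribute("name"));
            }
            return names;
        } catch (Exception e) {
            throw new IllegalStateException("invalid workflow specification", e);
        }
    }
}
```

Because the specification is plain XML, the same document can be rendered by the Builder, translated toward WPDL, or exchanged with another vendor's tools.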
• Integration capabilities with legacy and new applications, and multiple/heterogeneous remote databases, including mainframe data:
The support that METEOR provides for the integration of legacy applications is a major
distinction of the system. The CORBA-based enactment system (ORBWork) is implemented in Java,
which gives the system portability and network accessibility. It also uses Java Database
Connectivity (JDBC) to connect to a variety of databases, and with the availability of
Databroker [I-Kinetics], data from databases that reside on mainframes can easily be integrated
with the METEOR system.
• Security:
METEOR provides various levels of security from role-based access control and authentication to
multi-level security as is required in military institutions.
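Role-based access control of this kind can be sketched as a mapping from roles to permitted operations, checked on every request; the roles and operations below are illustrative, not the application's actual policy:

```java
import java.util.Map;
import java.util.Set;

// Minimal sketch of role-based access control: each role maps to the set
// of operations it may perform, and every request is checked against the
// requester's role before the corresponding task is allowed to run.
public class RoleBasedAccess {
    private final Map<String, Set<String>> permissions = Map.of(
            "AdmitClerk", Set.of("view-eligibility", "create-encounter"),
            "TriageNurse", Set.of("view-record", "record-vitals"),
            "Doctor", Set.of("view-record", "record-vitals",
                             "order-immunization"));

    // True if the given role is permitted to perform the operation.
    public boolean isAllowed(String role, String operation) {
        return permissions.getOrDefault(role, Set.of()).contains(operation);
    }
}
```

In the deployed system these checks sit alongside authentication and secure communication, since the worklists carry confidential patient data.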
6. Conclusion and Future Directions
Today’s healthcare applications require capabilities for enterprise integration and mission-critical
workflow process support. While such functionality is found lacking in many of the current
generation workflow systems, the METEOR system distinguishes itself by focusing on
supporting these high-end requirements.
The METEOR approach enables rapid design-to-development via automatic code generation; the
workflow model and enactment system support a variety of activities, both user and application
(automated) tasks, thereby making it feasible for use in real-world organizational processes. The
workflow engines support heterogeneous, distributed computing environments with a standards-based
computing infrastructure (Web and/or CORBA). This allows workflow processes and data to
be distributed within and across enterprises. Reliability is an inherent part of the WFMS
infrastructure. The WFMS includes support for error handling and recovery by exploiting
transaction-management technology. A well-defined hierarchical error model is used for
capturing and defining logical errors, and a recovery framework provides support for detection
and recovery of workflow system components in the event of failure.
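A hierarchical error model of the kind described can be sketched as an exception hierarchy in which handlers catch at the most specific level they understand and escalate the rest; the class names and recovery actions are illustrative, not METEOR's actual hierarchy:

```java
// Sketch of a hierarchical error model: errors are organized from
// task-level up to workflow-level, so a handler can deal with errors at
// the most specific level it understands and escalate the rest.
public class ErrorModelDemo {
    static class WorkflowError extends Exception {
        WorkflowError(String msg) { super(msg); }
    }
    static class TaskError extends WorkflowError {
        TaskError(String msg) { super(msg); }
    }
    static class TaskManagerError extends WorkflowError {
        TaskManagerError(String msg) { super(msg); }
    }

    // Handle task-level errors locally (e.g., retry); escalate the rest
    // to workflow-level recovery.
    public static String handle(WorkflowError e) {
        if (e instanceof TaskError) return "retry-task";
        return "escalate-to-workflow-recovery";
    }
}
```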
Currently, the METEOR project group is working to extend the system in a number of ways.
1. The development of an interactive Java-based Builder Service, which renders XML-based
workflow descriptions, allows users to modify workflow definitions at run time, and has the
ability to validate workflow specifications at design time.
2. Extend process support to include collaboration by integrating with technologies like videoconferencing.
3. Currently, METEOR provides task managers for task functionality at a high level of
granularity, e.g., a Transactional Task Manager for database transaction tasks, a
Human-Computer Task Manager for computer-assisted human tasks, etc. We are considering
implementing the Task Managers based on Enterprise JavaBeans technology. This will
enable support for a wide variety of function-specific tasks, e.g., a Task Manager for EDI
tasks.
4. Incorporate advanced exception-handling techniques like defeasible workflow and case-based reasoning [Luo et.al. 98].
Acknowledgements
The METEOR team consists of Kemafor Anyanwu, Ketan Bhukhanwala, Jorge Cardoso, Prof. Krys
Kochut (co-PI), Zhongqiao Li, Zonghwei Luo, Prof. John Miller (co-PI), Kshitij Shah, and Prof.
Amit Sheth (PI). Key past contributors include: Souvik Das, David Lin, Arun Murugan, Devanand
Palaniswami, Richard Wang, Devashish Worah, and Ke Zheng.
This research was partially done under a cooperative agreement between the National Institute of
Standards and Technology Advanced Technology Program (under the HIIT contract, number
70NANB5H1011) and the Healthcare Open System and Trials, Inc. consortium. See URL:
http://www.scra.org/hiit.html. Additional partial support and donations are provided by Iona,
Informix, Hewlett-Packard Labs, and Boeing.
Bibliography:
[Attie et.al. 93] P. Attie, M. Singh, A. Sheth, M. Rusinkiewicz, "Specifying and Enforcing
Intertask Dependencies", in Proc. of the 19th Intl. Conf. on Very Large Data Bases, August 1993.
[Bonner et.al. 96] A. Bonner, A. Shruf, S. Rozen, "Database Requirements for Workflow
Management in a High-Throughput Genome Laboratory”. In Proceedings of the NSF Workshop
on Workflow and Process Automation in Information Systems: State-of-the-Art and Future
Directions. May, 1996
[Chaiken 97] B.Chaiken,“Workflow in Healthcare”,
http://www.araxsys.com/araxsys/resources/workflow.pdf
[Cichocki et.al. 97] A. Cichocki and M. Rusinkiewicz, "Migrating Workflows", Advances in
Workflow Management Systems and Interoperability, Istanbul, Turkey, August 1997.
[Das et.al. 97] S. Das, K. Kochut, J. Miller, A. Sheth, and D. Worah, "ORBWork: A Reliable
Distributed CORBA-based Workflow Enactment System for METEOR2", Technical Report
UGA-CS-TR-97-001, LSDIS Lab, CS Department, Univ. of Georgia, February 1997.
[Dayal et.al. 91] U. Dayal, M. Hsu, and R. Ladin, "A Transactional Model for Long-running
Activities", In Proceedings of the Sixteenth International Conference on Very Large Databases,
pg. 113-122, August 1991.
[DeJesus 98] X. DeJesus, "Integrating PACS Power", Healthcare Informatics, Vol. 15, (20), pg.
97, November 1998.
[Ellis et.al. 95] C. Ellis, K. Keddara, and G. Rozenberg, "Dynamic Changes within Workflow
Systems", In Proc. of the Conf. on Organizational Computing Systems (COOCS'95), 1995.
[Georgakopoulos et.al. 95] D. Georgakopoulos, M. Hornick and A. Sheth, "An Overview of
Workflow Management: From Process Modeling to Infrastructure for Automation”, Journal on
Distributed and Parallel Database Systems, 3 (2), 1995
[Guimaraes et.al. 97] N. Guimaraes, P. Antunes, and A. Pereira, The Integration of Workflow
Systems and Collaboration Tools, Advances in Workflow Management Systems and
Interoperability, Istanbul, Turkey, August 1997.
[Han 97] Y. Han, "HOON - A Formalism Supporting Adaptive Workflows," Technical Report
#UGA-CS-TR-97-005, Department of Computer Science, University of Georgia, November
1997.
[Han et.al. 98] Y. Han and A. Sheth, "On Adaptive Workflow Modeling," the 4th International
Conference on Information Systems Analysis and Synthesis, Orlando, Florida, July, 1998
[Hermann 95] T. Hermann, Workflow Management Systems: Ensuring Organizational Flexibility
by Possibilities of Adaptation and Negotiation, In Proceedings of the Conference on
Organizational Computing Systems (COOCS'95), 1995.
[Jablonski et.al. 97] S. Jablonski, K. Stein, and M. Teschke, Experiences in Workflow
Management for Scientific Computing, In Proceedings of the Workshop on Workflow
Management in Scientific and Engineering Applications (at DEXA97), Toulouse, France,
September 1997.
[Kaldoudi et.al. 97] E. Kaldoudi, M. Zikos, E. Leisch, and S.C. Orphanoudakis, Agent-Based
Workflow Processing for Functional Integration and Process Re-engineering in the Health Care
Domain, in Proceedings of EuroPACS'97, Pisa, Italy, pp. 247-250, September 1997.
[Kochut et.al. 98] K. Kochut, A. Sheth, J. Miller, "ORBWork: a CORBA-Based Fully
distributed, Scalable and Dynamic Workflow Enactment Service for METEOR," Technical
Report UGA-CS-TR-98-006, Department of Computer Science, University of Georgia, 1998.
[Krishnakumar et.al. 95] N. Krishnakumar and A. Sheth, "Managing Heterogeneous Multi-system
Tasks to Support Enterprise-wide Operations," Distributed and Parallel Databases Journal,
3 (2), April 1995.
[Infocosm] Infocosm home page, http://infocosm.com
[Intrabase] Intrabase home page, http://www.intrabase.com/
[JFLOW] OMG jFlow Submission, ftp://ftp.omg.org/pub/docs/bom/98-06-07.pdf
[Lin 97] C. Lin, “A Portable Graphic Workflow Designer,” M.S. Thesis, Department of
Computer Science, University of Georgia, May 1997.
[Luo et.al. 98] Z. Luo, A. Sheth, J. Miller, K. Kochut, “Defeasible Workflow, Its Computation
and Exception Handling”, CSCW 98, Towards Adaptive Workflow Systems Workshop, Seattle,
WA. 1998.
[Marietti 98] C. Marietti, “Love V Love? Hardly, as ActiveX and CORBA volley for
Leadership.” Healthcare Informatics, Vol. 15, (20), pg. 56. November 1998
[McClatchey et.al. 97] R. McClatchey, J. M. Le Geoff, N. Baker, W. Harris, and Z. Kovacs, A
Distributed Workflow and Product Data Management Application for the Construction of Large
Scale Scientific Apparatus, Advances in Workflow Management Systems and Interoperability,
Istanbul, Turkey, August 1997.
[MEDITECH] MEDITECH home page, http://www.meditech.com
[METEOR] METEOR project home page, http://lsdis.cs.uga.edu/proj/METEOR/METEOR.html
[Miller et.al. 96] J. Miller, A. Sheth, K. Kochut, and X. Wang, "CORBA-based Run Time
Architectures for Workflow Management Systems", Journal of Database Management,
Special Issue on Multidatabases, 7(1), 1996.
[Miller et.al. 98] J. Miller, D. Palaniswami, A. Sheth, K. Kochut, H. Singh, "WebWork:
METEOR’s Web-based Workflow Management System”, Journal of Intelligent Information
Systems, (JIIS) Vol. 10 (2), March 1998.
[Reichert et.al. 98] M. Reichert and P. Dadam, “ADEPT flex: Supporting Dynamic Changes of
Workflows Without Losing Control”, Journal of Intelligent Information Systems, 10 (2), March
1998.
[Sheth 97] A. Sheth, “From Contemporary Workflow Process Automation to Adaptive and
Dynamic Work Activity Coordination and Collaboration,” Proceedings of the Workshop on
Workflows in Scientific and Engineering Applications, Toulouse, France, September 1997.
[Sheth et.al. 96] A. Sheth, D. Georgakopoulos, S. Joosten, M. Rusinkiewicz, W. Scacchi, J.
Wileden, and A. Wolf, Eds. Report from the NSF Workshop on Workflow and Process
Automation in Information Systems, Technical Report UGA-CS-TR-96-003, Dept. of Computer
Sc., University of Georgia, October 1996. http://lsdis.cs.uga.edu/lib/lib.html
[Sheth et.al. 97] A. Sheth, D. Worah, K. Kochut, J. Miller, K. Zheng, D. Palaniswami, S. Das,
"The METEOR Workflow Management System and its use in Prototyping Healthcare
Applications", In Proceedings of the Towards An Electronic Patient Record (TEPR'97)
Conference, April 1997, Nashville, TN.
[Sheth and Kochut 98] A. Sheth and K. Kochut, "Workflow Applications to Research Agenda:
Scalable and Dynamic Work Coordination and Collaboration Systems," in A. Dogac, L.
Kalinichenko, T. Ozsu and A. Sheth, Eds., Workflow Management Systems and Interoperability,
NATO ASI Series F, Vol. 164, Springer Verlag, 1998.
[SQL*Flow] 170 Systems home page, http://www.170systems.com/sqlflow.htm
[SWAP] Simple Workflow Access Protocol home page, http://www.ics.uci.edu/~ietfswap/index.html
[Taylor 97] R. Taylor, "Endeavors: Useful Workflow Meets Powerful Process", Information and
Computer Science Research Symposium, University of California at Irvine, February 1997. URL:
http://www.ics.uci.edu/endeavors/
[WfMC] Workflow Management Coalition Standards, http://www.aiim.org/wfmc/mainframe.htm
[Worah et.al. 97] D. Worah, A. Sheth, K. Kochut, J. Miller, "An Error Handling Framework for
the ORBWork Workflow Enactment Service of METEOR," Technical Report, LSDIS Lab. Dept.
of Computer Science, Univ. of Georgia, June 1997.
[Yong 98] J. Yong, "The Repository System of the METEOR Workflow Management System",
Master's Thesis, Department of Computer Science, University of Georgia, March 1998.
[Zheng 97] K. Zheng, "Designing Workflow Processes in METEOR2 Workflow Management
System", M.S. Thesis, LSDIS Lab, Computer Science Department, University of Georgia, June
1997.