Yu-Kai Lin, Mingfeng Lin, Hsinchun Chen
University of Arizona
yklin@email.arizona.edu, mingfeng@eller.arizona.edu, hchen@eller.arizona.edu
Electronic health record (EHR) systems hold great promise in transforming healthcare. The existing empirical literature has typically focused on EHR adoption and has found mixed evidence on whether EHR improves care. The federal initiative for meaningful use (MU) of EHR aims to maximize the potential of quality improvement, yet there has been little empirical research on the impact of the initiative and, more broadly, on the relation between MU and quality of care. Leveraging features of the Medicare EHR Incentive Program for exogenous variation, we examine the impact of MU on healthcare quality. We find evidence that MU significantly improves quality of care. More importantly, this effect is greater in historically disadvantaged hospitals such as small, non-teaching, or rural hospitals. These findings contribute not only to the literature on health IT, but also to the broader literature on IT adoption and the business value of IT.
Keywords: Meaningful use, MU, electronic health records, EHR, quality of care

This Version: November 24, 2014
1 Introduction
Researchers, observers, patients and other stakeholders have long deplored the woeful conditions of the
U.S. healthcare system (Bentley et al. 2008; Bodenheimer 2005; IOM 2001). While the nation’s healthcare expenditure accounts for almost 18% of the gross domestic product (GDP), about 34% of national health expenditures (or $910 billion) are considered wasteful (Berwick and Hackbarth 2012).
Meanwhile, preventable medical errors cause nearly 100,000 deaths and cost $17.1 billion annually (IOM
2001; Bos et al. 2011). To address these issues, researchers and policy-makers have been advocating the adoption and use of Health Information Technology (HIT), with the hope that it will help transform and modernize healthcare (Agarwal et al. 2010).
An important element of the HIT initiatives is the implementation of Electronic Health Records (EHR). A full-fledged EHR system contains not only patient data, but also several interconnected applications that facilitate daily clinical practice, including patient record management, clinical decision support, order entry, safety alerts, and health information exchange, among others. An EHR with a clinical decision support system (CDSS) can implement screening, diagnostic, and treatment recommendations from clinical guidelines so as to enable evidence-based medicine (Eddy 2005). Similarly, the computerized physician order entry (CPOE) functionality in EHR can detect and reduce safety issues such as over-dosing, medication allergies, and adverse drug interactions (Ransbotham and Overby 2010).
However, until recently most U.S. hospitals and office-based practices had been slow in adopting EHR systems. Jha and colleagues (2009) reported that in a national survey, fewer than 10 percent of hospitals had an EHR system in 2009. Similarly, DesRoches et al. (2008) found a 17 percent adoption rate of EHR in office-based practices in early 2008. More importantly, studies have found mixed evidence on whether the adoption of EHR improves quality of care (Black et al. 2011; Himmelstein et al. 2010), which further casts doubt on the benefits of adopting EHR.
One potential explanation for the mixed effects of EHR adoption is that hospitals may not actually be taking advantage of EHR, even when the system has been installed (Devaraj and Kohli 2003).
Through the Health Information Technology for Economic and Clinical Health (HITECH) Act, the federal government has been taking steps to promote meaningful use (MU) of EHR to maximize the potential of quality improvement (Blumenthal 2010). The HITECH Act committed $29 billion over 10 years to incentivize hospitals and clinical professionals to achieve the MU objectives of EHR (Blumenthal 2011). Under this law, the Centers for Medicare & Medicaid Services (CMS) has been the executive agency for the incentive programs since 2011. Through these programs, eligible hospitals and professionals can receive incentive payments from Medicare, Medicaid, or both if they successfully demonstrate MU. In addition, hospitals and professionals that fail to meet the MU objectives by 2015 will be financially penalized; that is, they will not receive the full Medicare reimbursement from CMS. The programs designate multiple stages of MU, where each stage incrementally expands the scope of MU objectives and measures.[1]
With the implementation of these incentive programs, recent surveys show significant growth in EHR adoption and MU (Adler-Milstein et al. 2014). The ultimate goal of this national campaign, however, is to improve the quality of care for patients (Blumenthal and Tavenner 2010; Classen and Bates 2011). So far, there have been no empirical studies on the quality effect of MU. This study seeks to fill this gap in the literature by examining the relation between the MU of EHR technology and changes in hospital care quality.
One major challenge in identifying the effect of EHR or MU on quality of care is endogeneity. The decision to adopt an EHR system is often correlated with hospital characteristics, some of which are not observable to researchers. For example, small, non-teaching, and rural hospitals are slow in adopting EHR (DesRoches et al. 2012), and they are also likely to perform worse than their counterparts. Conversely, many better-performing healthcare institutions are also pioneers in EHR adoption. Examples include Mayo Clinic, which introduced EHR as early as the 1970s, and Kaiser Permanente, which had invested about $4 billion in its EHR system before the EHR Incentive Programs (Snyder 2013). Such endogeneity in observational data could lead to erroneous inferences about the effect of adoption or MU on quality of care.

[1] http://www.cms.gov/Regulations-and-Guidance/Legislation/EHRIncentivePrograms/Stage_2.html
We address this empirical challenge by exploiting unique features of the Medicare EHR Incentive Program. Specifically, under the guidelines of this program, hospitals that demonstrate and maintain MU are able to receive annual incentive payments for up to four years, starting from the year they attest to meeting the MU criteria.[2] We strategically identify treatment and control hospitals to study whether meaningful use of EHR technology improves quality of care. Through our identification strategy and multiple empirical specifications and robustness tests, we find evidence that MU significantly improves quality.
2 Literature Review

In this section, we review the existing literature that directly informs our analyses. We start with a review and synthesis of existing studies that focus on the effects of Electronic Health Records, and highlight the gap that our study seeks to fill. Since our focus is the meaningful use of EHR, in the second subsection we discuss a related, albeit smaller, literature on MU. We also briefly review some representative work from the broader literature on IT adoption and value that informs our study.
2.1 Literature on the Effects of EHR
Given the potential of EHR to change the routines of healthcare delivery, reduce costs, and minimize errors, there has been a large and growing literature on this topic. Most directly related to our study are the empirical studies. In this section, we systematically review published empirical studies on the effect of EHR and compare them with our study. We focus on studies published in and after 2010 to avoid significant overlap with prior review papers (Black et al. 2011; Chaudhry et al. 2006). For each study, we summarize the main data sources, data period, data units, main dependent and independent variables, identification strategy, type of analysis, and main findings. The results of our literature search and analysis are shown in Table 1. To facilitate comparison, we list this study in the last row of the table. As can be seen, our study is one of the first to examine the effect of MU on healthcare quality by using a more objective set of measurements for MU, a unique and recent dataset, and empirical identification methods that leverage features of the Medicare EHR Incentive Program. We discuss the findings from our literature search and categorization in the remainder of this subsection.

[2] The attestation procedure involves filling out a formal form on the CMS EHR Incentive Programs Registration and Attestation website. Hospitals need to report the vendor and product model of their EHR system and enter their measures for each of the required MU criteria. For more information about the registration and attestation procedure, see http://www.cms.gov/Regulations-and-Guidance/Legislation/EHRIncentivePrograms/downloads/HospAttestationUserGuide.pdf
Main data sources; dependent and independent variables. It is apparent in Table 1 that the Healthcare Information and Management Systems Society's Analytics Database (HADB) is the predominant data source for research on HIT or EHR. HADB contains information about the adoption of hundreds of HIT applications, including EHR, CPOE, CDSS, etc., in over four thousand U.S. hospitals. HADB-based studies typically define and identify a set of HIT applications that are pertinent to the research goals. Most of the main independent variables in Table 1 are derived from HADB. Many studies further distinguish stages or capabilities of HIT or EHR implementation based on the adoption records in HADB. For instance, in studying the effect of EHR adoption on quality of care, Jones et al. (2010) determine EHR capability using four HIT applications, i.e., clinical data repository, electronic patient record, CDSS, and CPOE. A hospital is said to adopt an "advanced EHR" if it adopts all four applications, a "basic EHR" if at least one, and "no EHR" if none. Dranove et al. (2012) also distinguish basic and advanced EHR systems, but use a different set of applications as the criteria. Beyond the distinction between basic and advanced EHR, a number of studies try to mimic the HITECH MU criteria by mapping them to similar applications in HADB (Appari et al. 2013; Hah and Bharadwaj 2012). We address the issues with such mapping in Section 3.1, where we discuss our construction of the MU variable. In addition to the fields provided by HADB, it is often necessary to include other data sources to identify hospital characteristics and performance. These include the American Hospital Association (AHA) Annual Survey Database, CMS Cost Reports (CMS-CR), and the CMS Hospital Compare (CMS-HC) database.
For dependent variables, the two most popular ones in past studies are hospital operational cost and process quality. The former is supplied by the CMS-CR data, while the latter is available from the CMS-HC database. While process quality of care and MU have appeared in a number of prior studies, the variables were typically constructed using data from CMS-HC and HADB, respectively. We instead use two new data sources to construct the main dependent and independent variables. For our main dependent variable, process quality of care, we obtain healthcare performance measures from the Joint Commission (JC). For our primary independent variable, MU, we use data from the Medicare EHR Incentive Program. Following prior studies, we also use a number of other data sources to construct control variables. Section 3 provides an in-depth discussion of the datasets and variables in our study.
Data period. It is noteworthy from Table 1 that all the studies, even the most recent ones, are based on data from 2010 or earlier. According to Jha et al. (2009), this is a period in which both the rate and the degree of EHR use were low in U.S. hospitals: a comprehensive EHR system was used in only 1.5% of U.S. hospitals, and just an additional 7.6% had a basic system. Since the U.S. healthcare system has undergone dramatic policy changes since 2009, there is a significant practical and scientific need for new data and new empirical analyses. To understand the impact of the HITECH Act and the latest progress of MU among U.S. hospitals, we use data from around 2012 for our analyses.
Identification strategy and analysis. As discussed earlier, an important empirical challenge in assessing the impact of EHR is endogeneity; we therefore review how the prior literature addresses this concern. Column 7 of Table 1 summarizes the research designs used in each paper to address endogeneity concerns. We can see that these studies employ various econometric strategies such as fixed effects (Appari et al. 2013; Dranove et al. 2012; Miller and Tucker 2011; Furukawa et al. 2010; McCullough et al. 2010), difference-in-differences (McCullough et al. 2013; Jones et al. 2010), instrumental variables (Furukawa et al. 2010; Miller and Tucker 2011), and propensity adjustments (Dey et al. 2013; Appari et al. 2012; Jones et al. 2010). In addition, the majority of studies in Table 1 employed panel data analysis, because cross-sectional datasets "will not capture the impact of IT adoption if early adopters differ from other hospitals along other quality-enhancing dimensions" (Miller and Tucker 2011, p. 292).
Our analyses are based on a panel dataset. We exploit features of the Medicare EHR Incentive Programs as a source of exogenous variation, and adopt a number of empirical strategies (difference-in-differences and first-difference) to verify that our findings are robust. We also use propensity score matching (Rosenbaum and Rubin 1983) to alleviate potential bias from treatment selection, i.e., whether or not a hospital demonstrates MU in 2012. Full details of our identification strategy are discussed in Section 4.
Main findings. The last column of Table 1 makes clear the inconclusiveness of prior findings on the effect of EHR adoption: 6 positive, 4 negative, and 4 mixed results. The 4 mixed results arise either because EHR had an effect on only a subset of measures or because the effects were significant only under certain conditions. For instance, McCullough et al. (2010) find that the use of EHR and CPOE significantly improved the use of vaccination and appropriate antibiotics in pneumonia patients, but for the same population EHR had no effect on increasing smoking cessation advice or on taking blood cultures before antibiotics. Similarly, Furukawa (2011) finds that advanced EHR systems significantly improved the throughput of emergency departments, but basic EHR systems did not.
Our results from various estimators consistently show that attaining MU significantly improves quality of care. Moreover, we also find that the magnitude of this effect varies by several hospital characteristics, such as hospital size, hospital ownership, geographical region, and the urban status of the hospital location. We find that hospitals traditionally deemed to have weaker quality, e.g., small, non-teaching, or rural hospitals, attained larger quality improvements than their counterparts.
2.2 Literature on Meaningful Use
Compared to adoption, meaningful use of EHR is a much more challenging goal for hospitals and healthcare providers (Classen and Bates 2011). Some prior studies in Table 1 have examined "MU," but use less formal measurements to identify it. For example, Appari et al. (2013) examine how MU impacts process quality by mapping the HITECH MU criteria to HADB. The authors define five levels of EHR capabilities, in which the top two levels satisfy the functionality requirements in the MU criteria. The authors note that (Appari et al. 2013, p. 358):
“While complete satisfaction of 2011 MU objectives requires fulfilling clinical and administrative activities using EHR systems, here we measure only whether a hospital system has the functional capabilities to meet the objectives as we have no data on whether they actually accomplished the activities.”
Several other studies also use MU functionalities to define MU (e.g., McCullough et al. 2013; Hah and Bharadwaj 2012). A recent systematic review by Jones et al. (2014) focuses on the effects of MU on three outcomes: quality, safety, and efficiency. The review includes a total of 236 studies published from January 2010 to August 2013 and uses MU functionalities as a taxonomy to characterize the literature. Jones et al. (2014) conclude that most of the studies focused on evaluating CDSS and CPOE, and rarely addressed the other MU functionalities. By contrast, as we will discuss in Section 3.1, our paper is one of the first to use a systematic and government-mandated public health program to identify MU.
Finally and more broadly, our study also draws on and contributes to the long-standing literature on the consequences of technology adoption and IT business value, of which healthcare IT is but one example (Davis et al. 1989; Ajzen 1991; Attewell 1992; Brynjolfsson and Hitt 1996; Wejnert 2002;
Venkatesh et al. 2003; Tambe and Hitt 2012). Whereas a dominant variable of interest in this literature is the adoption of technologies, we focus on the meaningful use of a technology and investigate how it affects an important outcome.
3 Data

We integrate data from multiple sources. Consistent with prior research (Appari et al. 2013), we use the
Medicare provider number as a common identifier to link all hospital-level information. Table 2
summarizes the variables and their data sources, which we discuss in turn. Also consistent with prior studies in the related literature (see Section 2; e.g., Appari et al. 2013; McCullough et al. 2013; Furukawa et al. 2010; Jones et al. 2012), we investigate non-federal acute care hospitals in the 50 states and the District of Columbia.
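To make the linkage concrete, the following R sketch merges two hypothetical hospital-level extracts on the Medicare provider number; the file layout and column names are illustrative assumptions, not the actual formats of the source databases.

```r
# Hypothetical extracts; in practice each source is keyed by the hospital's
# Medicare provider number. Column and file names are illustrative only.
quality <- data.frame(provider_id = c("010001", "010005"),
                      quality_2011q4 = c(96.2, 94.8))
mu <- data.frame(provider_id = c("010001", "010033"),
                 attested_2012 = c(TRUE, TRUE))

# Left join keeps every hospital with quality data; hospitals absent from the
# MU attestation records are coded as not having attested.
panel <- merge(quality, mu, by = "provider_id", all.x = TRUE)
panel$attested_2012[is.na(panel$attested_2012)] <- FALSE
panel
```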
3.1 Meaningful Use
An important difference between our study and those reviewed in Table 1 is how we construct our main independent variable, MU, and its data period. We use data directly from the Medicare EHR Incentive Program. This dataset is new and unique and, to the best of our knowledge, has not been used in any prior empirical study. The latest release, from May 2014, covers the MU attestation records of U.S. hospitals as of early 2014 and reflects the most recent development of EHR adoption and meaningful use in the United States.
The CMS EHR Incentive Programs website provides data about the programs and the recipients of the incentives.[3] The recipient data reveal in which year a hospital demonstrated that it had met the MU criteria. We look only at the records from the Medicare EHR Incentive Program, not the Medicaid program, for two reasons. The first reason is data availability. As of the time we conducted this study, hospital-level information from the Medicaid EHR Incentive Program had not been released, presumably because the Medicaid program is run locally by each state agency, which creates difficulties in aggregating detailed information from multiple sources. In contrast, the Medicare Incentive Program is run solely by CMS, so its information is centralized and more accessible. The second reason we use only the Medicare data is its representativeness. The latest statistics from CMS show that 96.7 percent of hospitals that successfully attested MU before January 2014 received incentive payments from Medicare, and 94.1 percent of these hospitals also received payments from Medicaid. Since most hospitals register and obtain incentive payments from both programs, the hospitals in the Medicare program should mostly overlap with those in the Medicaid program. We therefore focus on the data from the Medicare program to study the quality effect of MU.[4]

[3] http://www.cms.gov/EHRIncentivePrograms/
Using data directly from the Medicare EHR Incentive Program allows us to mitigate three important shortcomings of prior work that relies on HADB or the AHA Healthcare IT Database. First, instead of mapping the MU criteria to the records of HIT applications in HADB, our approach is much more direct and objective: there is no information loss or measurement error from indirectly representing MU using secondary data sources. Second, existing HIT survey databases rely on self-reported data, and to our knowledge there is no auditing process to verify the correctness of the data. In contrast, the Medicare EHR Incentive Program conducts pre- and post-payment audits to ensure the accuracy of the reported attainment of MU objectives, and hence the integrity of the data. Third and finally, the MU criteria comprise not only what EHR functionalities a hospital possesses but also how they are used. The simple presence of HIT in a hospital does not directly imply meaningful use of the technology (McCullough et al. 2013). As such, instead of merely requiring an EHR functionality to record patients' problem lists, the guidelines of the Medicare EHR Incentive Program require the following criterion, among others, to be met before the hospital can be considered a meaningful user:

"More than 80% of all unique patients admitted to the eligible hospital or critical access hospital have at least one entry or an indication that no problems are known for the patient recorded as structured data." (Emphasis added)[5]
Demonstration of actual use of EHR and HIT capabilities is critical in understanding and explaining the impact of HIT or EHR (Devaraj and Kohli 2003; Kane and Alavi 2008). However, proof or demonstration of use is typically not recorded in existing HIT survey databases, which imposes a critical research limitation on HIT evaluation. Since system usage is an integral part of determining whether MU is met in the Medicare EHR Incentive Program, our MU variable naturally captures this missing dimension. These three important shortcomings were often neglected in prior studies, and can potentially explain the mixed findings on the effect of EHR systems in the literature (Agarwal et al. 2010; Kohli and Devaraj 2003). The newly released data from the Medicare EHR Incentive Program allow us to circumvent these issues and provide new empirical evidence on the effect of MU on quality of care.

[4] Although the Medicare patient population is older than the general patient population, this has no effect on our study, because we examine Medicare-certified providers rather than Medicare patients. Given that Medicare is the largest payer in the US, almost all hospitals, especially the acute care hospitals that we study, accept Medicare patients.

[5] http://cms.gov/Regulations-and-Guidance/Legislation/EHRIncentivePrograms/Downloads/MU_Stage1_ReqOverview.pdf
3.2 Quality of Care
There are several sources which provide hospital quality data, including CMS-HC, JC, National
Ambulatory Medical Care Survey (NAMCS), and National Hospital Ambulatory Medical Care Survey
(NHAMCS). The first two put more emphasis on inpatient settings whereas the last two, outpatient (Ma and Stafford 2005). Nonetheless, all these data sources emphasize evidence-based care process (Chassin et al. 2010) when deriving quality measures for hospitals. In other words, quality of care is only considered high if a hospital follows the processes and interventions that will lead to improved outcomes, as suggested by clinical evidence. It is noteworthy that the relationships among different quality metrics
(e.g. process quality, patient satisfaction, 30-day readmission rate, and in-hospital mortality) are weak or inconsistent (Shwartz et al. 2011; Jha et al. 2007). For instance, in a prospective cohort study with a nationally representative sample (N=51,946; panel data), Fenton et al. (2012) find that higher patient satisfaction was surprisingly associated with greater total expenditures and higher mortality rate (both significant at the 0.05 level). Although the process quality does not necessarily reduce 30-day mortality or readmission, it has been a primary quality metric used in prior studies (see Table 1) because it is actionable, targets long-term benefits, and requires less risk-adjustment (Rubin et al. 2001). By the same token, Chatterjee and Joynt (2014) argue that:
"Although process measures remain minimally correlated with outcomes and may represent clinical concepts that are somewhat inaccessible to patients, they do have independent value as a marker of a hospital's ability to provide widely accepted, guideline-based clinical care."
We obtain hospital quality measures from the Joint Commission, formerly known as the Joint Commission on Accreditation of Healthcare Organizations. The Joint Commission is a not-for-profit organization that aims to promote care quality and safety. It is critical for a hospital to be accredited by the Joint Commission in order to obtain a service license and to qualify as a Medicare-certified provider (Brennan 1998). The Joint Commission has long been developing metrics for quality measurement and improvement.[6] There are currently 10 core measure sets, categorized by conditions such as heart attack, heart failure, pneumonia, and surgical care infection prevention. In each core measure set, there are a number of measures specific to the corresponding medical condition. Examples of the quality measures are as follows:
• Percentage of acute myocardial infarction patients with beta-blocker prescribed at discharge
• Percentage of heart failure patients with discharge instructions
• Percentage of adult smoking cessation advice/counseling
These metrics are largely aligned with the process quality measures in the CMS-HC dataset that are more commonly used in prior research (see Table 1). We find that the Joint Commission quality measures are more comprehensive than the process measures in CMS-HC, since many quality measures are tracked by the former but not by the latter. In addition to their comprehensiveness, quality measures from the Joint Commission are updated quarterly, whereas those from CMS-HC are updated only annually. We therefore use the quality measures from the Joint Commission.[7]
Since the JC quality metric has multiple core measure sets, and each core measure set can contain multiple specific measures, we derive a composite quality score to represent the overall process quality of a hospital. The interpretation of the composite score is intuitive, useful, and quantitative: to what degree (in percentage) does a hospital follow guideline recommendations. Consistent with prior studies (Appari et al. 2013; Chen et al. 2010), the composite quality score is derived as an average of all specific measures, weighted by the number of eligible patients in each measurement. In constructing the quality score, we exclude measures that have fewer than five eligible patients in a hospital, in order to ensure data reliability.[8]

[6] http://www.jointcommission.org/performance_measurement.aspx

[7] In fact, our identification strategy would not have been possible without quarterly quality data. As of the time of writing, the latest process-of-care quality metric and patient experience metric in the CMS-HC dataset are available for April 2012 to March 2013. Similarly, the outcome-of-care quality metrics in CMS-HC, i.e., 30-day readmission/mortality rates, are available from July 2010 to June 2012. As such, none of these quality metrics is recent enough to permit a clean empirical identification. See Section 4.2 for the identification strategy in this study.
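As a concrete illustration of this construction, the R sketch below computes the composite score for one hypothetical hospital-quarter; the column names are placeholders rather than the actual Joint Commission file layout.

```r
# Hypothetical measure-level data for one hospital-quarter; column names are
# placeholders, not the actual Joint Commission file layout.
measures <- data.frame(
  rate       = c(98.2, 95.0, 100.0, 88.5),  # % compliance on each specific measure
  eligible_n = c(120, 45, 3, 60)            # eligible patients per measure
)

# Drop measures with fewer than five eligible patients (the reliability cutoff),
# then average the remaining rates, weighted by the number of eligible patients.
kept  <- subset(measures, eligible_n >= 5)
score <- weighted.mean(kept$rate, w = kept$eligible_n)
score  # composite process-quality score, in percent
```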
3.3 Control Variables
Prior studies on the effect of EHR typically include a set of control variables to capture the heterogeneity among hospitals (Angst et al. 2010; Appari et al. 2013; Devaraj and Kohli 2003). The HADB contains information about the year each hospital was formed, which allows us to calculate hospital age as of 2012. For hospitals whose age information is missing, we manually performed search engine queries and successfully identified about 50 of them by looking up the "About Us" or similar pages on hospital websites.

A second important control is hospital size. The size of a hospital has been shown to be positively correlated with EHR adoption as well as quality of care (DesRoches et al. 2012). We operationalize hospital size using the total number of beds in the CMS-CR data. We also use the CMS-CR data to capture various hospital throughput measures: annual (Medicare) discharges and inpatient days (Miller and Tucker 2011). To allow more intuitive interpretation of the effects of these throughput measures, we rescale the original values to thousands before entering them in our models.
To differentiate the general health condition of the served patient population, we use the transfer-adjusted case mix index (TACMI) from the CMS Inpatient Prospective Payment System (IPPS). The CMS-HC dataset provides information regarding the ownership of a hospital, which can be broadly categorized as government, non-profit, or proprietary.

We also control for the teaching status of hospitals. Specifically, teaching status is a dichotomous variable determined by whether the hospital is a member of the Association of American Medical Colleges. We further identify whether or not a hospital is located in a rural area by mapping its zip code to the Rural-Urban Commuting Area (RUCA) version 2.0 taxonomy (Angst et al. 2012; Hall et al. 2006). Finally, we control for the region of a hospital by matching its zip code to one of the four census regions: Midwest, Northeast, South, or West.

[8] Our results are qualitatively similar without imposing this cutoff threshold.
4 Research Design

As discussed earlier, a key challenge in identifying the quality impact of EHR adoption or MU is endogeneity. This section describes how we address endogeneity concerns to approximate a randomized experiment with observational data, in order to learn about causal relationships (Angrist and Krueger 1999). We begin with a brief overview of the Medicare EHR Incentive Program, followed by our identification strategy, which is motivated by unique features of this program.
4.1 Medicare EHR Incentive Program for the Eligible Hospitals
Under the auspices of the HITECH legislation, the goal of the CMS EHR Incentive Programs is to promote meaningful use of EHR through financial incentives. To receive the incentive payment, a hospital must achieve 14 core objectives as well as 5 out of 10 menu objectives (Table 3), each accompanied by a very specific measure (Blumenthal and Tavenner 2010). Since the inception of the programs in 2011, thousands of hospitals have achieved the MU objectives. Full details of the programs are available through the program website (see footnote 3); here, we highlight the incentive features that help identify the effect of meaningful use on quality of care.
Hospitals must demonstrate MU by 2015 at the latest, or they will be financially penalized. If they demonstrate MU before 2015, they will receive annual incentive payments from the Medicare EHR Incentive Program. The amount of these annual payments is determined by multiplying three factors: the initial amount, the Medicare share, and the transfer factor. The initial amount is at least $2 million, and more if the hospital discharges a specified number of patients. The Medicare share is, roughly, the fraction of Medicare inpatient-bed-days in total inpatient-bed-days in the hospital's fiscal year. Most important, the transfer factor varies by payment year and by the time that the hospital demonstrates MU (Table 4). A hospital can receive these annual incentive payments for up to four years, with the amount decreasing each year through the transfer factor. If a hospital demonstrates MU in 2013 or earlier, it can receive 4 years of payments, with the transfer factor decreasing each year from 1 to 0.75, 0.5, and 0.25. If a hospital demonstrates MU in 2014, it can only receive 3 years of payments, with the transfer factor decreasing each year from 0.75 to 0.5 and 0.25. Starting in 2015, hospitals that are not meaningfully using EHR technology will be penalized by a mandated Medicare payment adjustment, in which they will not receive the full amount of Medicare reimbursements. The degree of payment adjustment will double in 2016 and triple in 2017. The transfer factor and the cumulative penalties therefore incentivize hospitals that have not met the MU criteria to adopt and meaningfully use EHR sooner rather than later.[9]
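To make the payment arithmetic concrete, the following R sketch encodes the transfer-factor schedule described above (see Table 4); the initial amount and Medicare share are simplified placeholder inputs, since the actual rules add a per-discharge component and a hospital-specific share.

```r
# Transfer factors by payment year, given the first year a hospital
# demonstrates MU, as described in the text (see Table 4).
transfer_factors <- function(first_mu_year) {
  if (first_mu_year <= 2013) c(1, 0.75, 0.5, 0.25)    # four payment years
  else if (first_mu_year == 2014) c(0.75, 0.5, 0.25)  # only three payment years
  else numeric(0)                                     # 2015 on: penalties instead
}

# Annual payment = initial amount x Medicare share x transfer factor.
# The initial amount and Medicare share below are simplified placeholders; the
# actual rules add a per-discharge amount above a discharge threshold.
annual_payments <- function(first_mu_year,
                            initial_amount = 2e6, medicare_share = 0.4) {
  initial_amount * medicare_share * transfer_factors(first_mu_year)
}

annual_payments(2012)  # 800000 600000 400000 200000
```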
4.2 Identification Strategy
Although the financial incentive is the same for an eligible hospital entering the program and attesting MU at any point from 2011 through 2013 (i.e., 4 years of payments), we assume that hospitals would proceed to attest achieving MU and obtain the incentive payments as soon as they have met the criteria. This assumption is consistent with a basic premise in the accounting and finance literature: income or earnings in the present is generally preferable to earnings in the future (Feltham and Ohlson 1999; Ohlson 1995), especially given the low cost of the attestation procedure. This is also a realistic assumption given the financial burden to hospitals of acquiring and implementing an EHR system and the financial subsidies available for achievement of MU. Therefore, for the hospitals that began to attest MU in 2012, we assume that they did not meet the MU criteria in 2011 or earlier, but did achieve MU in 2012 and later.

[9] Payments from the Medicare EHR Incentive Program represent a nontrivial amount of incoming cash flow for hospitals. In our data, over the first three years of the program the median annual payment to hospitals is $1.4 million (with the highest being $7.2 million). To put this number in context: one source (http://www.beckershospitalreview.com/finance/13-statistics-on-hospital-profit-and-revenue-in-2011.html) estimates that the average profit per hospital in 2011 was around $10.7 million. Alternatively, data provided by HADB show that in 2011, the median difference between revenue and operating cost was slightly below $1 million. In either case, the payment incentive from the Medicare EHR Incentive Program is substantial.
For the purposes of identification and estimation, we consider MU as a dichotomous status indicating whether or not a hospital reaches the MU regulation criteria. To identify the quality effect of MU, we obtain longitudinal MU attainment records from the Medicare EHR Incentive Program, which provides data from 2011 to early 2014 (Section 3.1). We construct a panel dataset from this and other data sources, and employ a difference-in-differences (DID) identification strategy to tease out the quality effect of attaining MU. We define our treatment group as the hospitals that attained MU in 2012, but not before. A key and novel component of our identification strategy is that we consider two control groups: hospitals that attained MU in 2011 at the onset of the EHR Incentive Program (henceforth denoted AlwaysMU) and hospitals that had not yet achieved MU by the end of 2012 (henceforth denoted NeverMU). The AlwaysMU control group comprises hospitals that had reached MU prior to the implementation of the incentive program; the incentive program therefore had little or no impact on their MU status. Using these hospitals as a comparison group allows us to estimate the effect of MU on quality of care for hospitals that sped up their process of reaching MU status due to the incentive program.[10] The NeverMU control group, on the other hand, includes hospitals that had not yet reached MU status as of the end of 2012. Since these hospitals are likely to be in the process of speeding up their progress toward MU, using them as an alternative control group provides a more conservative (less optimistic) estimate of the effect on quality of care. In other words, although hospitals' decisions on expending resources to reach MU status may be endogenous, these two distinct but complementary control groups allow us to obtain robust upper and lower bounds for the unbiased MU effect: for the unobservable or omitted variables that may confound our MU estimate, their mean effect is likely to be monotonic with the timing of EHR adoption and MU. Using two control groups therefore provides the upper and lower bounds of the estimate.[11]

[10] One may argue that hospitals in the AlwaysMU group may also have responded to the legislation and sped up their attainment of MU. While plausible, this is unlikely to be a first-order issue, given the limited amount of time between the law's passage and the period that we study, and the length of time required for hospitals to implement EHR and attain MU. HITECH was passed by Congress in 2009, but the detailed mandates in the Incentive Program were not announced until August 2010. If a hospital did not have EHR at that time, it would have taken about two years to implement it (Miller and Tucker 2011). If the hospital already had EHR, it would have taken about another 3 years (median) to move from adoption to MU. These numbers were obtained by following the approach in Appari et al. (2013): for each hospital in the treatment group, we use HADB (2006-2011) to identify the time difference between implementation of all MU functionalities and attestation of MU. While this calculation is only an approximation, these numbers suggest that hospitals in AlwaysMU can reasonably be expected to have reached MU prior to the announcement of the incentive program. Further support for this argument can be seen in Figure 1 later in the paper: only about 18% of the hospitals demonstrated MU in 2011.

[11] We also considered a traditional instrumental variable approach, where the instrument for MU status is the MU saturation rate (the percentage of hospitals that had reached MU status prior to a focal hospital, within a 25-mile radius of the hospital), since hospitals are more likely to reach MU status due to competitive forces. We obtained qualitatively similar results.
More specifically, for each hospital in the treatment and control groups we construct a two-period panel with a pre-treatment quality score taken from the fourth quarter of 2011 and a post-treatment quality score taken from the first quarter of 2013. With this panel data set-up, the average treatment effect of attaining MU is the difference between the pre-post, within-subjects differences of the treatment and control groups. We estimate the following model:

Quality_it = β0 + β1 TreatmentGroup_i + β2 PostPeriod_t + β3 (TreatmentGroup_i × PostPeriod_t) + c_i + X_it′ δ + ε_it,   (1)

Subscripts i (= 1…N) and t (= 1 or 2) index individual hospitals and time periods, respectively. Quality_it represents the quality score of hospital i at time t. TreatmentGroup_i and PostPeriod_t are indicators for the treatment group and the post-period, respectively: TreatmentGroup_i is 1 if hospital i is in the treatment group and 0 otherwise; PostPeriod_t is 1 if t = 2, i.e., the post-treatment period, and 0 if t = 1, i.e., the pre-treatment period. The parameter c_i absorbs hospital-level, time-invariant unobserved effects. X_it is a vector of the control variables introduced in Section 3.3. Finally, ε_it are the idiosyncratic errors, which vary across i and t.
Model (1) can be estimated by either the fixed effects estimator or the random effects estimator (Wooldridge 2002). While we conduct and report both, we note that in a two-period panel, a simple yet effective way to estimate fixed effects models is through a first-differencing transformation:

ΔQuality_it = β0 + β3 Δ(TreatmentGroup_i × PostPeriod_t) + ΔX_it′ δ + Δε_it,   (2)

where ΔQuality_it = Quality_i2 − Quality_i1, ΔX_it = X_i2 − X_i1, and Δε_it = ε_i2 − ε_i1. Since the hospital-level fixed effects c_i are assumed to be time invariant, they cancel out after first differencing. The first-difference (FD) model yields estimates identical to those of the fixed effects model, but is easier to implement. Therefore, we proceed with our analyses using the random-effects DID model and the FD model. In both empirical models, the main interest of our analyses is the estimate on TreatmentGroup_i × PostPeriod_t: a positive and significant estimate supports the hypothesis that meaningful use of EHR improves hospitals' quality of care.
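As an illustration of how models (1) and (2) map to estimation code, the R sketch below fits both on simulated two-period data; it uses pooled OLS in place of the random-effects estimator for brevity, and a single placeholder control x1 stands in for the vector X_it.

```r
# Simulated two-period panel: one row per hospital-period. quality = composite
# JC score; treat = TreatmentGroup_i; post = PostPeriod_t; x1 stands in for X_it.
set.seed(1)
n <- 200
panel <- data.frame(
  id    = rep(seq_len(n), each = 2),
  post  = rep(0:1, times = n),
  treat = rep(rbinom(n, 1, 0.5), each = 2),
  x1    = rnorm(2 * n)
)
panel$quality <- 90 + 2 * panel$treat + 1 * panel$post +
  0.4 * panel$treat * panel$post + 0.5 * panel$x1 + rnorm(2 * n)

# Model (1): DID via pooled OLS; the coefficient on treat:post estimates beta_3.
did <- lm(quality ~ treat * post + x1, data = panel)

# Model (2): first-difference. Differencing removes c_i; with two periods the
# MU effect is the coefficient on the (time-invariant) treatment indicator.
wide <- reshape(panel, idvar = "id", timevar = "post", direction = "wide")
fd <- lm(I(quality.1 - quality.0) ~ treat.1 + I(x1.1 - x1.0), data = wide)

summary(did)$coefficients["treat:post", ]
summary(fd)$coefficients["treat.1", ]
```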
4.3 Validating the DID Identification Assumption
A critical assumption in the DID identification strategy is that of parallel historical trends in the dependent variable between treatment and control groups (Bertrand et al. 2004). That is, absent the treatment (in our case, the implementation of the incentive program), the treatment and control groups should demonstrate similar trends over time in the outcome variable. This assumption is not trivial, since the three hospital groups may present distinct characteristics. Because the historical JC hospital quality data are not publicly available, we use the quality measures from the CMS-HC dataset as a proxy. As mentioned in Section 3.2, the quality measures in JC and CMS-HC are largely aligned, but the former are updated quarterly whereas the latter are updated annually. Figure 1(a) shows the annual quality trends of the treatment and control groups from 2010 to 2012. While there are no signs that the DID assumption is violated for the NeverMU control group, the historical quality trends between the treatment group and the AlwaysMU control group are not perfectly parallel (albeit only slightly).
We address this issue by using propensity score matching (Rosenbaum and Rubin 1983; Xue et al. 2011; Brynjolfsson et al. 2011; Mithas and Krishnan 2009). Propensity score matching matches observations in the treatment and control groups according to their propensities for receiving the treatment, as a function of their observable traits. In our context, for each hospital in the treatment group, propensity score matching identifies the most comparable hospital in the control group. This helps exclude hospitals that are vastly different in their unobservable qualities from the treatment group. For the matching process, in addition to a broad set of hospital characteristics, we also include quality changes from 2010 to 2011 and from 2011 to 2012 when calculating propensity scores. Figure 1(b) shows that after matching, the three hospital groups present parallel historical quality trends. We then conduct DID between the treatment group and the matched subset of control-group hospitals. We report results both without and with the matching procedure.
5 Results

The data from the Medicare EHR Incentive Program show a significant uptake of EHR and MU among acute care hospitals from 2011 to 2013 (Figure 2). In 2011, the average rate of MU attainment was 18% across states. The percentage increased to 54% in 2012 and 86% in 2013. As of November 2014, 4,283 hospitals had achieved Stage 1 MU. These statistics alleviate the concern that only "good" hospitals participate in the program, and at the same time suggest that the incentives are substantial enough that EHR adoption and MU in US acute care hospitals accelerated from 18% to 86% in just two years.
Table 5 shows key summary statistics of our dataset. There are 2,344 hospitals in our dataset, of which 914 belong to the treatment group, 483 to the AlwaysMU control group, and 947 to the NeverMU control group.[12] Table 5 also uncovers some other interesting patterns across these groups. Compared to hospitals in the AlwaysMU control group, hospitals in the treatment group had significantly lower case mix and were more likely to be located in rural areas. Compared to those in the NeverMU control group, treatment hospitals had significantly higher throughput, measured by both the number of inpatient days and the number of discharges. These differences could simultaneously correlate with the quality of care and with treatment assignment (i.e., acquiring MU in 2012). Therefore, as a robustness check we employ propensity score matching to identify a subset of hospitals within each control group that are as similar as empirically possible to those in the treatment group. Additional robustness checks include quantile analyses and censored regression analyses. To reveal greater policy and managerial insights, we (a) construct a continuous MU variable and investigate the relation between the degree of MU and the degree of quality improvement, and (b) conduct a stratification analysis to reveal the potentially heterogeneous effect of MU among different types of hospitals.

[12] Based on the CMS-HC dataset, there were 4,860 hospitals in the US by the end of 2011. Among them, 3,459 were acute care hospitals. We excluded federal, tribal, and physician-owned hospitals (n=231) and hospitals located outside the 50 states and DC, e.g., Guam, the Virgin Islands, etc. (n=55). Finally, hospitals with missing values on any variables of our models were deleted listwise.
5.1 Main Results
Figure 3 presents the mean quality changes among the three hospital groups from the pre-treatment period to the post-treatment period. Compared to either of the two control groups, the treatment group exhibits significantly greater quality improvement. To further examine the effect, Table 6 summarizes our estimations across eight different model setups, based on the choice of estimator, the choice of control group, and whether or not control variables are included. These results consistently show that meaningful use of EHR has a significant and positive effect on quality of care. We note that the random-effects DID estimator and the FD estimator yield highly consistent estimates of the quality effect of MU, which ranges roughly between 0.32 and 0.47 across the different models. The incremental gain is consistent with findings in Appari et al. (2013).
To better understand the size of this effect, we illustrate it in the context of an important indicator of care quality: hospital readmission. Readmission is an important problem in healthcare because it signifies poor quality of care and generates very high costs (Jencks et al. 2009; Bardhan et al. 2011). The CMS-HC dataset shows an average 30-day hospital-wide readmission rate of 16% (approximately one in six) in acute care hospitals at the end of 2011. Our data show that a 0.4-point quality improvement roughly translates into a 0.14% reduction in the readmission rate.[13] With over 20 million annual inpatient discharges in U.S. hospitals and an estimated cost of $7,400 per readmission (Friedman and Basu 2004), the 0.14% reduction in the readmission rate represents up to 28,000 fewer readmission cases and up to $207.2 million in cost savings per year. While the magnitude of this effect may not seem striking when compared to overall healthcare expenditures, it is not trivial. More importantly, it indicates that the effect of MU on healthcare quality improvement is indeed in the right direction, even for the first stage of MU and over the short period of time for which we have data.

[13] Prior studies show that the relation between process quality and readmission rate is insignificant (Shwartz et al. 2011). One potential explanation is that hospitals with high process quality are more likely to attract complex patient cases, which then incur a higher readmission rate. We address this issue using TACMI, an index that describes the complexity of a hospital's overall patient base. Our derivation is as follows. We categorize hospitals by whether their TACMI values are above or below the median TACMI value. The mean quality scores for the high and low TACMI groups are 98.08% and 96.94%, respectively, in the pre-treatment period. In the same period, the high TACMI group had a mean 30-day hospital-wide readmission rate of 15.79% and the low TACMI group 16.20%. Based on this quality-readmission relationship, a 0.4% quality improvement from MU translates into a 0.14% reduction in the readmission rate [−0.14 ≈ 0.4 × (15.79−16.20) / (98.08−96.94)].
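This back-of-envelope translation can be reproduced in a few lines of R; all figures are those reported in the text and footnote 13.

```r
# Quality-to-readmission slope from footnote 13:
slope <- (15.79 - 16.20) / (98.08 - 96.94)  # readmission change per quality point
quality_gain <- 0.4                         # estimated MU effect on quality
readm_change <- quality_gain * slope        # ~ -0.14 percentage points

# National implications, using the figures cited in the text:
discharges <- 20e6                   # annual U.S. inpatient discharges
cost_per_readmission <- 7400         # Friedman and Basu (2004)
fewer_cases <- 0.0014 * discharges   # 0.14% of 20M discharges = 28,000 cases
savings <- fewer_cases * cost_per_readmission  # 28,000 x $7,400 = $207.2M/year
c(readm_change = readm_change, fewer_cases = fewer_cases, savings = savings)
```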
5.2 Propensity Score Matching
We conduct propensity score matching to address the issues that there are significant differences in observable hospital characteristics among the hospital groups and that the historical quality trends of the three hospital groups are not perfectly parallel. We use the Matching package in R, which optimizes covariate balance through a genetic search algorithm (Sekhon 2011). Each treatment hospital is matched with three hospitals in the AlwaysMU (and subsequently the NeverMU) control group. We then apply the empirical models to the matched data to examine the robustness of our prior findings and to derive a more conservative estimate of the impact of MU against these two control groups. Table 7 shows the results from the matched samples. Across the different models, the estimates of TreatmentGroup_i × PostPeriod_t remain positive and significant, confirming the robustness of the MU effect in our main results.
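The sketch below illustrates the matching step with the Matching package cited above; the toy covariates and tuning parameters (e.g., pop.size) are illustrative assumptions, not the paper's actual settings.

```r
library(Matching)  # Sekhon (2011)

# Toy hospital-level data: Tr = 1 for treatment hospitals, 0 for the AlwaysMU
# (or NeverMU) control group; X holds observable traits plus the pre-period
# quality changes included in the propensity model.
set.seed(1)
n  <- 300
Tr <- rbinom(n, 1, 0.5)
X  <- cbind(beds = rnorm(n, 200, 80), dq_10_11 = rnorm(n), dq_11_12 = rnorm(n))

# Genetic search for covariate-balancing weights, then 1:3 matching
# (each treatment hospital matched with three control hospitals).
genout <- GenMatch(Tr = Tr, X = X, M = 3, pop.size = 50, print.level = 0)
mout   <- Match(Tr = Tr, X = X, M = 3, Weight.matrix = genout)

# Indices of the matched units, used to subset the data before re-estimating
# the DID and FD models on the matched sample.
matched_ids <- unique(c(mout$index.treated, mout$index.control))
```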
5.3 Other Robustness Checks
Quantile Analysis
Prior research has shown that the ceiling effect of healthcare quality can be an important issue in health IT research, because it affects the interpretation of the effect of health IT (Jones et al. 2010). As pointed out by Jones et al. (2010), a one-unit improvement in quality score from 95% to 96% is considerably more difficult than a one-unit improvement below that level, say from 70% to 71%. Quantile regression estimates the treatment effect at different quantiles, specifically at the median, so as to overcome the impact of the ceiling effect (Koenker and Bassett 1978). It can hence be considered a robustness check for our earlier main results. Table 8 presents the results from the quantile analysis using the DID specification. When using AlwaysMU as the control group, the effect is significant at the median. When using NeverMU as the control group, the effect is significant at the median and the lower quartile. Table 9 further presents the results using the FD specification. Note that due to the first-differencing transformation, the lower quantiles show a greater MU effect than the upper quantiles. Across all these specifications, however, we see that MU has a positive and statistically significant impact on quality of care at the median.
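A minimal sketch of the quantile check under the FD specification, using rq() from the quantreg package (Koenker and Bassett 1978); the first-differenced data frame here is simulated for illustration.

```r
library(quantreg)

# Simulated first-differenced data: dq = post-minus-pre change in the quality
# score; treat = 1 for hospitals that attained MU in 2012.
set.seed(1)
fd <- data.frame(treat = rbinom(500, 1, 0.5))
fd$dq <- 0.4 * fd$treat + rnorm(500)

# MU effect at the lower quartile, median, and upper quartile; the median
# estimate is robust to the ceiling effect discussed above.
fit <- rq(dq ~ treat, tau = c(0.25, 0.50, 0.75), data = fd)
summary(fit)
```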
Censored Regression Analysis
Another empirical issue is data censoring; that is, our quality metric is strictly bounded between 0 and 100. We found that a number of hospitals attain the maximum quality score. Specifically, in the pre-treatment period, 37 treatment hospitals, 22 AlwaysMU hospitals, and 35 NeverMU hospitals have the maximum quality score (100). In the post-treatment period, the numbers are 60, 47, and 48, respectively. These top-censored observations may lead to bias in our OLS estimations. To address this, we use a Tobit model to estimate the effect of MU under the DID specification (Tobin 1958). From Table 10 we find that the effect of MU remains positive and significant, with coefficients similar to those in the main results. This suggests that the censored observations do not have a strong impact on our prior estimations.
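A minimal sketch of the censored specification; we use tobit() from the AER package as one standard implementation (the paper does not name its software), with right-censoring at the 100-point maximum of the quality score.

```r
library(AER)  # tobit() is a convenience wrapper around survival::survreg

# Simulated two-period panel as in the DID setup; quality is capped at 100.
set.seed(1)
n <- 300
d <- data.frame(treat = rep(rbinom(n, 1, 0.5), each = 2),
                post  = rep(0:1, times = n))
d$quality <- pmin(100, 96 + 1.5 * d$treat + 0.8 * d$post +
                    0.4 * d$treat * d$post + rnorm(2 * n, sd = 2))

# Tobit DID with right-censoring at 100; treat:post is the MU effect.
fit <- tobit(quality ~ treat * post, right = 100, data = d)
summary(fit)
```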
5.4 Continuous MU Variable
All the analyses presented so far consider MU as a dichotomous status: a hospital either attained MU or did not. Although this is true from the perspective of the Medicare EHR Incentive Program, and indeed provides a useful metric and a specific goal, it is intuitive to ask whether a greater degree of MU leads to a greater degree of quality improvement, provided that the hospital has reached the minimum requirements specified by the MU regulation. To answer this question, we look at the treatment group and the AlwaysMU control group. These hospitals had achieved MU, and the data from the Medicare EHR Incentive Program contain information about these hospitals' performance on the core and the menu MU objectives (see Section 4.1). We examine different ways to construct the continuous MU variable. We first consider two scenarios: one with only the core measures and the other with both the core and the menu measures. For each scenario, we then calculate the average as well as the product of the measures, as sketched below. Table 11 shows the results from our analysis of the continuous MU variable. Our estimation indicates that a higher degree of MU indeed provides a greater improvement in quality of care. This finding is consistent with the early work of Devaraj and Kohli (2003) and reaffirms the importance of measuring and factoring in "actual use" when studying the business impacts of IT.
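To illustrate the two construction scenarios, the R sketch below assumes each attesting hospital's per-objective compliance rates are available as proportions; the vectors core and menu are hypothetical placeholders for the CMS attestation fields.

```r
# Hypothetical per-hospital compliance rates (as proportions) on the attested
# MU objectives; the actual CMS attestation fields differ.
core <- c(0.92, 0.85, 0.97, 0.88)  # core objectives
menu <- c(0.80, 0.75, 0.90)        # attested menu objectives

# Scenario 1 uses core measures only; scenario 2 uses core and menu measures.
# For each scenario, the continuous MU variable is the average or the product.
mu_core_avg  <- mean(core)
mu_core_prod <- prod(core)
mu_all_avg   <- mean(c(core, menu))
mu_all_prod  <- prod(c(core, menu))
c(mu_core_avg, mu_core_prod, mu_all_avg, mu_all_prod)
```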
5.5 MU Effects and Hospital Characteristics: A Stratification Analysis
Our main results in Table 6 show that MU has a positive and significant effect on quality of care. However, this effect may be heterogeneous across hospitals. To draw proper policy implications from our analyses, we investigate how hospital and environmental characteristics influence the effect of MU through a stratification analysis. We consider a number of characteristics, including hospital size, ownership, teaching status, region, and urban status.

The stratification analysis teases out the quality effect of MU for various subsamples (strata) of hospitals. In each stratum, we estimate the FD model using only the treatment and control hospitals in that stratum. For example, in the small-size stratum we estimate the model using data only from hospitals with fewer than 100 beds in both the treatment and control groups; in the government-ownership stratum, we focus only on hospitals owned by government agencies. We choose the FD model instead of the DID model because in some strata there is no variation in certain time-constant control variables. For instance, in the AlwaysMU control group, the small-size stratum has no teaching hospitals, rendering the DID estimation impossible. The FD estimation, on the other hand, does not have this problem, since time-invariant control variables have no impact on the estimation.
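The stratified estimation can be sketched as follows in R, assuming a first-differenced data frame with a beds column; the 100-bed cutoff follows the small-size stratum described above, and the simulated effect sizes are arbitrary.

```r
# Simulated first-differenced data with hospital size; effect sizes are arbitrary.
set.seed(1)
fd <- data.frame(treat = rbinom(600, 1, 0.5), beds = rpois(600, 150))
fd$dq <- ifelse(fd$beds < 100, 0.98, 0.23) * fd$treat + rnorm(600)

# Estimate the FD model separately within each stratum; time-invariant controls
# drop out of the FD model, so only the treatment indicator (and any
# differenced controls) remain.
strata <- list(small = fd$beds < 100, large = fd$beds >= 100)
sapply(strata, function(idx) coef(lm(dq ~ treat, data = fd[idx, ]))["treat"])
```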
Table 12 and Figure 4 show the results from our stratification analysis. The results are interesting and have several important policy implications. We find that hospitals traditionally deemed to have lower quality, such as small, non-teaching, and rural hospitals, can in fact attain greater quality improvement from meaningful use of EHR than other hospitals. Specifically, the effect of MU in small hospitals is over four times the MU effect in large hospitals (0.98 vs. 0.23; model 2 in Figure 4). Similarly, the effect in rural hospitals is over eight times the MU effect in urban hospitals (1.16 vs. 0.13; model 2 in Figure 4). It is noteworthy that these disadvantaged hospitals were also the ones shown to be slower in adopting EHR (Jha et al. 2009; DesRoches et al. 2012). These results suggest that the Medicare EHR Incentive Program not only accelerated the overall adoption and meaningful use of EHR technology, but, more importantly, significantly enhanced quality for the disadvantaged hospitals that are in greater need of better care. In other words, MU of EHR can potentially be an effective approach to mitigating healthcare disparities.
6 Concluding Remarks

The goal of this study is to investigate the relationship between hospitals' meaningful use (MU) of electronic health records (EHR) and quality of care. Through multiple empirical specifications and numerous robustness checks, we find that meaningful use of EHR significantly improved quality of care. More importantly, disadvantaged (small, non-teaching, or rural) hospitals tend to attain a greater degree of quality improvement from MU.

The results of this study are important for three reasons. First, while there have been multiple studies on the beneficial effects on quality of care resulting from the implementation of EHR or MU, to the best of our knowledge this study is the first with a formal, objective measurement of MU. Second, we are among the first to leverage the Medicare EHR Incentive Program for exogenous variation in identifying the clinical impact of MU. Our findings provide strong empirical evidence of the positive quality impact of meaningfully using EHR technology. Third, from a policy evaluation perspective, the findings support the effectiveness of the Medicare EHR Incentive Program and the goal of the HITECH Act. As the federal initiative begins to move toward Stage 2 MU,[14] this study gives an early assessment of the clinical benefits and policy implications of the MU initiative.
[14] Stage 2 MU requires a greater degree of system usage, consolidates a number of Stage 1 MU measures, and introduces a few new measures. A comprehensive comparison of Stage 1 and Stage 2 MU objectives is available at http://cms.gov/Regulations-and-Guidance/Legislation/EHRIncentivePrograms/Downloads/Stage1vsStage2CompTablesforHospitals.pdf
Some limitations of our study point to several directions for future research. First, the Medicare EHR Incentive Program intends to implement multiple stages of MU; in this study we considered only Stage 1 MU, which comprises basic but essential objectives for meaningful use of EHR. Second, there have been discussions of the limitations of existing quality measures (Kern et al. 2009; Jones et al. 2010). While we found an effect of MU on the existing quality measures, richer and more accurate measures would have been ideal.
Despite these limitations, extending our empirical framework to higher stages of MU or to better measures of healthcare quality is straightforward. Most importantly, our study represents an important first step in understanding the effect of not just adoption, but meaningful use, of EHR technology on quality of care. Finally, and more broadly, by moving beyond adoption and focusing on the meaningful use of IT, our study contributes to the long and growing literature in information systems on the adoption and value of information technologies (Brynjolfsson and Hitt 1996; Banker and Kauffman 2004) by examining a different but socially important outcome metric: the quality of healthcare services.
Table 1: Summary of prior studies on the effects of EHR

| Study | Main Data Sources | Data Period | Data Units | Main Dependent Variables | Main Independent Variables | Identification Strategy | Analysis | Main Findings |
|---|---|---|---|---|---|---|---|---|
| Agha (2014) | AHA, HADB, MC | 1998–2005 | 3,880 hospitals | Hospital saving and quality | Use of HIT (EMR or CDS) | FE | PDA | Negative. HIT has no effect on medical expenditures and patient outcomes. |
| Appari et al. (2013) | CMS-HC, HADB | 2006–2010 | 3,921 hospitals | Process quality | EMR capability | FE | PDA | Positive. Increased EHR capability yielded increased process quality. |
| Dey et al. (2013) | CMS-CR, HADB | NA | 1,011 hospitals | Operational performance | EHR capability | PA | CSA | Positive. EHR capability was positively associated with operational performance. |
| McCullough et al. (2013) | AHA, HADB, MC | 2002–2007 | 2,953 hospitals | Patient outcome (mortality) | Use of EHR and CPOE | DID | PDA | Negative. There was no relationship between HIT and mortality. |
| Appari et al. (2012) | CMS-IPPS, CMS-HC, HADB | 2009 | 2,603 hospitals | Medication administration quality | Use of CPOE and eMAR | PA | CSA | Positive. Use of eMAR and CPOE improved adherence to medication guidelines. |
| Dranove et al. (2012) | AHA, CMS-CR, HADB | 1996–2009 | 4,231 hospitals | Hospital operating costs | EHR adoption | FE | PDA | Mixed. EHR adoption was initially associated with increased cost, which decreased after 3 years if complementary conditions were met. |
| Hah and Bharadwaj (2012) | AHA, HADB | 2008–2010 | 2,557 hospitals | Hospital operation and financial performance | HIT use and HIT capital | None | PDA | Positive. HIT use and HIT capital positively related to operation and financial performance. |
| Furukawa (2011) | NHAMCS | 2006 | 364 EDs | ED throughput | EMR capability | IV | CSA | Mixed. Advanced EHR improved ED efficiency, but basic EHR did not. |
| Miller and Tucker (2011) | CDC-VSCP, HADB | 1995–2006 † | 3,764 hospitals | Neonatal mortality | EHR adoption | IV, FE | PDA | Positive. EHR reduced neonatal mortality. |
| Romano and Stafford (2011) | NAMCS, NHAMCS | 2005–2007 | 243,478 patient visits | Ambulatory quality | Use of EHR and CDS | None | CSA | Negative. EHR and CDS were not associated with ambulatory care quality. |
| Furukawa et al. (2010) | COSHPD, HADB | 1998–2007 | 326 hospitals in California | Nurse staffing and nurse-sensitive patient outcomes | EHR implementation | FE | PDA | Negative. EHR systems did not decrease hospital costs, length of stay, or nurse staffing levels. |
| Himmelstein et al. (2010) | CMS-CR, DHA, HADB | 2003–2007 | Approx. 4,000 hospitals | Hospital costs and quality | Degree of computerization | None | CSA | Negative. Computerization had no effect on hospital costs and quality. |
| Jones et al. (2010) | AHA, CMS-HC, HADB | 2004, 2007 † | 2,086 hospitals | Process quality | EHR capability | DID, FE, PA | PDA | Mixed. Adopting basic EHR significantly increased care quality for heart failure, but adopting advanced EHR significantly decreased care quality for acute myocardial infarction and heart failure. |
| McCullough et al. (2010) | AHA, CMS-HC, HADB | 2004–2007 | 3,401 hospitals | Process quality | HIT adoption (EHR & CPOE) | FE | PDA | Mixed. HIT adoption improved 2 of 6 process quality measures. |
| This paper | CMS-CR, CMS-EHRIP, CMS-HC, HADB, JC | 2011Q4, 2013Q1 | 2,747 hospitals | Process quality | Meaningful use of EHR | DID, FD, PA | PDA | Positive. Meaningful use of EHR significantly improves quality of care, and the effect is larger among hospitals that are small, non-teaching, or located in rural areas. |

† Technology use/adoption is determined a year before.
Note. AHA=American Hospital Association Annual Survey; CDC-VSCP=Centers for Disease Control and Prevention's Vital Statistics Cooperative Program; CDS=Clinical decision support; CMS-IPPS=CMS Inpatient Prospective Payment System; CMS-CR=CMS Cost Reports; CMS-EHRIP=CMS EHR Incentive Programs; CMS-HC=CMS Hospital Compare database; COSHPD=California Office of Statewide Health Planning and Development Annual Financial Disclosure Reports and Patient Discharge Databases; CPOE=Computerized physician order entry; CSA=Cross-sectional analysis; DID=Difference-in-differences; DHA=Dartmouth Health Atlas; ED=Emergency department; EHR=Electronic health records; eMAR=Electronic medication administration record; FD=First-difference; FE=Fixed effects; HADB=Healthcare Information and Management Systems Society's Analytics Database; IV=Instrumental variables; JC=the Joint Commission; MC=Medicare claims; NAMCS=National Ambulatory Medical Care Survey; NHAMCS=National Hospital Ambulatory Medical Care Survey; PA=Propensity adjustments; PDA=Panel data analysis; WSDH=Washington State Department of Health hospital database
Table 2: Data Descriptions

| Variable | Type | Description | Source |
|---|---|---|---|
| Meaningful Use (MU) | Binary, time-invariant | Indicates whether a hospital has reached MU at a specific point in time | CMS-EHRIP |
| Quality of Care | Numeric, time-varying | Composite quality score for the process of care, ranging from 0 (lowest quality) to 100 (highest quality) | JC |
| Age | Numeric, time-invariant | Age of hospital as of 2012 (2012 minus the year formed) | HADB |
| Size | Numeric, time-varying | Total number of hospital beds | CMS-CR |
| Annual Discharges | Numeric, time-varying | Total number of inpatient discharges in a year | CMS-CR |
| Annual Inpatient Days | Numeric, time-varying | Total number of inpatient days in a year | CMS-CR |
| Annual Medicare Discharges | Numeric, time-varying | Total number of Medicare inpatient discharges in a year | CMS-CR |
| Annual Medicare Inpatient Days | Numeric, time-varying | Total number of Medicare inpatient days in a year | CMS-CR |
| Transfer-adjusted case mix index (TACMI) | Numeric, time-varying | A value used to characterize the overall severity of the hospital's patient base | CMS-IPPS |
| Teaching status | Binary, time-invariant | Whether the hospital is a member of COTH | COTH |
| Ownership | Categorical, time-invariant | Whether the hospital is owned by a government, non-profit, or proprietary agency | CMS-HC |
| Rural area | Binary, time-invariant | Whether the hospital is located in a rural area | RUCA 2.0, CMS-HC |
| Region | Categorical, time-invariant | Whether the hospital is located in the Midwest, Northeast, South, or West | Census regions, CMS-HC |

Note. COTH=Council of Teaching Hospitals; RUCA=Rural Urban Commuting Area. Other abbreviations follow Table 1.
Table 3: Stage 1 MU Core and Menu Objectives

| # | Core Objectives | # | Menu Objectives |
|---|---|---|---|
| 1 | CPOE for Medication Orders | 1 | Drug Formulary Checks |
| 2 | Drug Interaction Checks | 2 | Advanced Directives |
| 3 | Maintain Problem List | 3 | Clinical Lab Test Results |
| 4 | Active Medication List | 4 | Patient Lists |
| 5 | Medication Allergy List | 5 | Patient-specific Education Resources |
| 6 | Record Demographics | 6 | Medication Reconciliation |
| 7 | Record Vital Signs | 7 | Transition of Care Summary |
| 8 | Record Smoking Status | 8 | Immunization Registries Data Submission |
| 9 | Clinical Quality Measures | 9 | Reportable Lab Results |
| 10 | Clinical Decision Support Rule | 10 | Syndromic Surveillance Data Submission |
| 11 | Electronic Copy of Health Information | | |
| 12 | Discharge Instructions | | |
| 13 | Electronic Exchange of Clinical Information | | |
| 14 | Protect Electronic Health Information | | |
Table 4: Transfer factors for eligible hospitals

Columns indicate the year in which the hospital first demonstrates MU.

| Payment Year | 2011 | 2012 | 2013 | 2014 | 2015 |
|---|---|---|---|---|---|
| 2011 | 1 | | | | |
| 2012 | 0.75 | 1 | | | |
| 2013 | 0.5 | 0.75 | 1 | | |
| 2014 | 0.25 | 0.5 | 0.75 | 0.75 | |
| 2015 | | 0.25 | 0.5 | 0.5 | 0.5 |
| 2016 | | | 0.25 | 0.25 | 0.25 |
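To illustrate how the transfer factors in Table 4 operate, the sketch below scales a base incentive amount by the factor for each payment year in a cohort's schedule. The base amount here is a hypothetical constant; the actual amount is determined by a CMS formula not reproduced here:

```python
# First MU year -> transfer factors over successive payment years (Table 4).
TRANSITION = {
    2011: [1.0, 0.75, 0.5, 0.25],
    2012: [1.0, 0.75, 0.5, 0.25],
    2013: [1.0, 0.75, 0.5, 0.25],
    2014: [0.75, 0.5, 0.25],
    2015: [0.5, 0.25],
}

def payment_schedule(first_mu_year: int, base_amount: float) -> dict:
    """Map each payment year to the incentive payment for a given MU cohort."""
    factors = TRANSITION[first_mu_year]
    return {first_mu_year + i: base_amount * f for i, f in enumerate(factors)}

print(payment_schedule(2013, 2_000_000))
# {2013: 2000000.0, 2014: 1500000.0, 2015: 1000000.0, 2016: 500000.0}
```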
Table 5. Summary Statistics

| | Treatment Group | Control Group AlwaysMU | P-Value | Control Group NeverMU | P-Value |
|---|---|---|---|---|---|
| # of hospitals | 914 | 483 | | 947 | |
| Mean Age | 41.63 (37.39) | 35.39 (33.44) | 0.001 | 38.21 (37.53) | 0.049 |
| Mean Size | 229.2 (201.9) | 248.3 (194.9) | 0.085 | 212.1 (182) | 0.056 |
| Mean Total Inpatient Discharges | 11.52 (10.87) | 12.62 (10.66) | 0.067 | 10.38 (9.71) | 0.017 |
| Mean Total Inpatient Days | 53.71 (58.49) | 59.15 (55.84) | 0.089 | 48.41 (51.4) | 0.038 |
| Mean Medicare Inpatient Discharges | 3.75 (3.182) | 3.912 (3.344) | 0.383 | 3.399 (3.072) | 0.016 |
| Mean Medicare Inpatient Days | 19.28 (18.55) | 20.15 (18.7) | 0.408 | 17.59 (17.56) | 0.043 |
| Mean TACMI | 1.459 (0.261) | 1.521 (0.255) | < 0.001 | 1.472 (0.269) | 0.301 |
| Percent of Teaching Hospitals | 10.5% | 11.6% | 0.533 | 9.2% | 0.34 |
| Percent of Rural Hospitals | 30.2% | 20.7% | < 0.001 | 29.4% | 0.692 |
| Percent of Government Hospitals | 15.2% | 12.4% | 0.157 | 17.6% | 0.158 |
| Percent of Nonprofit Hospitals | 67.2% | 60.7% | 0.015 | 63.8% | 0.123 |
| Percent of Proprietary Hospitals | 17.6% | 26.9% | < 0.001 | 18.6% | 0.587 |
| Percent of Hospitals in the Midwest | 24% | 23% | 0.682 | 19.6% | 0.024 |
| Percent of Hospitals in the Northeast | 19.7% | 14.1% | 0.009 | 15.6% | 0.021 |
| Percent of Hospitals in the South | 39.9% | 45.3% | 0.051 | 43.4% | 0.13 |
| Percent of Hospitals in the West | 16.4% | 17.6% | 0.573 | 21.3% | 0.007 |
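The comparisons in Table 5 can be generated covariate by covariate with a two-sample test between each control group and the treatment group. The exact test used is not stated in the table, so the sketch below assumes a Welch t-test; the data frame and column names are hypothetical:

```python
import pandas as pd
from scipy import stats

def balance_table(df: pd.DataFrame, covariates: list) -> pd.DataFrame:
    """Compare covariate means between treatment and control hospitals."""
    treat, ctrl = df[df["treated"] == 1], df[df["treated"] == 0]
    rows = []
    for c in covariates:
        _, p = stats.ttest_ind(treat[c], ctrl[c], equal_var=False)  # Welch t-test
        rows.append({"covariate": c,
                     "treat_mean": treat[c].mean(),
                     "ctrl_mean": ctrl[c].mean(),
                     "p_value": p})
    return pd.DataFrame(rows)
```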
Table 6. Panel Data Models for the Quality Effect of MU

| | (1) DID, AlwaysMU | (2) DID, AlwaysMU | (3) DID, NeverMU | (4) DID, NeverMU | (5) FD, AlwaysMU | (6) FD, AlwaysMU | (7) FD, NeverMU | (8) FD, NeverMU |
|---|---|---|---|---|---|---|---|---|
| Includes control variables | | Yes | | Yes | | Yes | | Yes |
| TreatmentGroup | -.629*** (.154) | -.337** (.145) | -.040 (.133) | -.083 (.129) | | | | |
| PostPeriod | .389*** (.133) | .322*** (.136) | .262*** (.078) | .180** (.078) | | | | |
| TreatmentGroup × PostPeriod | .319** (.155) | .320** (.156) | .446*** (.111) | .479*** (.112) | .319** (.155) | .328** (.156) | .446*** (.111) | .438*** (.112) |
| Age | | -.003* (.001) | | -.003** (.001) | | | | |
| Size | | -.000 (.001) | | -.000 (.001) | | .000 (.001) | | .000 (.001) |
| TotalDischarges | | .074** (.020) | | .041 (.022) | | -.016 (.040) | | -.006 (.017) |
| TotalInpatientDays | | -.014* (.005) | | -.009 (.005) | | .010 (.014) | | -.016 (.014) |
| MedicareDischarges | | .224* | | .397*** | | .097 (.077) | | .153 (.151) |
| MedicareInpatientDays | | -.039 (.015) | | -.058*** (.014) | | -.009 (.039) | | .007 (.030) |
| TACMI | | 2.447*** (.428) | | 2.067*** (.335) | | -.535 (.820) | | -1.164* (.715) |
| Rural | | -.777*** (.078) | | -.394*** (.156) | | | | |
| Teach | | -.253 (.142) | | -.121 (.139) | | | | |
| Nonprofit | | .576*** (.200) | | .613*** (.220) | | | | |
| Proprietary | | 1.220*** (.226) | | .591*** (.210) | | | | |
| Midwest | | .765*** (.154) | | .741*** (.255) | | | | |
| Northeast | | .722*** (.210) | | .197 (.227) | | | | |
| South | | .314** (.153) | | .357** (.200) | | | | |
| Constant | 98.001*** (.121) | 93.378*** (.805) | 97.412*** (.094) | 93.615*** (.600) | .389*** (.133) | .407*** (.137) | .262*** (.078) | .313*** (.087) |
| Number of observations | 2794 | 2794 | 3722 | 3722 | 1397 | 1397 | 1861 | 1861 |
Note . Robust standard errors are shown in parentheses. (*p < 0.1; **p < 0.05; ***p < 0.01)
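For reference, the DID columns above correspond to an OLS regression with a TreatmentGroup-by-PostPeriod interaction. A minimal sketch, assuming a stacked hospital-period data frame `panel` with hypothetical column names and only a subset of the controls:

```python
import statsmodels.formula.api as smf

# quality ~ TreatmentGroup + PostPeriod + TreatmentGroup x PostPeriod + controls
did = smf.ols(
    "quality ~ treatment_group * post_period + age + size + tacmi",
    data=panel,  # stacked hospital-period rows; columns are hypothetical
).fit(cov_type="HC1")  # robust standard errors, mirroring the table note
print(did.params["treatment_group:post_period"])  # the DID estimate of the MU effect
```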
Table 7. Results from the Matched Dataset

| | (1) DID, AlwaysMU | (2) DID, AlwaysMU | (3) DID, NeverMU | (4) DID, NeverMU | (5) FD, AlwaysMU | (6) FD, AlwaysMU | (7) FD, NeverMU | (8) FD, NeverMU |
|---|---|---|---|---|---|---|---|---|
| Includes control variables | | Yes | | Yes | | Yes | | Yes |
| TreatmentGroup | -.332*** (.073) | -.362*** (.067) | .006 (.072) | -.001 (.067) | | | | |
| PostPeriod | .434*** (.056) | .332*** (.058) | .344*** (.041) | .265*** (.042) | | | | |
| TreatmentGroup × PostPeriod | .281*** (.071) | .305*** (.072) | .351*** (.059) | .379*** (.059) | .281*** (.071) | .285*** (.072) | .351*** (.059) | .343*** (.059) |
| Age | | -.005*** (.001) | | -.003*** (.001) | | | | |
| Size | | -.000 (.000) | | -.001 (.000) | | .000 (.001) | | .001 (.001) |
| TotalDischarges | | .074*** (.010) | | .034*** (.011) | | -.006 (.020) | | -.009 (.008) |
| TotalInpatientDays | | -.015*** (.002) | | -.006* (.003) | | .002 (.007) | | -.028*** (.008) |
| MedicareDischarges | | .181*** | | .471*** | | .023 | | .209 (.036) |
| MedicareInpatientDays | | -.031** (.007) | | -.072*** (.008) | | .007 (.019) | | .017 (.018) |
| TACMI | | 2.844*** (.215) | | 2.194*** (.197) | | .017 (.405) | | -1.673*** (.379) |
| Rural | | -.730*** (.043) | | -.498*** (.071) | | | | |
| Teach | | -.399*** (.071) | | -.239** (.078) | | | | |
| Nonprofit | | .755*** (.095) | | .656*** (.104) | | | | |
| Proprietary | | 1.192*** (.122) | | .669*** (.119) | | | | |
| Midwest | | .822*** (.081) | | .826*** (.122) | | | | |
| Northeast | | .821*** (.107) | | .216** (.109) | | | | |
| South | | .229*** (.083) | | .379*** (.117) | | | | |
| Constant | 97.710*** (.051) | 92.796*** (.379) | 97.378*** (.051) | 93.417*** (.349) | .434*** (.056) | .438*** (.060) | .344*** (.041) | .410*** (.046) |
| Number of observations | 11664 | 11664 | 13356 | 13356 | 5832 | 5832 | 6678 | 6678 |
Note . Robust standard errors are shown in parentheses. (*p < 0.1; **p < 0.05; ***p < 0.01)
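The matched dataset underlying Table 7 can be constructed with propensity-score matching in the spirit of Rosenbaum and Rubin (1983) and Sekhon (2011). The sketch below is a simplified one-nearest-neighbor approximation with hypothetical column names, not the exact procedure used in the analysis:

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

def match_sample(df: pd.DataFrame, covs: list) -> pd.DataFrame:
    """1-nearest-neighbor matching of control to treatment hospitals on the propensity score."""
    ps = LogisticRegression(max_iter=1000).fit(df[covs], df["treated"])
    df = df.assign(pscore=ps.predict_proba(df[covs])[:, 1])
    treat, ctrl = df[df["treated"] == 1], df[df["treated"] == 0]
    nn = NearestNeighbors(n_neighbors=1).fit(ctrl[["pscore"]])
    _, idx = nn.kneighbors(treat[["pscore"]])
    return pd.concat([treat, ctrl.iloc[idx.ravel()]])
```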
Table 8. Results from the Quantile Analysis with the DID Specification

Cells left blank for the Nonprofit, Midwest, and Northeast rows are not legible in this version of the manuscript.

| | AlwaysMU, q=0.25 | AlwaysMU, q=0.5 | AlwaysMU, q=0.75 | NeverMU, q=0.25 | NeverMU, q=0.5 | NeverMU, q=0.75 |
|---|---|---|---|---|---|---|
| TreatmentGroup | -.597*** (.135) | -.468*** (.091) | -.222*** (.077) | -.183* (.135) | -.114 (.095) | .003 (.062) |
| PostPeriod | .614*** (.131) | .424*** (.064) | .302*** (.053) | .297*** (.124) | .319*** (.085) | .234*** (.054) |
| TreatmentGroup × PostPeriod | .075 (.162) | .137* (.108) | .039 (.093) | .431*** (.181) | .241** (.123) | .078 (.081) |
| Age | -.004*** (.001) | -.002*** (.001) | -.001** (.001) | -.002*** (.001) | -.002*** (.001) | -.002*** (.001) |
| Size | -.001 (.001) | -.000 (.001) | -.001** (.000) | -.001 (.001) | -.001 (.001) | -.000 (.000) |
| TotalDischarges | .063*** (.021) | .026*** (.014) | .018* (.012) | .030* (.028) | .027*** (.014) | .010 (.012) |
| TotalInpatientDays | -.009** (.006) | -.004* (.004) | -.002 (.003) | -.004 (.006) | -.005*** (.003) | -.002 (.003) |
| MedicareDischarges | .124* (.081) | .057** (.050) | -.054* (.044) | .355*** (.091) | .108*** (.047) | .009 (.060) |
| MedicareInpatientDays | -.022 (.015) | -.013** (.010) | .007 (.008) | -.059*** (.017) | -.013** (.009) | -.003 (.012) |
| TACMI | 1.624*** (.242) | .850*** (.161) | .287*** (.128) | 1.870*** (.176) | .949*** (.140) | .437*** (.112) |
| Rural | -1.136*** (.162) | -.476*** (.118) | -.150* (.085) | -.649*** (.138) | -.302*** (.086) | -.054 (.057) |
| Teach | -.148 (.150) | -.146*** (.094) | -.152** (.076) | -.037 (.148) | -.018 (.101) | -.147** (.106) |
| Nonprofit | | | | | | |
| Proprietary | 1.334*** (.227) | .949*** (.157) | .822*** (.207) | 1.037*** (.141) | .634*** | .384*** |
| Midwest | | | | | | |
| Northeast | | | | | | |
| South | .105 (.151) | .156** (.074) | .126* (.086) | .295** (.140) | .217*** (.084) | .282*** (.070) |
| Constant | 94.204*** (.475) | 96.734*** (.309) | 98.659*** (.246) | 93.134*** (.355) | 96.294*** (.262) | 98.051*** (.187) |
| Number of observations | 2794 | 2794 | 2794 | 3722 | 3722 | 3722 |
Note . Bootstrapped standard errors are shown in parentheses. (*p < 0.1; **p < 0.05; ***p < 0.01)
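A minimal sketch of the quantile estimation: the same DID specification is re-estimated at the 0.25, 0.5, and 0.75 quantiles of the quality distribution (Koenker and Bassett 1978). The table's standard errors are bootstrapped; the sketch below reports point estimates only, again with a hypothetical `panel` data frame and column names:

```python
import statsmodels.formula.api as smf

# The same DID specification, estimated at the quartiles of the quality distribution.
for q in (0.25, 0.5, 0.75):
    fit = smf.quantreg(
        "quality ~ treatment_group * post_period + age + size + tacmi",
        data=panel,  # hypothetical stacked panel, as before
    ).fit(q=q)
    print(q, fit.params["treatment_group:post_period"])
```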
Table 9. Results from the Quantile Analysis with the FD Specification

| | AlwaysMU, q=0.25 | AlwaysMU, q=0.5 | AlwaysMU, q=0.75 | NeverMU, q=0.25 | NeverMU, q=0.5 | NeverMU, q=0.75 |
|---|---|---|---|---|---|---|
| TreatmentGroup | -.597*** (.135) | -.468*** (.091) | -.222*** (.077) | -.183* (.135) | -.114 (.095) | .003 (.062) |
| PostPeriod | .614*** (.131) | .424*** (.064) | .302*** (.053) | .297*** (.124) | .319*** (.085) | .234*** (.054) |
| TreatmentGroup × PostPeriod | .228** (.105) | .098* (.067) | .004 (.044) | .612*** (.093) | .383*** (.053) | .228*** (.043) |
| Age | -.006*** (.001) | -.004*** (.001) | -.003*** (.000) | -.003*** (.001) | -.004*** (.001) | -.002*** (.001) |
| Size | -.000 (.001) | .000 (.001) | .000 (.000) | -.000 (.001) | -.000 (.000) | .000 (.000) |
| TotalDischarges | .114*** (.019) | .053*** (.014) | -.003 (.012) | .078*** (.029) | .030*** (.014) | -.006** |
| TotalInpatientDays | -.024*** (.005) | -.012*** (.003) | -.001 (.003) | -.017*** (.006) | -.008*** (.003) | -.002 (.004) |
| MedicareDischarges | .130* (.083) | .024 (.056) | .018 (.045) | .252*** (.090) | .109*** (.054) | .059** (.059) |
| MedicareInpatientDays | -.012 (.015) | -.001 (.011) | -.002 (.009) | -.027*** (.016) | -.009 (.010) | -.008 (.011) |
| TACMI | 2.544*** (.317) | 1.240*** (.187) | .373*** (.128) | 2.096*** (.190) | 1.087*** (.119) | .399*** (.117) |
| Constant | 93.466*** (.513) | 96.699*** (.290) | 98.927*** (.192) | 93.521*** (.310) | 96.626*** (.183) | 98.599*** (.177) |
| Number of observations | 2794 | 2794 | 2794 | 3722 | 3722 | 3722 |
Note . Bootstrapped standard errors are shown in parentheses. (*p < 0.1; **p < 0.05; ***p < 0.01)
Table 10. Results from the Censored Regression Analysis with the DID Specification

| | (1) AlwaysMU | (2) AlwaysMU | (3) NeverMU | (4) NeverMU |
|---|---|---|---|---|
| Includes control variables | | Yes | | Yes |
| TreatmentGroup | -.342*** (.075) | -.376*** (.069) | .010 (.070) | .002 (.066) |
| PostPeriod | .503*** (.075) | .384*** (.070) | .370*** (.070) | .273*** (.066) |
| TreatmentGroup × PostPeriod | .246** (.107) | .276*** (.098) | .356*** (.099) | .384*** (.094) |
| Age | | -.005*** (.001) | | -.004*** (.001) |
| Size | | -.000 (.001) | | -.001** (.000) |
| TotalDischarges | | .074*** (.017) | | .053*** (.013) |
| TotalInpatientDays | | -.015*** (.004) | | -.007** (.003) |
| MedicareDischarges | | .163*** (.058) | | .441*** (.055) |
| MedicareInpatientDays | | -.029*** (.011) | | -.071*** (.011) |
| TACMI | | 3.014*** | | 2.691*** (.142) |
| Rural | | -.686*** (.067) | | -.384*** (.063) |
| Teach | | -.397*** (.103) | | -.309*** (.099) |
| Nonprofit | | .780*** (.072) | | .655*** (.085) |
| Proprietary | | 1.356*** (.127) | | .704*** (.079) |
| Midwest | | .930*** (.093) | | .829*** (.084) |
| Northeast | | .920*** (.092) | | .245*** (.086) |
| South | | .229*** (.069) | | .470*** (.080) |
| Constant | 97.772*** (.053) | 92.619*** (.239) | 97.426*** (.049) | 92.728*** (.212) |
| Number of observations | 2794 | 2794 | 3722 | 3722 |
Note . Standard errors are shown in parentheses. (*p < 0.1; **p < 0.05; ***p < 0.01)
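Because the composite quality score is bounded above at 100, the censored regression treats scores at the bound as upper-censored observations of a latent quality variable (Tobin 1958). A compact maximum-likelihood sketch, assuming a normal latent model and an upper limit of 100:

```python
import numpy as np
from scipy import optimize, stats

def tobit_upper(y: np.ndarray, X: np.ndarray, limit: float = 100.0):
    """MLE for y* = X @ b + e, e ~ N(0, s^2), observing y = min(y*, limit)."""
    X = np.column_stack([np.ones(len(y)), X])  # add an intercept
    cens = y >= limit

    def neg_loglik(params):
        beta, sigma = params[:-1], np.exp(params[-1])
        xb = X @ beta
        ll_unc = stats.norm.logpdf(y[~cens], xb[~cens], sigma).sum()
        ll_cen = stats.norm.logsf(limit, xb[cens], sigma).sum()  # P(y* >= limit)
        return -(ll_unc + ll_cen)

    start = np.append(np.linalg.lstsq(X, y, rcond=None)[0], 0.0)
    res = optimize.minimize(neg_loglik, start, method="BFGS")
    return res.x[:-1], np.exp(res.x[-1])  # coefficient estimates, sigma
```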
Table 11. Analysis of Continuous MU Variable using the FD Specification

| | AlwaysMU: Avg CMs | Prod CMs | Avg CMs & MMs | Prod CMs & MMs | NeverMU: Avg CMs | Prod CMs | Avg CMs & MMs | Prod CMs & MMs |
|---|---|---|---|---|---|---|---|---|
| ContinuousMU | .381** (.170) | .694*** (.257) | .374** (.168) | .600** (.284) | .504*** (.122) | .783*** (.193) | .505*** (.122) | .867*** (.220) |
| Size | .000 (.001) | .000 (.001) | .000 (.001) | .000 (.001) | .000 (.001) | .000 (.001) | .000 (.001) | .000 (.001) |
| TotalDischarges | -.017 (.040) | -.017 (.040) | -.017 (.040) | -.015 (.040) | -.006 (.017) | -.006 (.017) | -.006 (.017) | -.006 (.017) |
| TotalInpatientDays | .010 (.014) | .009 (.014) | .010 (.014) | .009 (.014) | -.016 (.014) | -.017 (.014) | -.016 (.014) | -.016 (.014) |
| MedicareDischarges | .108 (.157) | .126 (.155) | .106 (.157) | .098 (.155) | .161 (.151) | .169 (.151) | .159 (.151) | .148 (.151) |
| MedicareInpatientDays | -.010 (.039) | -.010 (.039) | -.010 (.039) | -.006 (.039) | .006 (.030) | .006 (.030) | .006 (.030) | .008 (.030) |
| TACMI | -.542 (.819) | -.489 (.822) | -.538 (.819) | -.460 (.823) | -1.153* (.714) | -1.125* (.718) | -1.150* (.714) | -1.114* (.719) |
| Constant | .391*** (.137) | .339*** (.130) | .396*** (.136) | .421*** (.122) | .298*** (.087) | .299*** (.083) | .299*** (.087) | .324*** (.083) |
| Number of observations | 1397 | 1397 | 1397 | 1397 | 1861 | 1861 | 1861 | 1861 |

Note 1. CMs = core MU measures; MMs = menu MU measures.
Note 2. Robust standard errors are shown in parentheses. (*p < 0.1; **p < 0.05; ***p < 0.01)
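One way to operationalize the continuous MU constructions in Table 11 is to average ("Avg") or multiply ("Prod") a hospital's per-measure scores across the core measures, optionally including the menu measures. The sketch below assumes each measure is already scaled to [0, 1]; the column lists are hypothetical placeholders:

```python
import pandas as pd

core = [f"core_{i}" for i in range(1, 15)]  # 14 core measures (Table 3)
menu = [f"menu_{i}" for i in range(1, 11)]  # 10 menu measures

def continuous_mu(df: pd.DataFrame, include_menu: bool, product: bool) -> pd.Series:
    """Average ('Avg') or multiply ('Prod') the per-measure scores, in [0, 1]."""
    cols = core + menu if include_menu else core
    return df[cols].prod(axis=1) if product else df[cols].mean(axis=1)
```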
Table 12. Stratification Analysis on the Quality Effect of MU

| Dimension | Category | Control Group AlwaysMU: # of Tr/Co Hospitals | MU estimate | Control Group NeverMU: # of Tr/Co Hospitals | MU estimate |
|---|---|---|---|---|---|
| Size | Small (less than 100 beds) | 255 / 97 | 1.22 (.64)** | 255 / 268 | .98 (.32)*** |
| | Medium (100 to 300 beds) | 426 / 243 | .14 (.15) | 426 / 478 | .24 (.13)* |
| | Large (more than 300 beds) | 233 / 143 | -.05 (.12) | 233 / 201 | .23 (.12)* |
| Ownership | Government | 139 / 60 | .06 (.55) | 139 / 167 | .96 (.33)*** |
| | Nonprofit | 614 / 293 | .38 (.17)** | 614 / 604 | .39 (.12)*** |
| | Proprietary | 161 / 130 | .22 (.38) | 161 / 176 | .08 (.32) |
| Teaching status | Teach | 96 / 56 | .03 (.17) | 96 / 87 | .23 (.21) |
| | Non-teach | 818 / 427 | .37 (.18)** | 818 / 860 | .46 (.12)*** |
| Region | Midwest | 219 / 111 | .43 (.17)** | 219 / 186 | .12 (.23) |
| | Northeast | 180 / 68 | .32 (.45) | 180 / 148 | .54 (.21)*** |
| | South | 365 / 219 | .36 (.31) | 365 / 411 | .62 (.20)*** |
| | West | 150 / 85 | .17 (.17) | 150 / 202 | .25 (.22) |
| Urban status | Urban | 638 / 383 | .06 (.10) | 638 / 669 | .13 (.10) |
| | Rural | 276 / 100 | 1.13 (.61)** | 276 / 278 | 1.16 (.29)*** |
Note . Robust standard errors are shown in parentheses. (*p < 0.1; **p < 0.05; ***p < 0.01)
Figure 1. Historical quality trends of the treatment and control groups
Figure 2. Proportions of acute care hospitals attaining MU in each state from 2011 to 2013
Figure 3. Schematic Illustration of Quality Improvement among the Treatment and Control Groups
Figure 4. Quality Effects of Meaningful Use in Subgroups (95% Confidence Interval)
Adler-Milstein, J., DesRoches, C. M., Furukawa, M. F., Worzala, C., Charles, D., Kralovec, P., Stalley,
S., and Jha, A. K. 2014. More Than Half of US Hospitals Have At Least A Basic EHR, But Stage
2 Criteria Remain Challenging For Most. Health Affairs 33 (9) 1664–1671.
Agarwal, R., Gao, G. (Gordon), DesRoches, C., and Jha, A. K. 2010. The Digital Transformation of
Healthcare: Current Status and the Road Ahead. Information Systems Research 21 (4) 796–809.
Agha, L. 2014. The Effects of Health Information Technology on the Costs and Quality of Medical Care.
Journal of Health Economics 34 19–30.
Ajzen, I. 1991. The Theory of Planned Behavior. Organizational Behavior and Human Decision
Processes 50 (2) 179–211.
Angrist, J. D., and Krueger, A. B. 1999. Empirical Strategies in Labor Economics. Handbook of Labor Economics, Orley C. Ashenfelter and David Card (eds.), (Vol. 3, Part A) 1277–1366.
Angst, C. M., Agarwal, R., Sambamurthy, V., and Kelley, K. 2010. Social Contagion and Information
Technology Diffusion: The Adoption of Electronic Medical Records in U.S. Hospitals.
Management Science 56 (8) 1219–1241.
Angst, C. M., Devaraj, S., and D’Arcy, J. 2012. Dual Role of IT-Assisted Communication in Patient
Care: A Validated Structure-Process-Outcome Framework. Journal of Management Information
Systems 29 (2) 257–292.
Appari, A., Carian, E. K., Johnson, M. E., and Anthony, D. L. 2012. Medication Administration Quality and Health Information Technology: A National Study of US Hospitals. Journal of the American Medical Informatics Association 19 (3) 360–367.
Appari, A., Eric Johnson, M., and Anthony, D. L. 2013. Meaningful Use of Electronic Health Record
Systems and Process Quality of Care: Evidence from a Panel Data Analysis of U.S. Acute-Care
Hospitals. Health Services Research 48 (2pt1) 354–375.
Attewell, P. 1992. Technology Diffusion and Organizational Learning: The Case of Business Computing.
Organization Science 3 (1) 1–19.
Banker, R. D., and Kauffman, R. J. 2004. The Evolution of Research on Information Systems: A Fiftieth-
Year Survey of the Literature in Management Science. Management Science 50 (3) 281–298.
Bardhan, I., Zheng, E., Oh, J., and Kirksey, K. 2011. A Profiling Model for Readmission of Patients with
Congestive Heart Failure: A multi-hospital study. ICIS 2011 Proceedings
Bentley, T. G. K., Effros, R. M., Palar, K., and Keeler, E. B. 2008. Waste in the U.S. Health Care System:
A Conceptual Framework. Milbank Quarterly 86 (4) 629–659.
Bertrand, M., Duflo, E., and Mullainathan, S. 2004. How Much Should We Trust Differences-In-
Differences Estimates? The Quarterly Journal of Economics 119 (1) 249–275.
Berwick, D. M., and Hackbarth, A. D. 2012. Eliminating Waste in US Health Care. JAMA: The Journal of the American Medical Association 307 (14) 1513–1516.
Black, A. D., Car, J., Pagliari, C., Anandan, C., Cresswell, K., Bokun, T., McKinstry, B., Procter, R.,
Majeed, A., and Sheikh, A. 2011. The Impact of eHealth on the Quality and Safety of Health
Care: A Systematic Overview. PLoS Medicine 8 (1) e1000387.
Blumenthal, D. 2010. Launching HITECH. New England Journal of Medicine 362 (5) 382–385.
Blumenthal, D. 2011. Wiring the Health System — Origins and Provisions of a New Federal Program.
New England Journal of Medicine 365 (24) 2323–2329.
Blumenthal, D., and Tavenner, M. 2010. The “Meaningful Use” Regulation for Electronic Health
Records. New England Journal of Medicine 363 (6) 501–504.
Bodenheimer, T. 2005. High and Rising Health Care Costs. Part 1: Seeking an Explanation. Annals of
Internal Medicine 142 (10) 847–854.
Bos, J. V. D., Rustagi, K., Gray, T., Halford, M., Ziemkiewicz, E., and Shreve, J. 2011. The $17.1 Billion
Problem: the Annual Cost of Measurable Medical Errors. Health Affairs 30 (4) 596–603.
Brennan, T. A. 1998. The Role of Regulation in Quality Improvement. Milbank Quarterly 76 (4) 709–731.
Brynjolfsson, E., and Hitt, L. 1996. Paradox Lost? Firm-Level Evidence on the Returns to Information
Systems Spending. Management Science 42 (4) 541–558.
Brynjolfsson, E., Hu, Y. (Jeffrey), and Simester, D. 2011. Goodbye Pareto Principle, Hello Long Tail:
The Effect of Search Costs on the Concentration of Product Sales. Management Science 57 (8)
1373–1386.
Chassin, M. R., Loeb, J. M., Schmaltz, S. P., and Wachter, R. M. 2010. Accountability Measures —
Using Measurement to Promote Quality Improvement. New England Journal of Medicine 363 (7)
683–688.
Chatterjee, P., and Joynt, K. E. 2014. Do Cardiology Quality Measures Actually Improve Patient
Outcomes? Journal of the American Heart Association 3 (1) e000404.
Chaudhry, B., Wang, J., Wu, S., Maglione, M., Mojica, W., Roth, E., Morton, S. C., and Shekelle, P. G.
2006. Systematic Review: Impact of Health Information Technology on Quality, Efficiency, and
Costs of Medical Care. Annals of Internal Medicine 144 (10) 742 –752.
Chen, L. M., Jha, A. K., Guterman, S., Ridgway, A. B., Orav, E. J., and Epstein, A. M. 2010. Hospital
Cost of Care, Quality of Care, and Readmission Rates: Penny-Wise and Pound-Foolish? Archives of Internal Medicine 170 (4) 340–346.
Classen, D. C., and Bates, D. W. 2011. Finding the Meaning in Meaningful Use. New England Journal of
Medicine 365 (9) 855–858.
Davis, F. D., Bagozzi, R. P., and Warshaw, P. R. 1989. User Acceptance of Computer Technology: A
Comparison of Two Theoretical Models. Management Science 35 (8) 982–1003.
DesRoches, C. M., Campbell, E. G., Rao, S. R., Donelan, K., Ferris, T. G., Jha, A., Kaushal, R., Levy, D.
E., Rosenbaum, S., Shields, A. E., and Blumenthal, D. 2008. Electronic Health Records in
Ambulatory Care — A National Survey of Physicians. New England Journal of Medicine 359 (1)
50–60.
DesRoches, C. M., Worzala, C., Joshi, M. S., Kralovec, P. D., and Jha, A. K. 2012. Small, Nonteaching,
And Rural Hospitals Continue To Be Slow In Adopting Electronic Health Record Systems.
Health Affairs 31 (5) 1092–1099.
Devaraj, S., and Kohli, R. 2003. Performance Impacts of Information Technology: Is Actual Usage the
Missing Link? Management Science 49 (3) 273–289.
Dey, A., Sinha, K. K., and Thirumalai, S. 2013. IT Capability for Health Care Delivery: Is More Better?
Journal of Service Research 16 (3) 326–340.
Dranove, D., Forman, C., Goldfarb, A., and Greenstein, S. 2012. The Trillion Dollar Conundrum: Complementarities and Health Information Technology. Technical Report (NBER Working Paper 18281).
Eddy, D. M. 2005. Evidence-Based Medicine: A Unified Approach. Health Affairs 24 (1) 9 –17.
Feltham, G. A., and Ohlson, J. A. 1999. Residual Earnings Valuation With Risk and Stochastic Interest
Rates. The Accounting Review 74 (2) 165–183.
Fenton, J. J., Jerant, A. F., Bertakis, K. D., and Franks, P. 2012. The Cost of Satisfaction: A National
Study of Patient Satisfaction, Health Care Utilization, Expenditures, and Mortality. Archives of
Internal Medicine 172 (5) 405–411.
Friedman, B., and Basu, J. 2004. The Rate and Cost of Hospital Readmissions for Preventable
Conditions. Medical Care Research and Review 61 (2) 225–240.
Furukawa, M. F. 2011. Electronic Medical Records and the Efficiency of Hospital Emergency
Departments. Medical Care Research and Review 68 (1) 75–95.
Furukawa, M. F., Raghu, T. S., and Shao, B. B. M. 2010. Electronic Medical Records, Nurse Staffing, and Nurse-Sensitive Patient Outcomes: Evidence from California Hospitals, 1998–2007. Health
Services Research 45 (4) 941–962.
Hah, H., and Bharadwaj, A. 2012. A Multi-level Analysis of the Impact of Health Information Technology on Hospital Performance. Proceedings of the Thirty-Third International Conference on Information Systems (ICIS)
Hall, S. A., Kaufman, J. S., and Ricketts, T. C. 2006. Defining Urban and Rural Areas in U.S.
Epidemiologic Studies. Journal of Urban Health 83 (2) 162–175.
Himmelstein, D. U., Wright, A., and Woolhandler, S. 2010. Hospital Computing and the Costs and
Quality of Care: A National Study. The American Journal of Medicine 123 (1) 40–46.
IOM. 2001. Crossing the Quality Chasm: A New Health System for the 21st Century , The National
Academies Press, Washington, D.C.
Jencks, S. F., Williams, M. V., and Coleman, E. A. 2009. Rehospitalizations among Patients in the
Medicare Fee-for-Service Program. New England Journal of Medicine 360 (14) 1418–1428.
Jha, A. K., DesRoches, C. M., Campbell, E. G., Donelan, K., Rao, S. R., Ferris, T. G., Shields, A.,
Rosenbaum, S., and Blumenthal, D. 2009. Use of Electronic Health Records in U.S. Hospitals.
New England Journal of Medicine 360 (16) 1628–1638.
Jha, A. K., Orav, E. J., Li, Z., and Epstein, A. M. 2007. The Inverse Relationship Between Mortality
Rates And Performance In The Hospital Quality Alliance Measures. Health Affairs 26 (4) 1104–
1110.
Jones, S. S., Adams, J. L., Schneider, E. C., Ringel, J. S., and McGlynn, E. A. 2010. Electronic Health
Record Adoption and Quality Improvement in US Hospitals. American Journal of Managed Care
16 (12 Suppl HIT) SP64–71.
Jones, S. S., Heaton, P. S., Rudin, R. S., and Schneider, E. C. 2012. Unraveling the IT Productivity
Paradox — Lessons for Health Care. New England Journal of Medicine 366 (24) 2243–2245.
Jones, S. S., Rudin, R. S., Perry, T., and Shekelle, P. G. 2014. Health Information Technology: An
Updated Systematic Review with a Focus on Meaningful Use. Annals of Internal Medicine
160 (1) 48–54.
Kane, G. C., and Alavi, M. 2008. Casting the Net: A Multimodal Network Perspective on User-System
Interactions. Information Systems Research 19 (3) 253–272.
Kern, L. M., Dhopeshwarkar, R., Barrón, Y., Wilcox, A., Pincus, H., and Kaushal, R. 2009. Measuring the Effects of Health Information Technology on Quality of Care: A Novel Set of Proposed
Metrics for Electronic Quality Reporting. Joint Commission Journal on Quality and Patient
Safety 35 (7) 359–369.
Koenker, R., and Bassett, G. 1978. Regression Quantiles. Econometrica 46 (1) 33–50.
Kohli, R., and Devaraj, S. 2003. Measuring Information Technology Payoff: A Meta-Analysis of
Structural Variables in Firm-Level Empirical Research. Information Systems Research 14 (2)
127–145.
Ma, J., and Stafford, R. S. 2005. Quality of US Outpatient Care: Temporal Changes and Racial/Ethnic
Disparities. Archives of Internal Medicine 165 (12) 1354–1361.
McCullough, J. S., Casey, M., Moscovice, I., and Prasad, S. 2010. Electronic Health Records and Clinical
Decision Support Systems: Impact on National Ambulatory Care Quality. Health Affairs 29 (4)
647–654.
McCullough, J. S., Parente, S., and Town, R. 2013. Health Information Technology and Patient Outcomes: The Role of Organizational and Informational Complementarities. Technical Report (NBER Working Paper 18684).
Miller, A. R., and Tucker, C. E. 2011. Can Health Care Information Technology Save Babies? Journal of
Political Economy 119 (2) 289–324.
Mithas, S., and Krishnan, M. S. 2009. From Association to Causation Via a Potential Outcomes
Approach. Information Systems Research 20 (2) 295–313.
Ohlson, J. A. 1995. Earnings, Book Values, and Dividends in Equity Valuation. Contemporary
Accounting Research 11 (2) 661–687.
Ransbotham, S., and Overby, E. 2010. Does Information Technology Increase or Decrease Hospitals
Risk? An Empirical Examination of Computerized Physician Order Entry and Malpractice
Claims. ICIS 2010 Proceedings
Romano, M. J., and Stafford, R. S. 2011. Electronic health records and clinical decision support systems: impact on national ambulatory care quality. Archives of Internal Medicine 171 (10) 897–903.
Rosenbaum, P. R., and Rubin, D. B. 1983. The Central Role of the Propensity Score in Observational
Studies for Causal Effects. Biometrika 70 (1) 41–55.
Rubin, H. R., Pronovost, P., and Diette, G. B. 2001. The Advantages and Disadvantages of Process-based Measures of Health Care Quality. International Journal for Quality in Health Care 13 (6) 469–474.
Sekhon, J. S. 2011. Multivariate and Propensity Score Matching Software with Automated Balance
Optimization: The Matching package for R. Journal of Statistical Software 42 (i07)
Shwartz, M., Cohen, A. B., Restuccia, J. D., Ren, Z. J., Labonte, A., Theokary, C., Kang, R., and Horwitt,
J. 2011. How Well Can We Identify the High-Performing Hospital? Medical Care Research and
Review 68 (3) 290–310.
Snyder, B. 2013. How Kaiser Bet $4 Billion on Electronic Health Records -- and Won. InfoWorld
Tambe, P., and Hitt, L. M. 2012. The Productivity of Information Technology Investments: New
Evidence from IT Labor Data. Information Systems Research 23 (3-part-1) 599–617.
Tobin, J. 1958. Estimation of Relationships for Limited Dependent Variables. Econometrica 26 (1) 24–36.
Venkatesh, V., Morris, M. G., Davis, G. B., and Davis, F. D. 2003. User Acceptance of Information
Technology: Toward a Unified View. MIS Quarterly 27 (3) 425–478.
Wejnert, B. 2002. Integrating Models of Diffusion of Innovations: A Conceptual Framework. Annual
Review of Sociology 28 (1) 297–326.
Wooldridge, J. M. 2002. Econometric Analysis of Cross Section and Panel Data , MIT Press, Cambridge,
MA.
Xue, M., Hitt, L. M., and Chen, P. 2011. Determinants and Outcomes of Internet Banking Adoption.
Management Science 57 (2) 291–307.