MELBOURNE EPICENTRE
Hospital Mortality Indicator (HMI) Review
CASE STUDIES Part 1
DATE: 10 September 2013
PREPARED BY: Melbourne EpiCentre in collaboration with Fiona Landgren and Carole Staley
Melbourne EpiCentre, c/- The Royal Melbourne Hospital, 7 East, Grattan Street, Parkville 3050
Tel: 03 9342 8772 Fax: 03 9342 8780

TABLE OF CONTENTS
1. Case Study Canada
   1.1 Background – the Canadian Health Indicators Project
   1.2 Hospital Standardised Mortality Ratio (HSMR)
   1.3 Canadian Hospital Reporting Project (CHRP)
2. Case Study United Kingdom (England, Scotland, Wales)
   2.1 Background
   2.2 Overview of the indicators and reporting approaches
   2.3 Mortality indicators

Case Study Canada
1.1 Background – the Canadian Health Indicators Project
Canada has a strong interest in the use of administrative data for healthcare monitoring and improvement. The Canadian Institute for Health Information (CIHI), in conjunction with Statistics Canada, gathers, measures and analyses healthcare data to support quality of care and health outcomes. For more than a decade this work has been guided by a Health Indicator Framework which sets out the key outcomes and domains of measurement (Figure 1).
Figure 1.
CIHI–Statistics Canada Health Indicator Framework (1999). Source: CIHI Health Indicators 2013
The Health Indicators Project began in Canada in 1999, in response to a growing demand to provide Canadians with information reflecting the health of Canadians and the health of the health system. The next decade saw three consensus conferences aimed at prioritising, developing and validating indicators, as well as the development of reporting systems to give health services and the public access to relevant data. Over 80 indicators measure the health of the Canadian population and the effectiveness of the health care system; their current specifications are outlined in Health Indicators 2013: Definitions, Data Sources and Rationale.
To facilitate improved access to timely data, CIHI offers data-submitting organisations an e-Reporting tool. Registered organisations can use it to submit and access their data electronically through a secure website. Applications available through Client Services e-Reporting:
- Hospital Standardized Mortality Ratio (HSMR) – discussed further below
- Canadian Hospital Reporting Project (CHRP) e-Tool – discussed further below
- National Ambulatory Care Reporting System (NACRS)
- National Rehabilitation Reporting System (NRS)
- Continuing Care Reporting System (CCRS)
- Home Care Reporting System (HCRS)
- Canadian MIS Database (CMDB)
Annual reports of indicators (most recently Health Indicators 2013) report performance for a number of indicators and provide regional and hospital breakdowns. They also report on changes to the indicators and on future developments. Health Indicators 2013 is the 14th and final report of its kind before reporting moves to a digital interactive format in 2014.
Various online reporting mechanisms already exist, enabling hospital and public access to data at the regional and health service levels:
- Health Indicators interactive tool (from 2006) – Designed to provide comparable information at the health region and provincial/territorial levels, these data are produced from a wide range of the most recently available sources to support regional health authorities as they monitor the health of their populations and the functioning of their local health systems. CIHI provides a range of free, aggregate-level data on health indicators, presented in one of two ways:
o Pre-formatted tables that provide a snapshot of the data
o Interactive data that provide a dynamic presentation of health statistics; data can be manipulated, printed and exported.
- Health Profile (from 2009) – The Health Profile includes data from the Canadian Community Health Survey, CIHI, the Canadian Cancer Registry, and Vital Statistics.
- Health Trends (from 2010) – Health Trends will also be updated in Fall 2013 to reflect the latest comparable time-series data from the Canadian Community Health Survey, Canadian Cancer Registry, and Vital Statistics.
At a provincial level, several health (quality) councils have been established in recent years (in New Brunswick, Quebec, Ontario, Saskatchewan, Alberta and British Columbia) with a mandate to report to the public on health system performance. Other initiatives in Canada and internationally complicate this landscape even more: the Organisation for Economic Co-operation and Development (OECD) and The Commonwealth Fund release comparative performance indicators every year or every other year, pan-Canadian organizations such as the Canadian Partnership Against Cancer release performance reports on parts of the system, and other national and international organizations release their own performance reports.
This large number of organizations reporting concurrently and in an uncoordinated fashion on health system performance at various levels has led to confusion for health system decision-makers and Canadians alike. Recent consultations with health services have pointed to the need to clarify and better position public reporting of health system performance in Canada, and to ensure reporting supports the performance improvement efforts of jurisdictions. The priorities identified for the next three years (from 2013) include:
- Provide structured and coordinated pan-Canadian reporting on health system performance that is tailored to the information needs of different audiences, including the general public, provincial health ministries, regional health authorities and health care facilities;
- Produce analytical tools and products that support provincial and territorial health system improvement priorities;
- Work with partners in the health system to build capacity for using and understanding performance measurement and analytical tools; and
- Reduce indicator chaos in the health system by working with partners to identify which health indicators are most important, how they relate to each other and how they can best support improvements to health care and the health of Canadians.
Like many other systems, the focus of health system reporting has been on the acute care (hospital) sector. This relates to the high level of spending in this sector as well as the accessibility of information. The situation is beginning to change, with a shift in focus towards primary care, palliative care, home and community based care, and patients' experiences of the healthcare system. Supporting the new direction is a revised Health System Performance Framework which reflects the current emphasis on value for money, patient safety and patient-centredness. The Framework was still in its final stages of development at the time of writing; the draft is shown in Figure 2.
It comprises four interrelated quadrants: health system outcomes; social determinants of health; health system outputs; and health system inputs and characteristics. Each quadrant contains different dimensions of performance, with the dimension of equity spanning a number of these dimensions. Reducing the 'indicator chaos' by establishing a clearer reporting framework will also be a focus (Figure 3), with fewer but more meaningful and comparable indicators being reported at a public level.
A recent focus of reporting has been on health disparities between socioeconomic groups. In the 2013 report, 13 additional indicators are reported by socio-economic status (SES) at national and provincial levels. These include two of the indicators of interest to this review, namely 30-day acute myocardial infarction in-hospital mortality and 30-day stroke in-hospital mortality.
Figure 2. Draft Health System Performance Framework 2013. Source: CIHI Health Indicators 2013
Figure 3. Reporting framework for CIHI indicators. Source: CIHI Health Indicators 2013
Two summary measures of disparity are also presented: the disparity rate ratio, which provides the magnitude of the socio-economic disparities for a health indicator when comparing the least affluent to the most affluent group in a jurisdiction; and the potential rate reduction, which expresses, as a percentage, the reduction in a health indicator rate that would occur in the hypothetical scenario in which each neighbourhood income group experienced the rate of the most affluent neighbourhood income quintile. These reflections on the last decade of indicator development in Canada, and the vision for the future, are described in the most recent annual report, Health Indicators 2013.
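The arithmetic behind the two disparity summary measures can be illustrated with a small worked example. The quintile rates and populations below are hypothetical, chosen only to show how the disparity rate ratio and the potential rate reduction are computed:

```python
# Hypothetical neighbourhood-income-quintile rates (deaths per 100,000);
# Q1 = least affluent, Q5 = most affluent. Illustrative numbers only.
rates = {"Q1": 60.0, "Q2": 55.0, "Q3": 50.0, "Q4": 45.0, "Q5": 40.0}
pops = {"Q1": 1_000_000, "Q2": 1_000_000, "Q3": 1_000_000,
        "Q4": 1_000_000, "Q5": 1_000_000}

# Disparity rate ratio: least affluent rate divided by most affluent rate.
rate_ratio = rates["Q1"] / rates["Q5"]  # -> 1.5

# Potential rate reduction: percentage drop in the overall rate if every
# quintile experienced the most affluent quintile's rate.
total_pop = sum(pops.values())
overall = sum(rates[q] * pops[q] for q in rates) / total_pop          # -> 50.0
counterfactual = rates["Q5"]                                          # 40.0
potential_reduction_pct = 100 * (overall - counterfactual) / overall  # -> 20.0
```

With these illustrative numbers, the least affluent quintile's rate is 1.5 times the most affluent quintile's, and the overall rate would fall by 20% in the counterfactual scenario.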
The following two sections focus on the indicators of interest to this review: the Hospital Standardised Mortality Ratio (HSMR) and condition-specific in-hospital mortality indicators. It is noted that these two types of indicators are positioned separately on the CIHI website, with different reporting systems. The reasons for this are not clearly evident.
1.2 Hospital Standardised Mortality Ratio (HSMR)
In June 2005, at the request of hospitals, health regions and patient safety experts, the Canadian Institute for Health Information (CIHI) undertook to adapt HSMR calculation methods for use in Canada. Later that year, CIHI invited eligible acute care facilities and regions to join in the validation of their HSMR results. HSMR: A New Approach for Measuring Hospital Mortality Trends in Canada (2007) provided a summary of Canadian results to date. It also included the first publicly available HSMR results by health region and hospital.
The positioning of the HSMR in CIHI's suite of quality and performance measures is for comparisons and quality improvement within rather than between hospitals. It is noted in the explanatory material that while the HSMR takes into consideration many of the factors associated with the risk of dying, it cannot adjust for every factor. Therefore, the HSMR is most useful to individual hospitals to track their own mortality trends and to tie these to actions that make a difference to quality of care.
Definition and calculation of HSMR
Definition and calculation of the HSMR is described in the Technical Notes (April 2013). The HSMR is the ratio of actual (observed) deaths to expected deaths. It focuses on the diagnosis groups that account for the majority of in-hospital deaths (Table 1).
Using a logistic regression model, it is adjusted for several factors that affect in-hospital mortality, including age, sex, length of stay, diagnosis, admission category, comorbidity and transfer from another acute care institution.
New methodology for calculating the HSMR was introduced in February 2012:
- Starting in February 2012 (with the Q1 and Q2 2011–2012 release), HSMR results are calculated with an updated baseline using 2009–2010 data. The previous baseline was calculated using 2004–2005 data.
- The list of diagnosis groups included in the HSMR has been updated based on the latest mortality patterns. There are now 72 diagnosis groups, compared with 65 groups in the previous version of the methodology (refer to Table 1).
- The Charlson Index score algorithm has been updated based on the latest paper from H. Quan et al. In addition, based on feedback from HSMR users, codes that do not exist in ICD-10-CA have been removed and certain type 3 diagnoses (such as diabetes, cancer and asterisk codes) have been added to the algorithm.
- Diagnosis group–based modelling has been used to calculate the All Cases HSMR; that is, a separate model has been fitted for each of the 72 diagnosis groups. The same adjustment factors were used in the models. This approach allows for more precise case-mix adjustment and drill-down analysis.
- The definition of medical and surgical patient populations has changed and is now based on CMG partition codes.
- Canadian Hospital Reporting Project (CHRP) peer groups have replaced HSMR-specific peer groups used in the previous version of the methodology.
Case selection
Following are the inclusion and exclusion criteria for case selection.
Inclusion criteria
1. Discharge between April 1 of a given year and March 31 of the following year
2. Admission to an acute care institution
3. Discharge with diagnosis group of interest (that is, one of the diagnosis groups that account for approximately 80% of in-hospital deaths) (Table 1)
4.
Age at admission between 29 days and 120 years
5. Sex recorded as male or female
6. Length of stay up to 365 consecutive days
7. Admission category is elective or emergent/urgent
8. Canadian resident
Exclusion criteria
1. Cadavers
2. Stillborns
3. Sign-outs (that is, discharged against medical advice)
4. Patients who did not return from a pass
5. Neonates (age at admission less than or equal to 28 days)
6. Records with brain death as the most responsible diagnosis code
7. Records with palliative care as the most responsible diagnosis code
Table 1. Diagnosis groups that account for 80% of acute care in-hospital mortality (updated February 2012)
A04 Other bacterial intestinal infections
A41 Sepsis
C15 Malignant neoplasm of oesophagus
C16 Malignant neoplasm of stomach
C18 Malignant neoplasm of colon
C22 Malignant neoplasm of liver and intrahepatic bile ducts
C25 Malignant neoplasm of pancreas
C34 Malignant neoplasm of bronchus and lung
C50 Malignant neoplasm of breast
C61 Malignant neoplasm of prostate
C67 Malignant neoplasm of bladder
C71 Malignant neoplasm of brain
C78 Secondary malignant neoplasm of respiratory and digestive organs
C79 Secondary malignant neoplasm of other sites
C80 Malignant neoplasm without specification of site
C83 Diffuse non-Hodgkin's lymphoma
C85 Other and unspecified types of non-Hodgkin's lymphoma
C90 Multiple myeloma and malignant plasma cell neoplasms
C92 Myeloid leukemia
E11 Diabetes mellitus type 2
E86 Volume depletion
E87 Other disorders of fluid, electrolyte and acid-base balance
F03 Unspecified dementia
F05 Delirium, not induced by alcohol and other psychoactive substances
G30 Alzheimer's disease
G93 Other disorders of brain
I21 Acute myocardial infarction (AMI)
I24 Other acute ischemic heart diseases
I25 Chronic ischemic heart disease
I26 Pulmonary embolism
I35 Nonrheumatic aortic valve disorders
I46 Cardiac arrest
I48 Atrial fibrillation and flutter
I50 Heart failure
I60 Subarachnoid haemorrhage
I61 Intracerebral haemorrhage
I62 Other nontraumatic intracranial haemorrhage
I63 Cerebral infarction
I64 Stroke, not specified as haemorrhage or infarction
I70 Atherosclerosis
I71 Aortic aneurysm and dissection
J18 Pneumonia
J44 Other chronic obstructive pulmonary disease
J69 Pneumonitis due to solids and liquids
J80 Adult respiratory distress syndrome
J84 Other interstitial pulmonary diseases
J90 Pleural effusion, not elsewhere classified
J96 Respiratory failure, not elsewhere classified
K26 Duodenal ulcer
K55 Vascular disorders of intestine
K56 Paralytic ileus and intestinal obstruction without hernia
K57 Diverticular disease of intestine
K63 Other diseases of intestine
K65 Peritonitis
K70 Alcoholic liver disease
K72 Hepatic failure
K74 Fibrosis and cirrhosis of liver
K85 Acute pancreatitis
K92 Other diseases of digestive system
L03 Cellulitis
N17 Acute renal failure
N18 Chronic renal failure
N39 Other disorders of urinary system
R53 Malaise and fatigue
R57 Shock, not elsewhere classified
R64 Cachexia
S06 Intracranial injury
S32 Fracture of lumbar spine and pelvis
S72 Fracture of femur
T81 Complications of procedures, not elsewhere classified
T82 Complications of cardiac and vascular prosthetic devices, implants and grafts
Z54 Convalescence
Source: HSMR Technical Notes April 2013
To account for coding standards related to certain conditions and to ensure that diagnosis groups truly reflect the main reason for a patient's stay in the hospital, the following steps are taken:
1. According to World Health Organization (WHO) guidelines for dagger/asterisk codes, the aetiology is coded as the most responsible diagnosis (MRDx) while the manifestation is coded as type 6.
For patients with a type 6 coded on their discharge, the first three digits of the type 6 diagnosis determined the patient's diagnosis group.
2. If a patient was admitted with an MRDx of coronary artery disease (I25.0, I25.1, I25.8 or I25.9) but also had an acute myocardial infarction (I21 or I22) as diagnosis type 1, W, X or Y and a revascularization procedure (1.IJ.76, 1.IJ.50, 1.IJ.57.GQ or 1.IJ.54.GQ-AZ), the patient's diagnosis group was considered to be the acute myocardial infarction (that is, the I21 group if the preadmission diagnosis starts with I21, or the I22 group if the preadmission diagnosis starts with I22). Note that I22 is not one of the 72 diagnosis groups in the top 80% list.
3. If a patient was admitted with an MRDx of care involving use of rehabilitation procedures (Z50) and also had a stroke (I60 to I64) as diagnosis type 1, W, X or Y, the patient's diagnosis group was considered to be the stroke. If a patient had more than one stroke, the stroke with the highest mortality is assigned. The order of strokes by increasing mortality is I60, I61, I63, I64, I62.
4. If an acute lower respiratory tract infection (J10.0, J11.0, J12 to J16, J18 or J20 to J22) was coded as the MRDx and a patient also had chronic obstructive pulmonary disease (COPD) (J44), the patient's diagnosis group was considered to be COPD.
5. All patients with pneumonia (J12 to J17) as the MRDx or type 6 (where the COPD rule mentioned above was not applied) were combined with the unspecified pneumonia (J18) diagnosis group to provide a more complete case selection of pneumonia patients, as specificity might not be available and/or accessible at the time of coding.
6.
All patients with an MRDx of sepsis (A42.7, A22.7, A26.7, A28.2, A32.7, A39.2, A39.3, A40, A39.4, A21.7, B00.7, B37.7, A03.9, A02.1, A20.7, A23.9, A24.1, A28.0), who did not have a type 6 diagnosis, were combined with the other sepsis (A41) diagnosis group to provide a more complete case selection of sepsis patients. Variations in coding of sepsis exist across the country, and specificity might not be available and/or accessible at the time of coding. Patients with a type 6 diagnosis and sepsis were assigned to the diagnosis group according to their type 6 diagnosis.
7. All patients with an MRDx of concussion (S06.0) were removed from the intracranial injury (S06) diagnosis group. Within this group, concussion accounts for a large number of cases but very few deaths. The remaining cases represent more severe brain traumas and are largely responsible for the high mortality within this diagnosis group.
Logistic regression model
For each of the HSMR diagnosis groups, the HSMR logistic regression models are fitted with age, sex, length-of-stay (LOS) group, admission category, comorbidity group and transfers as independent variables. The models are based on data from all acute hospitals in Canada.
The Charlson Index
The Charlson Index is an overall comorbidity score that has been shown to be highly associated with mortality and has been widely used in clinical research. Based on Quan's updated methodology (Quan et al., 2011), the comorbid conditions in Table 2 are used to calculate the Charlson Index score for each record. Conditions within each group are counted only once (for example, if I43 and I50 appear on the abstract, the score will be 2). If conditions from different groups are present on the abstract, their weights will be summed (for example, if I50 and F00 are present on the abstract, the score will be 4). Diagnosis types 1, W, X and Y are used to calculate the Charlson score.
Starting in February 2012, type 3 codes for the following conditions are also included (to account for coding and classification standards):
- Asterisk codes (coded at the second position in the abstract): I43, F00, F02, M360, I982
- Diabetes codes in the 'Diabetes with complications' group: E102, E103, E104, E105, E107, E112, E113, E114, E115, E117, E132, E133, E134, E135, E137, E142, E143, E144, E145, E147
- Cancer and metastatic carcinoma codes, when a patient's diagnosis group is not cancer (that is, does not start with 'C')
Table 2. Comorbid conditions used to calculate Charlson score. Source: HSMR Technical Notes April 2013
The following exclusions are applied:
- For cases without a type 6 diagnosis code: If a patient had a qualifying Charlson diagnosis code as type 1, W, X, Y or 3 (for selected cases), and this same code also appeared as the MRDx or type 2, then this type 1, W, X, Y or 3 code was not included in the Charlson calculation.
- For cases with a type 6 diagnosis code:
o The original type 6 code is not included in the Charlson calculation.
o The original MRDx is included in the Charlson calculation if this diagnosis code is not also a type 2 code.
o If a patient had a qualifying Charlson diagnosis code as type 1, W, X, Y or 3 (for selected cases), and this same code also appeared as type 6 or type 2, then this type 1, W, X, Y or 3 code was not included in the Charlson calculation.
- For all cases: When the MRDx was not used to determine a diagnosis group (see Appendix I for examples), the diagnosis used to assign the diagnosis group was not counted in the Charlson calculation. For example, if a patient had an MRDx of care involving use of rehabilitation procedures (Z50) and also had a stroke (I61) as a preadmission diagnosis, the diagnosis group would be I61 for the HSMR calculation. Accordingly, the I61 diagnosis was not included in the Charlson Index score calculation.
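The counting rule for the Charlson score (each comorbidity group contributes its weight at most once, and weights from different groups are summed) can be sketched as follows. This is a minimal illustration, not CIHI's implementation: only two condition groups are shown, with weights inferred from the worked examples in the text, and the group membership sets are assumptions. The full code lists and weights are in Table 2 and Quan et al. (2011).

```python
# Hypothetical subset of the Charlson groups. Weights follow the text's
# examples (I43/I50 score 2; I50 plus F00 scores 4); code sets are assumed.
GROUP_WEIGHTS = {
    "congestive_heart_failure": 2,  # includes I43, I50 (per the text's example)
    "dementia": 2,                  # includes F00 (per the text's example); F02 assumed
}
GROUP_CODES = {
    "congestive_heart_failure": {"I43", "I50"},
    "dementia": {"F00", "F02"},
}

def charlson_score(diagnosis_codes):
    """Sum each group's weight at most once, however many of its codes appear."""
    score = 0
    present = set(diagnosis_codes)
    for group, codes in GROUP_CODES.items():
        if present & codes:
            score += GROUP_WEIGHTS[group]
    return score

# The two examples from the text:
charlson_score(["I43", "I50"])  # -> 2 (same group counted once)
charlson_score(["I50", "F00"])  # -> 4 (weights from different groups summed)
```

In the real calculation the eligible diagnosis types (1, W, X, Y and selected type 3 codes) and the exclusion rules above determine which codes enter `diagnosis_codes` in the first place.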
Coefficients derived from the logistic regression models are used to calculate the probability of in-hospital death (these are not available publicly, but are available on request to participating hospitals). The expected number of deaths for a hospital, corporation or region is based on the sum of the probabilities of in-hospital death for eligible discharges from that organisation. The 95% confidence interval is calculated using Byar's approximation.
Figure 4. Flow chart for assignment of Charlson score. Source: HSMR Technical Notes April 2013
Interpretation and reporting
A ratio equal to 100 suggests that there is no difference between a local mortality rate and the average national experience, given the types of patients cared for. An HSMR greater or less than 100 suggests that a local mortality rate is higher or lower, respectively, than the national experience. The confidence intervals describe the precision of the HSMR estimate. The upper and lower confidence intervals are estimated to contain the true value of the HSMR 19 times out of 20 (95% confidence interval). A confidence interval that includes 100 suggests that the HSMR is not statistically different from the 2009–2010 baseline of 100. HSMR results whose confidence interval does not include 100, and which are therefore statistically different from the 2009–2010 baseline (p < 0.05), are denoted with a symbol (§) in the report.
While the HSMR takes into consideration many of the factors associated with the risk of dying, it cannot adjust for every factor. Therefore, the HSMR is most useful to individual hospitals to track their own mortality trends.
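As a sketch of the calculation described above, the following computes an HSMR and a 95% confidence interval using Byar's approximation for the observed (Poisson) death count. The observed and expected counts are hypothetical; in practice the expected count is the sum of the model-predicted death probabilities over an organisation's eligible discharges.

```python
import math

def hsmr_with_ci(observed, expected, z=1.96):
    """HSMR = 100 * observed / expected, with a 95% CI from Byar's
    approximation to the Poisson interval for the observed count."""
    o, e = observed, expected
    hsmr = 100.0 * o / e
    lower = 100.0 * (o / e) * (1 - 1 / (9 * o) - z / (3 * math.sqrt(o))) ** 3
    upper = 100.0 * ((o + 1) / e) * (1 - 1 / (9 * (o + 1)) + z / (3 * math.sqrt(o + 1))) ** 3
    return hsmr, lower, upper

# Hypothetical hospital: 280 observed deaths against 310 expected.
hsmr, lo, hi = hsmr_with_ci(280, 310)
# hsmr ~ 90.3; a result is flagged (§) as significantly different from the
# baseline of 100 only when the whole interval (lo, hi) excludes 100.
```

The interval is asymmetric around the point estimate, which is expected for a ratio built on a Poisson count.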
The HSMR can be used to track the overall change in mortality resulting from a broad range of factors, including changes in the quality and safety of care delivered. It is important to note that the HSMR is not designed for comparisons between hospitals.
Hospital reporting
Quarterly and annual HSMR reports are available to hospitals (not to the public) through the HSMR eReporting service, a secure web-based method (accessed through CIHI Client Services) for viewing confidential quarterly and annual HSMR results. Compared to the previous confidential PDF HSMR reports, it has added features and functionality, including the capacity to conduct drill-down analyses to assist with interpretation of HSMR results and trends. This service is available to hospital or health region CEOs, as well as additional persons designated by the CEO or another senior executive to receive the report.
The HSMR eReporting application is updated with new data as follows: Q1 and Q2 in February; Q3 in April; Q4 and annual results in September. A notification email is sent to registered users when updated reports are available.
The HSMR eReporting service includes four dashboards, with the level of detail increasing from the first to the fourth:
- The HSMR launch page (dashboard 1) provides an overview of the most recent HSMR quarterly and year-to-date results, as well as previous HSMR results, for monitoring mortality trends. It also provides a set of links to important HSMR information and support.
- HSMR detailed results (dashboard 2) provides detailed results and program-specific HSMR results for medical cases, surgical cases (except peer group H3 hospitals), ICU cases (except peer group H3 hospitals) and cases excluding transfers in or out.
- HSMR leading diagnosis (dashboard 3) provides information about the leading diagnosis groups and diagnosis categories contributing to an organization's HSMR results by number of cases, number of deaths, observed deaths greater than expected deaths and observed deaths less than expected deaths.
- HSMR mortality by diagnosis (dashboard 4) provides crude and expected mortality rates by HSMR diagnosis group or category and enables organizations to explore changes over time.
The HSMR eReporting service also provides the ability to define specific parameters and to generate a customized report.
In addition to quarterly reports, monthly electronic Cumulative Hospital Standardized Mortality Ratio (eHSMR) reports are available through the electronic Hospital Specific Reporting (eHSR) system in PDF and tab-delimited ASCII format (for organizations outside Quebec). These reports are part of the standard monthly reports generated once data for a reporting period has been submitted to CIHI (within three to five days). A separate report is produced for each participating hospital. Corporation- and regional-level reports are not available through eHSMR but can be calculated using the monthly data from individual facilities. While the hospital reports were not available for this review, the information sheet opposite highlights the main aspects of the visual reporting.
Figure 5. Information sheet for HSMR reporting. Source: Canadian Institute for Health Information website
Public reporting
HSMR results have been released publicly in Canada since 2007. The 2012 HSMR public release is the sixth release of HSMRs for eligible hospitals and regions across Canada. The current release includes HSMR results for 2007–2008 to 2011–2012.
Results are only reported for regions and acute care facilities that meet a statistical threshold for public reporting: at least 2,500 qualifying discharges in each of the last three years being reported (2009–2010, 2010–2011 and 2011–2012). If an organization meets the reporting threshold but had fewer than 2,500 discharges in earlier years, results for those years are suppressed.
The public may access HSMR results for a health region, hospital corporation or hospital using the map (pictured below). Provinces shaded in green have HSMR results for those organisations that met the reporting threshold; provinces or territories shaded in yellow do not have HSMR results, as no organizations met the reporting threshold. Regional results are reported using the boundaries in effect as of March 31, 2012.
Users may navigate between provinces by clicking on the map or on one of the tabs above the map of Canada. Once a province is selected, users may access results for selected regions and hospitals. To see regional results, users may click on the region of interest on the map, select the name of the region from the list beneath the map or select a region from the drop-down list at the top of the provincial page. To see hospital results, select the name of the hospital from the list of hospitals within a region or select the hospital from the drop-down list at the top of the page. Users may navigate within a province using the provincial map or the drop-down boxes at the top of the page.
Because of the new reference year, the results for 2007–2010 are higher compared to the results that were provided in last year's public release. For example, if an organisation's HSMR for 2010–2011 was 96 previously, in the 2012 public release the same organization's HSMR result for 2010–2011 may be 106. This change is caused by the new methodology and the new reference year. Despite the changes in methodology, the HSMR trends have remained similar for the majority of organisations.
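The public-reporting suppression rule described above can be sketched as follows. The 2,500-discharge threshold and the three most recent reported years come from the text; the function name and data layout are illustrative assumptions.

```python
# Sketch of the public-reporting suppression rule: an organization qualifies
# only if it clears the threshold in each of the three most recent years, and
# even then its earlier low-volume years are suppressed individually.
THRESHOLD = 2500
RECENT_YEARS = ("2009–2010", "2010–2011", "2011–2012")

def publishable_years(discharges_by_year):
    """Return the fiscal years whose HSMR may be shown publicly."""
    if any(discharges_by_year.get(y, 0) < THRESHOLD for y in RECENT_YEARS):
        return []  # organization does not qualify for public reporting at all
    # Qualifying organizations still have earlier low-volume years suppressed.
    return [y for y, n in discharges_by_year.items() if n >= THRESHOLD]

# Example: qualifies on the recent years, but 2007–2008 is suppressed.
counts = {"2007–2008": 2100, "2008–2009": 2600, "2009–2010": 2700,
          "2010–2011": 2800, "2011–2012": 2900}
publishable_years(counts)  # -> ["2008–2009", "2009–2010", "2010–2011", "2011–2012"]
```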
For example, if an organisation's HSMR results over time were declining with the old methodology, it is likely that the trend will still be declining with the new methodology. Therefore, CIHI recommends users focus on changes in results over time. Five years of results using the new methodology are provided to allow for this trending.
In addition to the interactive public reporting functions, the CIHI website includes various information pieces and press releases which provide interpretations of the HSMR data. Figure 7 and Figure 8 show some examples of how the data is communicated to the public.
Figure 6. Example of public reporting for HSMR
Select a province to view results for a region or hospital: B.C. | Alta. | Sask. | Man. | Ont. | Que. | N.B. | N.S. | P.E.I. | N.L.
HSMR Hospital Results – British Columbia
Lions Gate Hospital (Community: North Vancouver)
Year        HSMR   95% CI
2007–2008   88 §   78–98
2008–2009   86 §   76–96
2009–2010   79 §   70–89
2010–2011   79 §   70–89
2011–2012   71 §   63–80
Notes: 95% CI = 95 percent confidence interval. § Significantly different from the fiscal year 2009–2010 baseline HSMR of 100.
Source: Canadian Institute for Health Information website
Figure 7. Examples of public information releases in relation to HSMR results
This year's HSMR numbers (2010)
This year's HSMR report includes five years of HSMR results (from 2004–2005 to 2008–2009) for 75 hospitals and 38 health regions across Canada, excluding Quebec. The HSMR compares the actual number of deaths in a hospital or region with the average Canadian experience, after adjusting for several factors that may affect in-hospital mortality rates, such as the age, sex, diagnoses and admission status of patients.
An HSMR equal to 100 suggests that there is no difference between a local mortality rate and the average national experience, given the types of patients cared for. An HSMR greater or less than 100 suggests that a local mortality rate is higher or lower than the national experience, respectively. ‘One of the main objectives of the HSMR is to provide hospitals with a tool that demonstrates mortality trends because this information can help hospitals monitor changes over time and identify the strategies that work best in lowering their mortality rates,’ explains Dr. Eugene Wen, Manager of Health Indicators at CIHI. Comparing this year’s numbers to previous years demonstrates that more Canadian hospitals now have HSMRs below 100. The proportion of hospitals whose HSMR is significantly below 100 is 47%, compared to 36% in 2008. Focused care and attention on the issues can result in care that is more appropriate (2010) Today, compared with only a few years ago, more is known about the care and treatment of patients with heart attacks in Canada. This has led to decreases in heart attack mortality, hospitalizations and readmissions. CIHI’s report shows that between 2004–2005 and 2008–2009, the age-adjusted rate of hospitalization for new heart attacks dropped from 239 per 100,000 people to 217 per 100,000, despite rising rates of risk factors in Canada, such as obesity and high blood pressure. Progress is also being made across Canada in reducing hospital mortality rates overall. Today, CIHI is releasing its annual hospital standardized mortality ratio (HSMR) results for large acute care facilities and health regions outside Quebec. The HSMR is a measure of quality of care that compares a hospital’s actual (observed) deaths with the number of expected deaths based on the types of patients a hospital sees. CIHI’s report shows that over the past five years, 81% of reportable hospitals saw some decrease in their HSMRs, with 40% experiencing significant decreases. 
‘What we’ve seen across Canada is that the HSMR has been a great motivator for change,’ says Patti Cochrane, Vice President, Patient Services and Quality and Chief Nursing Officer, Trillium Health Centre. ‘Trillium has been able to use HSMR results to understand what is driving mortality rates and where improvements can be made. As a result, we’ve made tangible changes to the care we provide, making a real difference to patient outcomes and potentially saving lives.’

Hospital death rates decreasing (December 2012)
Fewer people are dying in Canada’s acute care facilities after admission, according to the latest hospital mortality data from the Canadian Institute for Health Information (CIHI). The hospital standardized mortality ratio (HSMR) compares the number of deaths in a hospital with the 2009–2010 national average (after adjusting for differences in the types of patients in a facility), which is represented by the number 100. A score greater than 100 suggests an above-average rate, while a score lower than 100 suggests a below-average rate. The HSMR is a performance indicator that hospitals in Canada have been using for many years to monitor changes over time and identify areas for improvement. The data shows that patient care and quality are improving across the country. The HSMR decreased for 16% of participating hospitals outside Quebec in 2011–2012, compared with 2010–2011. Of these 82 facilities, 4 hospitals had an HSMR significantly above 100, 42 had an HSMR significantly below 100, and 36 had an HSMR not significantly different from 100. Quebec data was included in the HSMR for the first time this year. Four years of data for Quebec regions and facilities have been provided, with 2010–2011 being the latest year available.

Figure 8. Example of public presentation of HSMR
Note: Results are presented for reportable hospitals only.
Source: Discharge Abstract Database, Canadian Institute for Health Information.

Issues and experiences
A number of authors have argued the pros and cons of the HSMR since the introduction of this indicator in Canada. Shortly after its introduction, an entire journal issue was devoted to examining the validity of the measure and the implications for Canadian hospitals. The lead article by Penfold et al (2008) questions the tool’s ease of use and whether or not it is consistent across different facilities. They note that the goal of the HSMR is to reduce ‘preventable’ deaths by motivating hospitals to examine and reduce in-hospital deaths, but observe that empirical evidence in support of this is lacking. Other authors, however, while accepting the limitations of the tool, focus on its usefulness in tracking improvement initiatives over time, and its contribution to developing a culture of patient safety and quality improvement. The need to better educate the public to facilitate a more accurate interpretation of the data was also highlighted, along with the need to recognise that the data is not intended for comparison between facilities. More recently, a paper by Chong (2012) discusses a coding limitation in terms of its potential impact on HSMR trends. Chong reviewed the coding of palliative care since public reporting of the HSMR began and found it to have increased dramatically, a factor that may contribute to the observed national decline in HSMR. Downer (2010) also examined the use of the palliative care flag. The authors conclude that patient prognostication is notoriously difficult, with a high degree of variability between patients, and that it is harder to prognosticate for some end-stage conditions (e.g. heart failure, chronic obstructive lung disease) than others (e.g. cancer). They also distinguish between patients admitted solely for palliative care and patients whose status changes to palliative during the episode of care.
They raise the concern that ‘overusing’ the palliative status flag may risk missing important changes in patient outcomes that might indicate a problem with quality of care. A number of other limitations were noted regarding the HSMR: Most deaths are not preventable, and most quality-related problems do not cause death. The most important determinants of mortality are non-modifiable (e.g. age, gender, comorbidities) and the HSMR cannot correct for unknown or unmeasured risk factors. The process of standardisation can introduce bias via the constant risk fallacy, inconsistent measurement of risk factors, variable admission thresholds, and variable access to rehabilitation and long-term care facilities. Popowich (2011) outlines the use of mortality data in two acute hospitals in Edmonton. The narrative report describes the development of a quality improvement initiative, ‘If high, why?’, which uses raw data (via the CIHI portal) to identify target areas, a random sample of patient charts for review, and more extensive peer review using the Institute for Healthcare Improvement (IHI) Global Trigger Tool (GTT). The initial development of the initiative resulted in the introduction of the ‘Safer Healthcare Now!’ bundles of care. A key component of the initiative is the HSMR committee, which supports a standardised process for the analysis and review of the raw monthly mortality data and, subsequently, the introduction of local improvements. The paper reflects a practical approach for investigating and using the HSMR for quality improvement purposes, including a flow chart. The approach is aimed at minimising unnecessary chart review; however, there is no discussion regarding the resource burden associated with the process or the effect of the initiatives on the HSMRs or overall quality.
HSMR resources and information
Below are direct links to further sources of information on the CIHI website.
Reports and analyses about HSMR
See the 2012 HSMR results
In Focus: A National Look at Sepsis (Dec. 2009)
HSMR: A New Approach for Measuring Hospital Mortality Trends in Canada (Nov. 2007)
HSMR resources
Understanding the HSMR Report (updated March 2009) (PDF, 304 KB)
What Is the HSMR? (updated July 2008) (PDF, 274 KB)
Technical Notes (updated Apr. 2013) (PDF, 251 KB)
Frequently Asked Questions for Hospitals and Health Providers (updated Apr. 2013) (PDF, 167 KB)
Using CIHI’s HSMR eReporting Service (updated May 2010) (PDF, 52 KB)
Resources for Getting Started (PDF, 227 KB)
Key projects about HSMR
HSMR public release 2012 (updated Sept. 2012)
HSMR public release 2011 (updated Sept. 2011)
HSMR eReporting service launched (updated May 2010)

1.3 Canadian Hospital Reporting Project (CHRP)
In line with the identified priorities of providing tailored information and analytical tools, CIHI has developed the Canadian Hospital Reporting Project (CHRP) to create a set of standardised, comparable hospital performance indicators across participating jurisdictions. CHRP is a national quality improvement initiative featuring an interactive tool which gives hospital decision-makers, policy-makers and Canadians access to results for more than 600 facilities from every province and territory in Canada (representing 95% of Canadian hospitals). The interactive tool is designed to foster quality improvement and learning about hospital performance.
The objectives of the project are to:
Provide comparable indicators to support performance measurement and quality improvement among Canadian hospitals
Help senior executives and board members with strategic planning and priority-setting
Enable quality improvement managers to monitor improvements and outcomes that are related to specific quality initiatives and to trend hospital performance over time, and
Enable hospitals to compare themselves with other hospitals in their category (for example, teaching, community or small), against the provincial average, within their regional health authority and across jurisdictions.
Following the initial release of data to participating hospitals via the online eTool in 2010, the March 2013 release includes data for 21 clinical indicators and 6 financial indicators.
The indicators
These indicators have been chosen for CHRP from the overall indicator set, based on their relevance to performance measurement and quality improvement. The selected indicators measure clinical effectiveness, patient safety, appropriateness of care, accessibility and financial performance. Two indicators relevant to this review are included in the domain of clinical effectiveness:
30 day in-hospital mortality for AMI
30 day in-hospital mortality for stroke
To determine which clinical indicators were most relevant to performance measurement and quality improvement among Canadian hospitals, CIHI’s Hospital Reports team conducted an extensive review of existing indicators from peer-reviewed literature. From this review, and with input from experts in the field, potential indicators were selected according to specified criteria: feasibility, scientific soundness, relevance and whether they are amenable to intervention and improvement by hospitals. The data is taken from the CIHI Discharge Abstract Database (DAD) and National Ambulatory Care Reporting System (NACRS).
When a patient is discharged from a hospital, information relating to his or her care is recorded on an abstract and electronically submitted to CIHI. CIHI then uses some of this information to assign inpatients to major clinical categories (MCC) as well as to a specific case mix group (CMG). CMGs and MCCs are then used to group patients with similar clinical characteristics.

30 day in-hospital mortality for AMI
Included populations
Cases are included where an in-hospital death occurred within 30 days of the AMI admission.
Inclusion criteria:
1. Admission Category Code = U (urgent/emergency) AND
2. Facility Type Code = 1 (acute care) AND
3. Admission date = April 1 to March 1 AND
a) AMI (ICD-10-CA: I21.^ or I22.^) is coded as diagnosis type M but not also as type 2; OR
b) Where another diagnosis is coded as type M and also as type 2, and a diagnosis of AMI is coded as type 1 (or type W, X or Y but not also as type 2); OR
c) Coronary artery disease (ICD-10-CA: I25.0, I25.1^, I25.8 or I25.9) is coded as type M and AMI is coded as type 1 or type W, X or Y but not also as a type 2 AND
4. A revascularization procedure is coded: percutaneous coronary intervention (CCI: 1.IJ.50^^, 1.IJ.57.GQ^^ or 1.IJ.54.GQ.AZ*) or coronary artery bypass (CCI: 1.IJ.76^^)
Exclusion criteria:
1. AMI admissions (ICD-10-CA: I21.^ or I22.^) as a diagnosis type M, 1, 2, W, X or Y in the 12 months preceding the admission date on the index AMI record
2. Age (in years) associated with index AMI record ≤19
3. Non-clinical criteria: refer to Section 5, Identifying Acute Care and Day Procedure Data

30 day in-hospital mortality for stroke
Included populations
Cases are included where an in-hospital death occurred within 30 days of the stroke admission.
Inclusion criteria:
Urgent inpatient admission for first stroke during the first 11 months of the fiscal year:
1. Admission Category Code = U AND
2. Facility Type Code = 1 (acute care) AND
3. Admission date = April 1 to March 1 AND
a) Stroke (ICD-10-CA: I60–I64) is coded as diagnosis type M but not also as type 2; OR
b) Where another diagnosis is coded as type M and also type 2, a diagnosis of stroke is coded as type 1 (or type W, X or Y but not also as type 2); OR
c) Rehabilitation (ICD-10-CA: Z50.1 or Z50.4–Z50.9) is coded as type M and stroke is coded as type 1 (or type W, X or Y but not also as type 2)
Exclusion criteria:
1. Previous stroke in the last 12 months
2. Patients aged 19 and younger at admission
3. Non-clinical criteria (as above)

Risk adjustment
Statistical regression modelling is used to adjust for differences in patient characteristics. Risk factors controlled for include age, gender and selected pre-admit comorbid diagnoses applicable to the indicator. For AMI mortality these include cancer, diabetes with complications, shock, renal failure, cerebrovascular disease, heart failure and pulmonary oedema (Table 3). For stroke mortality these include cancer, shock, heart failure, pulmonary oedema, ischaemic heart disease (acute, chronic), renal failure, liver disease, other unspecified intracranial haemorrhage, intracerebral haemorrhage or infarction and subarachnoid haemorrhage (Table 4).

Table 3. Risk adjustment coefficients for 30-day in-hospital mortality for AMI
Table 4.
Risk adjustment coefficients for 30-day in-hospital mortality for stroke
Source: Canadian Hospital Reporting Project – Clinical Indicators Risk Adjustment Tables, March 2013

The selected risk factors were identified based on a literature review, clinical evidence and expert group consultations using the principles of appropriateness, viability (that is, a sufficient number of events) and data availability. Risk factors must be listed as significant pre-admit conditions on the patient’s abstract for them to be identified for risk adjustment. Coefficients derived from the regression models are used to calculate the probability of an outcome for each denominator case; these probabilities are then summed for each hospital (or for other reporting levels such as regions, provinces and peer groups) to calculate the expected number of cases of each outcome. The risk-adjusted rate is calculated by dividing the observed number of cases by the expected number of cases and multiplying by the Canada average. The regression coefficients are derived from each fiscal year’s data, so the expected number of cases is specific to that year. The Canada average is the standard population rate, or the Canadian average rate for all provinces and territories for each fiscal year (the total number of numerator cases nationally divided by the total number of denominator cases nationally, multiplied by 100 if the indicator is expressed as a rate per 100 or by 1,000 if the indicator is expressed as a rate per 1,000). In addition, 95% confidence interval (CI) limits for the risk-adjusted rates are calculated using Daly’s formula for exact Poisson confidence limits. The formulas for calculating the lower (LCL) and upper (UCL) limits of the 95% CIs are shown below.
Calculate the lower and upper confidence limit numerators:
LCLnumerator = GAMINV (0.025, numerator)
UCLnumerator = GAMINV (0.975, numerator + 1)
Calculate the confidence limits for an adjusted rate:
LCLadjusted rate = (LCLnumerator ⁄ expected) × Canada average
UCLadjusted rate = (UCLnumerator ⁄ expected) × Canada average
Where
Numerator = the observed number of event cases for a given reporting level (for example, hospital site, hospital, peer group, province, region or peer group by province)
Expected = the expected value of the event cases
Canada average = the standard population rate, or the Canadian average rate for all provinces and territories for each fiscal year
The Poisson method uses the inverse gamma function in its calculation. If the UCL is greater than the maximum possible value of the indicator, the UCL is reset to be equal to that maximum possible value. That is, for an indicator that is calculated per 100 cases, a UCL greater than 100 is reset to 100, and for an indicator that is calculated per 1,000 cases, a UCL greater than 1,000 is reset to 1,000. Also, if there is a risk-adjusted rate equal to 0, the associated LCL is also set to 0. Confidence intervals are provided to aid interpretation. The width of the confidence interval illustrates the degree of variability associated with the rate. Indicator values are estimated to be accurate within the upper and lower confidence interval 19 times out of 20 (95% confidence interval). Risk-adjusted rates with confidence intervals that do not contain the Canada average can be considered statistically different from the Canada average. Risk-adjusted rates are calculated at the hospital, health administration region and provincial/territorial levels. Regional and provincial risk-adjusted rates are aggregated hospital-level data.
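The calculation pipeline just described can be sketched end to end: an expected count built from per-case predicted probabilities, the risk-adjusted rate, and the Daly exact Poisson limits. Two assumptions are labelled here: the risk model is taken to be logistic with made-up coefficients (the document says only ‘statistical regression modelling’; the real coefficients are in the Risk Adjustment Tables), and the GAMINV quantiles are obtained by solving the mathematically equivalent Poisson tail equations with the standard library rather than calling a statistics package. This is an illustration of the published formulas, not CIHI’s actual implementation.

```python
import math

def _poisson_cdf(k, mu):
    """P(X <= k) for X ~ Poisson(mu), summed in log space for stability."""
    if k < 0:
        return 0.0
    if mu <= 0:
        return 1.0
    return sum(math.exp(-mu + i * math.log(mu) - math.lgamma(i + 1))
               for i in range(k + 1))

def _gaminv_equivalent(k, cdf_target):
    """Bisect for the Poisson mean mu with P(X <= k; mu) = cdf_target.

    By the gamma-Poisson duality, GAMINV(0.025, n) solves P(X <= n-1) = 0.975
    and GAMINV(0.975, n+1) solves P(X <= n) = 0.025.
    """
    lo, hi = 0.0, k + 10.0 * math.sqrt(k + 1) + 20.0
    for _ in range(100):
        mid = (lo + hi) / 2.0
        if _poisson_cdf(k, mid) > cdf_target:
            lo = mid  # CDF still too high: the solution has a larger mean
        else:
            hi = mid
    return (lo + hi) / 2.0

# Hypothetical coefficients for illustration only; the actual values are
# published in the CHRP Risk Adjustment Tables (Tables 3 and 4).
INTERCEPT = -3.0
COEFFICIENTS = {"shock": 1.8, "heart_failure": 0.7, "renal_failure": 0.9}

def expected_cases(cases):
    """Sum each denominator case's predicted probability (logistic model assumed)."""
    total = 0.0
    for risk_factors in cases:
        logit = INTERCEPT + sum(COEFFICIENTS[f] for f in risk_factors
                                if f in COEFFICIENTS)
        total += 1.0 / (1.0 + math.exp(-logit))
    return total

def adjusted_rate_with_ci(observed, expected, canada_average, per=100.0):
    """Risk-adjusted rate = observed / expected x Canada average, with Daly 95% CI."""
    rate = observed / expected * canada_average
    lcl_num = _gaminv_equivalent(observed - 1, 0.975) if observed > 0 else 0.0
    ucl_num = _gaminv_equivalent(observed, 0.025)
    lcl = 0.0 if rate == 0 else lcl_num / expected * canada_average
    ucl = min(ucl_num / expected * canada_average, per)  # UCL capped at the maximum
    return rate, lcl, ucl
```

For example, 10 observed deaths against 12.5 expected with a Canada average of 5 per 100 gives a rate of 4.0 with a 95% CI of roughly 1.9 to 7.4; since that interval contains the Canada average, the result would not be read as statistically different from it.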
The expected performance level of an institution in this indirect method of standardisation for risk adjustment is based on how all institutions perform, because the number of expected cases is calculated based on regression models fitted on all cases from all hospitals. Furthermore, risk-adjustment modelling cannot entirely eliminate differences in patient characteristics among hospitals, because not all pre-admission influences are adjusted for; if left unadjusted (for reasons such as viability), hospitals with the sickest patients or that treat rare or highly specialised groups of patients could still score poorly. Finally, when interpreting risk-adjusted results, it is recommended that the hospital’s result be compared with the Canada average.

CHRP analytics and reporting
CHRP’s analytical highlights are reports that draw out notable trends, variations and comparisons across hospitals, peer groups and jurisdictions over time. The reports help foster peer learning by identifying other hospitals to learn from for a selected group of indicators. CIHI classifies hospitals into groups of facilities that have similar structural and patient characteristics (peer groups). This enables hospitals to compare themselves with others that treat similar patient populations. Based on 2008–2009 data, CIHI assigned hospitals to one of four peer groups: Teaching (T), Community Large (H1), Community Medium (H2) and Community Small (H3).

Hospital Results: Hospitals access online reporting functions via a hospital-only site. Details about the use of the web-based tool are included in the CHRP Tool Guidebook, with some examples of screen shots shown in Figure 9. The publicly available function provides hospital-level results for each indicator over multiple years. The tool’s interactive functionality allows users to examine performance across years and compare regional, provincial/territorial and national results with those of their peers.
Contextual information, such as community profiles and hospital-level characteristics, is available to help users interpret the results and identify peer hospitals. The interactive nature of the tool means that users can focus on the information that is most relevant to them. For example, policy- and decision-makers can track the success of improvement initiatives, while physicians and health care professionals can compare their results with the practices and performance of peer hospitals across the country. The tool also has interactive maps to help users visualize, interpret and compare the information provided. Maps show a snapshot of performance across the country and can be a starting point for further investigation. They are also useful for those who are specifically interested in identifying performance trends related to population density or geography.

Key Findings: This publicly accessible function provides a summary of results for selected clinical and financial indicators, and highlights notable trends and interesting results. Results are accessible to the public and are presently included for 30-Day Readmission, 30-Day In-Hospital Mortality Following Acute Myocardial Infarction (AMI), Administrative Services as a Percentage of Total Expense and Cost per Weighted Case (Figure 10).

Clinical Performance Allocation: This function provides a performance report for a selected facility. It aims to help hospitals identify others they can learn from and highlights performance results by assigning performance categories to seven indicators from the Clinical Effectiveness domain, including 30-day mortality for AMI and stroke.
The performance range for each indicator is defined as the range between the performance average (the average of three years of data (2007–2008, 2008–2009 and 2009–2010) for all hospitals for the indicator) and the performance 75th percentile (the top 25th percentile of indicator rates over the same three years for all stable-rate hospitals for the indicator) (Figure 11). For each of the seven clinical indicators, facilities are categorised as being below, within or above the performance range for the two most recent years. This snapshot assists users in identifying areas where improvement is needed. It also encourages collaboration and helps track the success of improvement initiatives. Clinical performance allocation results for 2011–2012 are not available. Performance allocation is completed within and across fiscal years. The method was developed to suit both stable and low-volume rates. Hospitals’ risk-adjusted rates for each indicator were compared with a performance range for each indicator. Being above the performance range suggests better performance. For all indicators to which performance allocation is applied, a lower rate is more desirable; a lower value on these indicators suggests better performance. Once a performance indicator is selected on the web tool, the scatterplot graph displays the performance of all facilities in the selected province for the selected indicator.

Financial Indicator Trending: This function provides insight into the operation of health service organisations. Users can examine trends over time and see how these compare with the national average.
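The allocation logic described above can be sketched as a simple categorisation. One interpretive assumption is labelled here: because lower rates are better, the 75th-percentile (top-performer) end of the range is numerically lower than the all-hospital average, and ‘above the performance range’ is read as a rate better (lower) than that end. This reading is illustrative, not CIHI’s published algorithm.

```python
def allocate_performance(rate, performance_average, performance_p75):
    """Categorise a hospital's risk-adjusted rate against the performance range.

    Assumes a lower-is-better indicator: the 75th-percentile end of the range
    is numerically lower than the all-hospital average, and "above" the range
    means a rate better (lower) than that end. Illustrative assumption only.
    """
    better_end, worse_end = sorted((performance_p75, performance_average))
    if rate < better_end:
        return "above performance range"
    if rate > worse_end:
        return "below performance range"
    return "within performance range"
```

Under these assumptions, a hospital with a rate of 3.2 against an average of 10.0 and a 75th percentile of 6.0 would fall above the performance range, while a rate of 8.0 would fall within it.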
Further details about the analysis for developing these reports are included in the following documents.
CHRP Key resources and Toolkit
Key resources
CHRP Technical Notes—Clinical Indicators (PDF, 1,305 KB)
CHRP Technical Notes—Risk-adjustment Tables (PDF, 522 KB)
CHRP Technical Notes—Financial Indicators (PDF, 534 KB)
CHRP Tool Guidebook (PDF, 2,160 KB)
CHRP Toolkit
CHRP Frequently Asked Questions (FAQ) (PDF, 374 KB)
CHRP Key Messages (PDF, 349 KB)
CHRP Interpretation Notes (PDF, 570 KB)
CHRP Interpreting Indicator Results (PDF, 590 KB)
CHRP Clinical Indicators Quick Reference (PDF, 263 KB)
CHRP Financial Indicators Quick Reference (PDF, 160 KB)
Indicator Development at a Glance (PDF, 162 KB)
Performance Allocation at a Glance (PDF, 328 KB)
Peer Grouping at a Glance (PDF, 272 KB)
Community and Facility Profiles at a Glance (PDF, 321 KB)

Figure 9. Screen captures of hospital reporting of indicators – hospital reporting portal
(i) CHRP entry page
(ii) Mapping page
(iii) Peer group report
(iv) Facility snapshot
Source: CHRP Tool Guidebook

Figure 10. Publicly reported Key Findings
Source: CHRP website

Figure 11. Publicly reported Clinical Performance Allocation
Source: CHRP website

Case Study United Kingdom (England, Scotland, Wales)
2.1 Background
The new NHS Outcomes Framework – England
In 2010 the white paper Liberating the NHS set out a vision for an NHS that achieves health outcomes that are among the best in the world.
To achieve this, it outlined two major shifts: a move away from centrally driven process targets which get in the way of patient care, and a relentless focus on delivering the outcomes that matter most to people. The cornerstone of achieving this shift has been the development of an NHS Outcomes Framework which provides national-level accountability for the outcomes that the NHS delivers. Its purpose is threefold: to provide a national-level overview of how well the NHS is performing, wherever possible in an international context; to provide an accountability mechanism between the Secretary of State for Health and the NHS Commissioning Board; and to act as a catalyst for driving quality improvement and outcome measurement throughout the NHS by encouraging a change in culture and behaviour, including a renewed focus on tackling inequalities in outcomes. The framework defines five domains of focus in terms of accountabilities (Figure 12) which relate to the three parts of the definition of quality (effectiveness, patient experience and safety – Figure 13). It also comprises ten overarching indicators, covering the broad aims for each domain, and thirty-one improvement areas, looking in more detail at key areas within each domain.

Figure 12. The quality improvement system in the NHS
Figure 13. Domains of the framework as they relate to the aspects of quality improvement
Source: NHS Outcomes Framework

Relevant to this review is Domain 1, Preventing people from dying prematurely, for which the overarching indicator relates to mortality due to causes amenable to healthcare. Rather than singling out several conditions where prevalence is particularly high, the Secretary of State for Health tracks the progress of the NHS in reducing mortality across the full spectrum of causes considered amenable to healthcare.
This approach requires the definition of amenable mortality to be kept up to date in terms of the capabilities of current interventions. The improvement areas identified for Domain 1 are shown in Figure 14 and include reducing premature mortality from major causes of death such as cardiovascular disease and respiratory disease. In choosing the improvement areas, the conditions selected are those where it is clear that healthcare or public health interventions alone will not produce the required improvements in premature mortality.

Figure 14. Domain 1 indicators
Source: NHS Outcomes Framework

While the mortality indicators relating directly to the framework are high-level population-based indicators (e.g. potential years of life lost due to causes amenable to healthcare), the domains relate more broadly to other measures, including hospital mortality measures such as those reported through Quality Accounts (see below).

The new Healthcare Quality Strategy – Scotland
At around the same time as England was developing the above framework, Scotland was also focussing on a high-level strategy to achieve improvements in healthcare and patient safety. The Healthcare Quality Strategy for Scotland describes the approach and shared focus to realise the 2020 Vision for Health and Social Care. The Quality Strategy aims to deliver the highest quality healthcare to the people of Scotland and to ensure that the NHS, Local Authorities and the Third Sector work together, and with patients, carers and the public, towards a shared goal of world-leading healthcare. A Route Map has been designed to retain focus on improving quality and to make measurable progress towards the 2020 Vision. It describes 12 priority areas for action for pursuing the 2020 Vision for high quality, sustainable health and social care services in Scotland.
Based on the Institute of Medicine’s six dimensions of quality and informed by input from Scottish consumers on their expectations for the health system (caring, compassionate, communication, collaboration, clean environment, continuity of care and clinical excellence), three Quality Ambitions were developed:
Safe – There will be no avoidable injury or harm to people from healthcare, and an appropriate, clean and safe environment will be provided for the delivery of healthcare services at all times.
Person-Centred – There will be mutually beneficial partnerships between patients, their families and those delivering healthcare services which respect individual needs and values and which demonstrate compassion, continuity, clear communication and shared decision-making.
Effective – The most appropriate treatments, interventions, support and services will be provided at the right time to everyone who will benefit, and wasteful or harmful variation will be eradicated.
The strategy recognises the importance of measuring quality, and sets out a Quality Measurement Framework (QMF). This framework describes three levels nationally, which provide a basic structure for thinking about the intended use of sets of indicators. The Framework provides a structure for aligning the wide range of measurement that goes on across the NHS in Scotland for different purposes, describing how measurement helps to drive progress towards the Quality Ambitions and providing the ability to demonstrate improvement both locally and nationally.

Figure 15.
Quality Measurement Framework, NHS Scotland
Source: Quality Measurement Framework, NHS Scotland

Level 1
Level 1 of the framework describes a set of high-level indicators – the Quality Outcome Indicators – which are intended to show progress towards the Quality Ambitions and Outcomes. They are expected to require long time-scales to demonstrate change, and the indicators are mostly updated annually or every two years. Six healthcare Quality Outcomes provide a description of the priority areas for improvement in support of the Quality Ambitions. These Quality Outcomes provide a context for partnership discussions about local and national priority areas for action. The six healthcare Quality Outcomes are:
Everyone gets the best start in life, and is able to live a longer, healthier life
People are able to live well at home or in the community
Healthcare is safe for every person, every time
Everyone has a positive experience of healthcare
Staff feel supported and engaged
The best use is made of available resources
The 12 indicators include the Hospital Standardised Mortality Ratio and are:
Healthcare experience
Staff engagement
Healthcare Associated Infection (HAI)
Emergency admission rate/bed days
Adverse events
Hospital Standardised Mortality Ratio
Under 75 mortality rate
Patient Reported Outcome Measure
Self-assessed general health
Percentage of time in the last 6 months of life spent at home or in a community setting
Appropriate birth weight for gestational age (AGA)
Resource use indicator

Level 2
Level 2 relates to the performance management of NHS Boards, and consists of specific agreed targets (i.e. HEAT) to provide a focus for improvement. These are short-term targets (1–3 years).

Level 3
There are many existing national and local measurement systems to measure and/or drive improvement across health and care, all of which can be considered as part of Level 3.
Some examples of national quality measures can be found through the Quality Indicators pages. The Quality Improvement Hub provides information about local measurement for improvement.
A further strategy relevant to this review is the Scottish Patient Safety Programme (SPSP), which was established in 2009 with the overall aim of reducing hospital mortality by 15% by 2012. This was subsequently extended to a 20% reduction by December 2015. Since December 2009, Information Services Division (ISD) has produced quarterly hospital standardised mortality ratios (HSMR) for all Scottish hospitals participating in the SPSP. HSMRs are provided to enable these acute hospitals to monitor their progress in reducing hospital mortality over time. The HSMR is included within a suite of 12 national quality outcome measures that were developed to monitor progress on achieving the ambitions.

Public inquiries as drivers for change
Much of the work around quality and safety in the UK healthcare system has been driven by recent inquiries into poorly performing services, including the Francis Public Inquiry into the Mid Staffordshire NHS Foundation Trust. The government response to the Francis Report, Patients First and Foremost, released in February 2013, highlights early identification of problems as one of five key focuses:
- Preventing problems
- Detecting problems quickly
- Taking action promptly
- Ensuring robust accountability
- Ensuring staff are trained and motivated
Hospital mortality features strongly in the proposed actions relating to monitoring service performance. The recommendations, however, also highlight the challenges that have been experienced with hospital mortality reporting: ‘Mortality data must be interpreted with care, but it must also be accurate so that the public and patients can trust that they are hearing the truth.
So there will be tough penalties and possibly criminal sanctions on organisations that are found to be massaging figures or concealing the truth about their performance. A statutory duty of candour will reinforce the existing contractual duty, so that patients can be assured that they are given the plain truth about the care that a hospital provides.’
The government response also acknowledges that there has been a confusing array of information about hospitals and that the public cannot easily tell how well their local hospital is doing. In the future the Chief Inspector will ensure that there is a ‘single version of the truth’ about how hospitals are performing, not just on finance and targets, but on a single assessment that fully reflects what matters to patients. The Chief Inspector will make a balanced assessment of hospitals and give them a single, clear rating based on aggregated performance indicators to reflect the overall performance of health services. The result could be ‘outstanding’, ‘good’, ‘requiring improvement’ or ‘poor’. Outstanding hospitals will be given greater freedom from regulatory bureaucracy. The Friends and Family Test for both patients and staff will be a vital component of the rating. Everyone in the system, whether regulator or commissioner, will use the same single set of data to judge success. Importantly, information about hospitals will not be limited to aggregated ratings; it will be possible to drill down to information at a department, specialty, care group and condition-specific level.

2.2 Overview of the indicators and reporting approaches

National Health Service, England
Efforts to support monitoring and improvement within the English healthcare system have resulted in the development of over 1,000 indicators. The indicators and related data are available via the Indicator Portal of the Health and Social Care Information Centre (HSCIC).
This is England’s national source of health and social care information, working with a wide range of health and social care providers nationwide to provide the facts and figures that help the NHS and social services run effectively. The HSCIC collects data, analyses it and converts it into useful information. This helps providers improve their services and supports academics, researchers, regulators and policy makers in their work, including:
- Comparing the demographic profile of their local area with other regions and national averages
- Understanding what the population health challenges are in their area and how they may be changing over time
- Exploring the diverse range of factors that influence health inequalities
- Building a picture of the current state of health inequalities in their area
- Informing annual health reviews and equity audits
- Providing measurements for service planning, performance management and other success criteria
The Indicator Portal is accessible to the public. NHS staff can also view restricted data using an nww connection to the NHS network. A full list of the indicators published on the portal is also available on the homepage online. In the spreadsheet each indicator has a unique indicator code that can be typed into the search function on the website to go directly to any indicator. The indicators have been classified according to subject rather than type of indicator (e.g. mortality, prevalence, incidence). This ensures that all indicators are classified in a consistent manner and a user browsing a subject grouping will have access to other types of indicators outside of the Compendium. The portal enables keyword searching to support the user who is interested in specific attributes, for example indicator type. A user can therefore enter a search on 'mortality' and a list of all the mortality indicators will be returned; likewise a search on 'prevalence' would bring up all the prevalence indicators.
Searches can be refined by adding key words, e.g. ‘vaccination MMR’. Public reporting via the Indicators Portal is largely limited to Excel spreadsheets (see example below). The reviewers have not had access to the NHS reporting portal and are therefore unable to comment on this aspect of the reporting.

Figure 16. Online public reporting of indicators via the Indicator Portal

Figure 17. Indicators relevant to this review

Compendium of Population Health Indicators
A wide-ranging collection of over 1,000 indicators designed to provide a comprehensive overview of population health at a national, regional and local level. Not all indicators can be viewed at every level. These indicators were previously available on the Clinical and Health Outcomes Knowledge Base website (also known as NCHOD). They cover a range of areas of interest, including those relevant to this review, namely deaths, mortality, infant mortality and life expectancy. The indicators have not been significantly reviewed for over six years. The public version of the website (online) is available for any user.
The NHS version of the site is available through an N3 connection: online
Relevant to this review are the indicators categorised under Hospital care / Outcomes / Deaths:
Hospital care
- Admissions
- Procedures
- Outcomes
- Readmissions
- Discharges
- Deaths
Deaths
- Deaths within 30 days of emergency admission to hospital: myocardial infarction: indirectly age, sex and diagnosis standardised rates, 35-74 years, annual trend, F
- Deaths within 30 days of emergency admission to hospital: myocardial infarction: indirectly age, sex and diagnosis standardised rates, 35-74 years, annual trend, M
- Deaths within 30 days of emergency admission to hospital: myocardial infarction: indirectly standardised rate, 35-74 years, annual trend, P
- Deaths within 30 days of emergency admission to hospital: fractured proximal femur: indirectly standardised rate, all ages, annual trend, F
- Deaths within 30 days of emergency admission to hospital: fractured proximal femur: indirectly standardised rate, all ages, annual trend, M
- Deaths within 30 days of emergency admission to hospital: fractured proximal femur: indirectly standardised rate, all ages, annual trend, P
- Deaths within 30 days of emergency admission to hospital: stroke: indirectly standardised rate, all ages, annual trend, F
- Deaths within 30 days of emergency admission to hospital: stroke: indirectly standardised rate, all ages, annual trend, M
- Deaths within 30 days of emergency admission to hospital: stroke: indirectly standardised rate, all ages, annual trend,

NHS Quality Accounts
Quality Accounts are annual publicly available reports about the quality of services provided by an NHS healthcare service. Links to the data for the thirteen mandatory indicators, based on recommendations by the National Quality Board, are set out to enable providers access to the latest data. They include the new mortality indicator, the SHMI, which uses a standard and transparent methodology for reporting mortality at hospital trust level across the NHS in England.
The Quality Accounts are aligned under the domains of the new NHS Outcomes Framework. SHMI is positioned under Domain 1.

Dr Foster Intelligence and other providers
In the UK, as in the United States, much of the healthcare information collation and reporting is conducted by private organisations that engage directly with health services and provide a range of related services to support quality improvement. Dr Foster is one such organisation, which considers itself to be the leading provider of healthcare information and benchmarking in England, developing and implementing performance metrics to facilitate positive change in healthcare service delivery. Dr Foster’s annual Hospital Guide publishes data for trusts across England. Now in its eleventh year, the Hospital Guide publishes an independent and authoritative analysis of the variations that exist in acute hospital care in a way that aims to be meaningful for clinicians and managers and understandable to patients and the public. The 2001 Hospital Guide was the first time that information was published on how well or badly a hospital was performing. Information is based on analysis of admissions data (by the Dr Foster Unit at Imperial College London) and a Department of Health-approved questionnaire, which 99 per cent of NHS trusts completed in 2012. The 2012 Dr Foster Hospital Guide contains performance data on every hospital trust in England. In addition to publishing traditional mortality indicators, the Guide highlights information on the link between quality and efficiency and rates hospitals accordingly (Figure 18). The 2012 Guide includes four mortality indicators: the Hospital Standardised Mortality Ratio (HSMR), deaths in low-mortality DRGs, the Summary Hospital-level Mortality Indicator (SHMI) and deaths after surgery. The Hospital Guide 2013 will focus on mortality in an effort to concentrate acute trusts’ attention on this outcome measure.
In addition to the usual measures of mortality (HSMR, SHMI, deaths after surgery and deaths in low-mortality DRGs), the intention is to scrutinise some of the constituent parts of the HSMR, including: palliative care coding rate, case-mix index, doctors per bed, SMRs for AMI, pneumonia, stroke, heart failure and broken hips, and mortality at weekends.
Dr Foster Quality Account reports provide online reports for participating health services. Mortality indicators, including in-hospital mortality indicators for AMI, stroke and fractured neck of femur, are included under the domain of Patient Safety. Comparisons with other trusts are indicated by a colour-coded rating system: green for ‘exceeded expected’ performance, orange for ‘in line with expected’ performance and red for ‘below expected’ performance. The results are expressed as a ratio of actual deaths to expected deaths. These mortality indicators use a control limit (displayed on the graph as a white line), which is set at 99.8%. Data points falling within the control limits are said to display ‘common-cause variation’, meaning the variation may be due to chance. Data points falling outside the control limits are known as ‘outliers’, and chance is an unlikely explanation; they are said to display ‘special-cause variation’, that is, factors other than chance are the cause. In addition to the ratios for the individual indicators, the trusts are given a composite score summarising performance across the 13 patient safety indicators (the Patient Safety Summary Score). These scores are out of 100 and reported across five bands of performance. Detailed analysis and timely access to data is provided through the Real Time Monitoring (RTM) web-based service, which enables providers to measure, compare and understand their performance, monitor activity using statistical process control charts (CUSUM) and drill down to patient-level data for detailed investigations.
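The ratio-with-control-limits approach described above can be sketched in a few lines. This is an illustrative calculation only, not Dr Foster's published method: it assumes observed deaths are approximately Poisson-distributed around the expected count, and the normal approximation used here is a simplification of the exact limits a production system would use.

```python
import math

def mortality_ratio(observed: int, expected: float) -> float:
    """Ratio of actual to expected deaths (1.0 = as expected)."""
    return observed / expected

def control_limits(expected: float, z: float = 3.09) -> tuple:
    """Approximate two-sided 99.8% control limits for the ratio,
    assuming observed deaths are Poisson with mean `expected`
    (z = 3.09 is the 99.8% normal quantile)."""
    half_width = z / math.sqrt(expected)  # sd of O/E is sqrt(E)/E
    return 1.0 - half_width, 1.0 + half_width

obs, exp = 140, 100.0
ratio = mortality_ratio(obs, exp)
lower, upper = control_limits(exp)
# Points inside the limits show common-cause variation (chance);
# points outside are outliers showing special-cause variation.
variation = "common-cause" if lower <= ratio <= upper else "special-cause"
```

Note how the limits narrow as the expected count grows: a ratio of 1.4 is special-cause variation with 100 expected deaths, but would sit inside the limits for a small unit with only 4 expected deaths.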
RTM highlights areas of high or low performance and provides:
- The ability to monitor the quality of patient outcomes around key indicators such as mortality (HSMRs, ‘deaths after surgery’ and deaths in high- and low-risk conditions), length of stay, day case rates and 28-day readmissions.
- Intelligent analysis from an automated report function that highlights potential clinical issues as soon as they occur, e.g. changes in patient outcomes, case-mixes or coding quality.
- Statistical alerts using both CUSUM and relative risk to highlight areas of concern.
- Automatic letters and CUSUM charts that trigger an alert to the medical director and Care Quality Commission when there is significant divergence in expected clinical outcomes.
Participating trusts can:
- Obtain both a trust-wide view and the ability to drill down to specialty and patient level, to quickly identify areas of concern and inform clinical audit.
- Benchmark performance against comparable providers nationally, regionally and at a peer group level.
- Externally assess standardised mortality rates, a key recommendation of the Francis report.
- Manage the dual demands of quality and efficiency through a range of metrics, including length of stay, day case rates and readmissions.
- Identify avoidable readmissions – for patients readmitted to their own hospital or any other hospital in England.
- Inform performance improvement programs, including HSMR reduction, Enhanced Recovery and care bundles.
Further information is included on the website and the RTM factsheet.
CHKS is another major provider in the UK and internationally, and one that has been vocal in its views of mortality indicators. It does not provide public reporting of healthcare data but has developed its own indicator, the Risk Adjusted Mortality Index (RAMI). Details of the specifications for this indicator were not available via the internet.
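The CUSUM-based alerting listed above can be illustrated with a minimal risk-adjusted CUSUM in the style of Steiner et al. The odds ratio tested for and the alert threshold below are illustrative assumptions, not the values used by Dr Foster or the Care Quality Commission.

```python
import math

def risk_adjusted_cusum(outcomes, predicted_risks, odds_ratio=2.0):
    """Accumulate log-likelihood-ratio weights testing for a doubling
    (odds_ratio=2.0) of the odds of death, resetting at zero.
    outcomes: 1 = died, 0 = survived; predicted_risks: the modelled
    probability of death for each patient (the 'expected' risk)."""
    statistic, path = 0.0, []
    for died, p in zip(outcomes, predicted_risks):
        denom = 1.0 - p + odds_ratio * p
        weight = math.log(odds_ratio / denom) if died else math.log(1.0 / denom)
        statistic = max(0.0, statistic + weight)  # CUSUM never goes below zero
        path.append(statistic)
    return path

# Three deaths in a row among low-risk patients push the statistic up;
# crossing a pre-set threshold is what would trigger an alert letter.
path = risk_adjusted_cusum([1, 1, 1, 0], [0.1, 0.1, 0.1, 0.1])
THRESHOLD = 1.5  # illustrative alert threshold
alert = any(s > THRESHOLD for s in path)
```

The key design point is risk adjustment: a death of a patient with a high predicted risk adds little to the statistic, while an unexpected death of a low-risk patient adds a lot, so the chart responds to divergence from expected outcomes rather than to raw death counts.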
Figure 18. Dr Foster, The Hospital Guide 2012, public reporting of efficiency and mortality measures. Source: Dr Foster Hospital Guide 2012

Figure 19. Dr Foster Quality Account, Hospital reports, public access. Source: Dr Foster Quality Account website

NHS Scotland
The Quality Indicators service area of Information Services Division (ISD) Scotland aims to improve healthcare by operating as the centre of expertise in the development of quality improvement, efficiency and effectiveness indicators, and hospital benchmarking measures for NHS services, and to provide information in support of decision making for service improvement. The website provides a summary of progress against each of the Quality Outcome Indicators (see Figure 20 below). Work is underway to provide a meaningful categorisation of the current progress of each indicator, which will reflect the views of the Quality Alliance Board. Various other disease-specific reports are available, as well as detailed HSMR reports as described in the next section.

Figure 20. Indicator progress as it appears on the ISD website

INDICATOR PROGRESS
Care Experience: The latest value of the indicator is 79.5 in 2012, which is a statistically significant decrease of 0.7 compared to the 2011 value (the indicator is a score between 0 and 100, with higher scores representing a better experience; the score is based on survey questions and does not represent a percentage).
Emergency Admissions: In 2011/12 the rate of emergency admissions was 10,070 emergency admissions per 100,000 population. Since 2008/09, this rate has remained level at around 10,000 emergency admissions per 100,000 population. In 2011/12 the emergency admission bed day rate was 73,550 emergency bed days per 100,000 population.
Since 2008/09 the rate has shown a steady reduction.
End of Life Care: The proportion of the last six months of life spent at home or in a community setting was 90.7% in the year ending March 2011. This figure has remained at just over 90% for the 5 years to 2010/11.
Healthcare Associated Infection (HAI): In 2011 the prevalence of HAI was 4.9% in acute hospitals, a significant reduction since the last survey, when it was 9.5% in 2005/06.
Healthy Birthweight: The percentage of babies born at a healthy birthweight was 90.1% in the year ending March 2011. This figure has remained relatively stable over the last ten years.
Hospital Standardised Mortality Ratios (HSMR): HSMR at Scotland level has decreased by 11.8% between the quarter October to December 2007 and the latest quarter (October to December 2012).
Premature Mortality: In 2011 the death rate among those aged under 75 was 349 per 100,000 population of Scotland, with the figure decreasing by 22% over the last 10 years.
Self-assessed General Health: In 2011, 75.8% of adults described their health in general as either 'good' or 'very good'. Since 2008, this level has fluctuated between 75.0% and 76.7%.

2.3 Mortality indicators
Mortality measurement has received much attention within the United Kingdom over the past few years and there has been wide debate about the usefulness of mortality ratios, as evidenced by comments in the response to the Francis Report and various other commentaries. There is more than one measure routinely produced and used in the UK for the measurement of hospital mortality. The HSMR (Hospital Standardised Mortality Ratio) is the indicator developed by Imperial College and routinely produced by Dr Foster Intelligence. This was a first for the UK in terms of national coverage and has informed the Scottish model and other models internationally.
More recently other alternatives have emerged, most notably the Summary Hospital-level Mortality Indicator (SHMI) in England and Wales (Department of Health / NHS Information Centre) and the Risk Adjusted Mortality Index (RAMI) produced by CHKS Ltd. Introduction of the SHMI in England followed a review in 2010 to address concerns raised in the first Francis Report that the different versions of mortality indicators and other assessments of the quality of care in use could produce different results, which had inevitably resulted in some confusion across the NHS and the public at large. A Steering Group was formed to agree a single methodology for a mortality indicator for adoption across the NHS in England, and to offer some guidelines about its use. The group’s work encompassed:
- the definition of the indicator, its purpose and audience, and any limitations for its use;
- the technical construction of the indicator – transparency of methodology;
- the factors which affect the indicator’s construction and outputs – risk adjustment variables, service configurations, operational behaviours, recording and coding practices, and data quality;
- the arrangements that are needed for the appropriate presentation, publication and practical dissemination of the indicator’s results;
- the needs of different audiences in relation to the interpretation, construction, use and context for its use.
The recommendations and detailed considerations about the use of the SHMI are contained in the detailed report (Report from the Steering Group for the National Review of the Hospital Standardised Mortality Ratio, July 2010). Amongst these recommendations, and echoed by NHS Scotland, was the view that: ‘A summary hospital-level mortality indicator is one of a number of indicators which can provide important information about a hospital and its quality and, in some circumstances, help shine a light on potential areas for further analysis or investigation.
As a high level measure, it is a helpful indicator to have in the portfolio of screening and surveillance indicators and may help flag potential problems, but only if used in conjunction with and corroborated by other information.’ Scotland also has its own HSMR indicator, which measures mortality within 30 days of admission.

Dr Foster HSMR
In 2001 Dr Foster published the first Hospital Guide, which included the first national publication of standardised hospital death rates in the world. Over 70 per cent of NHS acute trusts use HSMR analysis to monitor clinical outcomes in their hospitals via Dr Foster’s Real Time Monitoring tool (RTM). The methodology for the HSMR is continually refined. The current definitions and methodology are described in detail in the document Understanding HSMR. In terms of adjustment for casemix, the risks take into account those patient characteristics that are most strongly correlated with death and which reflect the patient’s risk profile rather than the way in which the hospital has treated them. These factors are:
- Sex
- Age on admission (in five-year bands up to 90+)
- Interactions between age on admission (in five-year bands up to 90+) and Charlson comorbidity score
- Admission method (non-elective or elective)
- Socio-economic deprivation quintile of the area of residence of the patient (based on the Carstairs Index)
- Diagnosis/procedure subgroup
- Co-morbidities (based on Charlson score)
- Number of previous emergency admissions
- Year of discharge (financial year)
- Whether or not palliative care
- Month of admission
The method also adjusts for the presence of palliative care episodes by including it in the risk adjustment model. If any episode in the superspell has treatment function code 315 or contains Z515 in any of the diagnosis fields, then it is defined as ‘Palliative’; all others are termed ‘Non-palliative’. The methodology uses the Charlson score to adjust for comorbidities.
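The palliative-care flag just described translates directly into code. The codes 315 and Z515 come from the definition quoted above; the record layout (dictionary keys) is a hypothetical illustration, not the actual HES field names.

```python
def classify_superspell(episodes) -> str:
    """Label a superspell 'Palliative' if any episode carries treatment
    function code 315, or has Z515 in any diagnosis field, following the
    Dr Foster HSMR definition described above; otherwise 'Non-palliative'.
    Each episode is a dict with hypothetical keys 'treatment_function'
    and 'diagnoses' (a list of ICD-10 codes written without the dot)."""
    for episode in episodes:
        if episode.get("treatment_function") == "315":
            return "Palliative"
        if "Z515" in episode.get("diagnoses", []):
            return "Palliative"
    return "Non-palliative"
```

For example, a superspell whose only episode has diagnoses ["I219", "Z515"] is classified as 'Palliative' even though its treatment function code is not 315, because the Z515 code appears in a diagnosis field.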
The original Charlson weights have been updated and calibrated on English data due to differences in coding practice and hospital patient population characteristics. As a result, in 2011:
- Dr Foster expanded the coding definition of some conditions such that more patients are identified as having those conditions.
- Only secondary diagnoses (DIAG2-DIAG14) are now considered.
- There is now greater variation in weights between conditions, and the Charlson index (the sum of the weights) is treated as a continuous variable (limited to the range 0-50) for the purposes of risk adjustment.

Figure 21. Charlson weights, Dr Foster 2011. Source: Understanding HSMR. A toolkit on Standardised Hospital Mortality Ratios

In the most recent report, Dr Foster has made four methodological upgrades to the Hospital Standardised Mortality Ratio (HSMR) risk models to improve case-mix adjustment and make the methodology more statistically robust. These are included in the most recent update, Understanding HSMRs, March 2012. The four changes are:
- Only use data from 2000/01 onwards: Dr Foster has 15 years of data, and the early years are of lesser quality and perhaps reflect different patterns of care. Only data from 2000/01 onwards is therefore now used.
- Remove ethnicity from the case-mix model: Although coding is improving, ethnicity is variably recorded across trusts. In some cases, up to 50% of admissions record ethnicity as unknown, and this unequal recording may be very slightly biasing some indicators. It has therefore been removed from the case-mix model (although health services will still be able to analyse by ethnicity in Real Time Monitoring).
- Improved Charlson weightings for interaction: The effect of comorbidity differs by age. Dr Foster is taking this into account in the case-mix model.
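A sketch of how a Charlson-style index might be computed as a continuous score over the secondary diagnosis fields (DIAG2-DIAG14), as described above. The ICD-10 groups and weights below are placeholders for illustration only; Dr Foster's recalibrated 2011 weights are those shown in Figure 21 and are not reproduced here.

```python
# Placeholder weights for a few condition groups (NOT Dr Foster's values).
EXAMPLE_WEIGHTS = {
    "I50": 4,   # heart failure (illustrative weight)
    "C80": 12,  # metastatic cancer (illustrative weight)
    "E11": 1,   # type 2 diabetes (illustrative weight)
}

def charlson_index(secondary_diagnoses) -> int:
    """Sum each condition's weight once across the secondary diagnosis
    fields (DIAG2-DIAG14), treating the total as a continuous score
    capped to the 0-50 range used for risk adjustment."""
    counted = set()
    total = 0
    for code in secondary_diagnoses:
        group = code[:3]  # match on the 3-character ICD-10 category
        if group in EXAMPLE_WEIGHTS and group not in counted:
            counted.add(group)  # each condition contributes at most once
            total += EXAMPLE_WEIGHTS[group]
    return min(max(total, 0), 50)
```

Counting each condition group once, regardless of how many codes map to it, is what makes the index a per-patient comorbidity burden rather than a count of coded diagnoses.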
- Better adjustment for age: Where previously the HSMR model required at least 20 deaths per age group, Dr Foster now needs only 10 deaths per group. This better adjustment for age gives a better prediction of death for each patient.

Comparing the UK indicators – Dr Foster HSMR, the new SHMI indicator and Scottish HSMR
There are a number of key differences between the new NHS SHMI and the Dr Foster HSMR:
- Reporting – the Dr Foster HSMR is reported as a standardised ratio with a baseline of 100; the SHMI has a baseline of 1.
- Patient populations – SHMI includes deaths outside acute hospitals (including those occurring within 30 days after discharge), while the Dr Foster HSMR only includes inpatient deaths. The Technical Group advising on the SHMI considered that the 30-day timeframe provides a more complete picture of hospital mortality rates, maintaining that hospitals should be interested in what happens to their patients in the period immediately after discharge.
- Proportion of in-hospital deaths – SHMI includes all in-hospital deaths, while the Dr Foster HSMR includes diagnoses representing about 80% of deaths.
- The factors adjusted for vary between the two indicators – e.g. Dr Foster adjusts for palliative care, and in comparison to Dr Foster, the SHMI adjusts for fewer factors (refer Figure 22).
- Re-calibration – the SHMI is currently recalibrated and rebased quarterly, at every publication, while Dr Foster is recalibrated annually. This means that the England average figures which drive the expected figures are updated every quarter. Any improvement or otherwise in a SHMI value for a trust compared to the previous publication will be relative to the England average at the point of publication. Therefore, if the overall England average has improved and the performance of a trust has also improved on around the same scale, its SHMI value would show little, if any, change.
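The difference in reporting scale noted above is purely mechanical: both indicators divide observed deaths by expected deaths, with the HSMR multiplied by 100. A minimal sketch, in which expected deaths are simply the sum of modelled per-patient death probabilities (the indicators' actual risk models and covered populations differ, as described above):

```python
def expected_deaths(death_probabilities) -> float:
    """Expected deaths for a hospital: the sum of each admitted
    patient's modelled probability of death under the case-mix model."""
    return sum(death_probabilities)

def hsmr(observed: int, expected: float) -> float:
    """Dr Foster style: standardised ratio with a baseline of 100."""
    return 100.0 * observed / expected

def shmi(observed: int, expected: float) -> float:
    """SHMI style: the same ratio reported with a baseline of 1."""
    return observed / expected

# 5,000 admissions each with a modelled 2% risk => 100 expected deaths;
# 112 observed deaths gives an HSMR of 112 and a SHMI of 1.12.
exp_d = expected_deaths([0.02] * 5000)
```

So a trust reported with an HSMR of 112 and one reported with a SHMI of 1.12 are, on the reporting scale alone, saying the same thing: 12% more deaths than the case-mix model expected.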
The SHMI is reported with a number of companion indicators to help support understanding and interpretation – these are still being developed but include crude mortality, SHMI for all admissions, and SHMI for emergency admissions. ‘Drill-down’ indicators are also being considered, including specific SHMIs for selected diagnoses and procedures. In addition, a range of contextual indicators is being considered to further inform interpretation. In terms of correlation between the Dr Foster HSMR and the SHMI, this is captured briefly in the report An Evaluation of Standardised Hospital-level Mortality Indicator conducted by the School of Health and Related Research, Sheffield University, in 2011. A comparison of data from a number of Trusts from 2009/10 found reasonable correlation (r=0.753) between the HSMR and SHMI and no obvious outliers. It was however noted that the highest and lowest SHMIs are not the highest and lowest HSMRs. This was also found to be the case in the data reported by Dr Foster (Figure 23).

Figure 22. Scatter plot of SHMI and Dr Foster’s HSMR (selected Trusts 2009/10). Source: An Evaluation of Standardised Hospital-level Mortality Indicator, School of Health and Related Research, Sheffield University, 2011

Figure 23. Overview of mortality measures, Dr Foster Hospital Guide 2012. Source: Dr Foster Hospital Guide 2012

Dr Foster has also developed a tool called the Mortality Comparator, which is available to their clients, enabling them to investigate and understand the Summary Hospital-level Mortality Indicator (SHMI) and providing a detailed comparison between the HSMR and SHMI at a trust level, against national and chosen peers, with insights down to individual diagnosis group level. East Kent Hospitals NHS Foundation Trust has undertaken a root cause analysis of its SHMI using Mortality Comparator.
There are also differences between the Scottish HSMR and the English indicators. Figure 24 summarises the key differences between the general mortality measures used in the UK (as taken from the recent quarterly NHS Scotland report). For example, the Scottish indicator measures mortality within 30 days of admission (30-day post-admission mortality) and is therefore not dependent on length of stay. A useful comparison of the SHMI with the CHKS Risk Adjusted Mortality Index (RAMI) has also been undertaken by Paul Robinson at CHKS.

Figure 24. Comparisons between the favoured hospital mortality measures in England, Scotland and Wales. Source: NHS Scotland Hospital Standardised Mortality Ratios, Quarterly HSMR release, 28 May 2013

Limitations of the models
Limitations of the models are discussed in various documents in relation to the English and Scottish models. In Scotland, a number of caveats are considered and addressed in relation to whether the HSMR is a good indicator of quality. For example, the statistical model used to produce the HSMR does not take account of palliative care, and so changes over time in palliative care services could be expected to impact on the HSMR. In addition, the current model looks at deaths within 30 days of admission to hospital, which means that in-hospital deaths are not captured if the patient is in hospital for more than 30 days. Further work currently underway also aims to assess whether the measure would be more robust if it were based on the final diagnosis attributed to each patient rather than the first, and whether a degree of re-categorisation of diagnoses might be more appropriate.
In England, it is noted that the models used to predict the expected number of deaths for the SHMI calculation are built on fewer risk adjustment variables than those proposed by the Technical Review Group in its report. Using more risk adjustment variables might improve the predictive power of the models but could at the same time introduce more data quality issues. One of the main issues was that all the proposed risk adjustment variables were highly correlated, and using only age, Charlson comorbidity index, admission method and gender provided a simple and stable model, as recommended by the School of Health and Related Research in its final evaluation report. The issue of the inclusion or exclusion of palliative care patients has also been ongoing. Following concerns raised by some hospital trusts that they were unfairly penalised under the current SHMI methodology of not excluding palliative care, an investigation was conducted and the findings published in mid-2013 in The Use of Palliative Care Coding in the Summary Hospital-level Mortality Indicator. Coding was found to be a major issue, with organisations interpreting palliative care in different ways, and some organisations showing wide variation in coding over the course of three publications of the data. It was noted that there is difficulty in establishing a consistent definition of palliative care, and that there is a risk of gaming. The report also found that applying a model that adjusted for palliative care did not have a significant effect on the majority of trusts, and that the contextual indicators reported with the SHMI supported understanding of the casemix. The example press statement below highlights the issue as experienced by a local trust:

Figure 25.
Statement on mortality rates – Royal Devon and Exeter Trust

Statement on mortality rates – Royal Devon and Exeter Trust
You may be aware of recent media coverage suggesting that this Trust has higher than expected death rates. I can reassure our patients and their families that this is not the case and that the RD&E continues to deliver safe, high quality health care. There are a number of different indicators for deaths in hospital and they all use slightly different data, which can be confusing and misleading. For this reason, the Department of Health introduced a standard benchmark to measure all deaths in hospital, called the ‘Summary Hospital-level Mortality Indicator’. Under this benchmark, for the period covered by the recent Dr Foster Good Hospital Guide (which is referred to in media articles in recent weeks), our figure was 0.88. This is better than expected and put us in the best-performing 16 trusts in the country. The ‘Dr Foster’ definition of a death in hospital excludes terminally ill patients who are receiving palliative care. Due to the way we coded these patients during the period in question, we attributed fewer than 1% of deaths to this group compared to the national average of 15.9%. Unlike most other organisations we did not change our coding at the time and this therefore skews the figures, as we have been reporting a group of patient deaths that other hospitals have been excluding. We are currently changing our coding practice to a less stringent stance which will allow patients under hospice care to be assigned palliative care codes in the future. As a result, our ‘Dr Foster’ mortality rate will drop, though our preferred measure and that of the wider NHS is the Department of Health standard benchmark.
Source: Royal Devon and Exeter Trust website

A range of issues was also raised by the Technical Committee, as noted below:

Figure 26.
Key issues regarding the use of SHMI as identified by the Technical Group making recommendations about the indicator:

Issue 1
There is no ‘gold standard’ or single indicator which can be deemed as having most power in discerning good or poor quality of care. Nor is there a ‘perfect’ method for deriving such an indicator. A summary hospital-level mortality indicator is one of a number of indicators which can provide important information about a hospital and its quality and, in some circumstances, help shine a light on potential areas for further analysis or investigation.

Issue 2
The SHMI on its own does not have face validity when considered as a direct measure of quality of care. Views of individual members of the Technical Group differ about the level of confidence that can be placed on a summary level mortality indicator as an indicator of quality. All members agree that although it is not possible to say with complete confidence that a hospital with a high SHMI has worse quality of care than a hospital with a low SHMI, the SHMI can and should nonetheless be used as a trigger to ask hard questions, as described in detail in section 6 of this report. A high SHMI may be a reflection of local circumstances concerning the configuration and delivery of services, or of issues relating to data recording, coding or quality. Alternatively, it may reflect a real underlying problem in the quality of care that the hospital is delivering to its patients, and thus warrant further investigation as described elsewhere in this report. Notwithstanding this complexity, such difficulties of interpretation are common to most, if not all, indicators of quality, including those derived from routine sources. Because of this, specific proposals for the use of the SHMI, for different audiences, are included in the report.
Issue 3
It is a complex indicator and is open to misunderstanding and misinterpretation. As with most indicators, its use for all audiences and purposes is subject to cautions and caveats. As a high level measure, it is a helpful indicator to have in the portfolio of screening and surveillance indicators and may help flag potential problems, but only if used in conjunction with and corroborated by other information. Whilst it may offer an indication that there may have been preventable deaths which would warrant investigation, a numeric extrapolation of preventable deaths in any individual hospital should always be avoided, as it is not a valid application of the method.

Issue 4
Deaths following admission to hospital may fall into a number of distinct categories. They may be:
inevitable due to the seriousness of the patient's condition at the time of admission;
expected – for example if the patient is in hospital for end of life care;
potentially avoidable and a result of poor quality of care in that institution; or
not preventable by the hospital and a result of aspects beyond its control (for example if a patient dies in A&E before admission).
Not all of these factors would be recorded in HES. It must be noted that summary measures of mortality such as the SHMI cannot distinguish between these categories of death.

Issue 5
Attributing deaths to specific Trusts is complex. As noted in paragraph 3.1, the Technical Group has agreed in principle that the new SHMI should be interpreted in the context of deaths in all settings after an admission to hospital, and should include some deaths occurring in a defined period after discharge from hospital in the measure itself. However, there are possible bias effects which may affect the SHMI. This may make interpretation difficult, and so requires some further examination through the statistical modelling.
Issue 6
Data quality and coding issues have a major impact on the SHMI as a measure of both comparative performance across organisations and trends over time. The accuracy and completeness of coded clinical data about diagnoses and comorbidities will affect the validity of the SHMI of any individual hospital. The Steering Group therefore believes that the SHMI requires additional supporting information about, or indicators on, data quality and coding. Some of the methodological arguments (e.g. over adjustment for palliative care, depth of coding of comorbidities, and interpretation of the clinical heading of ‘impression’ applied by clinicians to diagnoses in source notes) could be resolved by simpler and more robust guidance on specific aspects of clinical coding, which would help Trusts in their responsibilities associated with data quality, recording and coding practices. It is proposed that the handling of coding guidelines in future includes, in the development phase, an impact assessment of the changes being proposed, so that the implications for the service and information systems can be assessed. Improving the depth of coding and reducing variation in coding quality across the NHS will, of course, remain a long term challenge that goes beyond the remit of this Review, but one that has direct relevance to its outputs.
Source: Report from the Steering Group for the National Review of the Hospital Standardised Mortality Ratio

In terms of future considerations, Scotland identifies the following questions to be considered by their advisory group:
The need to recalibrate the model every quarter (i.e. to consider whether they are satisfied with its robustness when using increasingly historical coefficients, particularly when point-in-time comparative analysis is used as part of the governance process)
The appropriateness of existing diagnostic groups
The appropriateness of indexing the record at point of admission (particularly in relation to selecting the main diagnosis on records that are potentially sourced from short-stay admissions units)
The latest situation with regard to coding of palliative care and the secondary diagnoses for comorbidities
Embedding the entire Hospital Scorecard into the escalation process.

In England the process is also ongoing, with recent changes including:
Increasing the robustness of the logistic regression models by using three years of past data to build the models rather than one year
Inclusion of an additional field (YEAR_INDEX) to facilitate case-mix adjustment
Updated risk modelling specifications to include logistic regression options for model convergence
Removal of gender-specific diagnosis groups, as these are no longer required
Exclusion of regular night attenders from the data set, for consistency with the exclusion of regular attenders
Specifying the reference category as the first category for all classification/case-mix variables in the risk model
Updating the SHMI banding requirements to report on one banding: 95% random effects model control limits for over-dispersion.

Further commentary around the use and limitations of the SHMI indicator is provided on the King's Fund website, including a response to the Keogh report.

Reporting – how is the data used and communicated to hospitals and the public?
Dr Foster
As previously noted, Dr Foster provides timely data to participating trusts via the Real Time Monitoring web-based service. Examples of reports relating to mortality are shown in the links below.

Figure 27. Examples of mortality charts provided through Dr Foster Real Time Monitoring (click on picture to follow link): Mortality and coding; HSMR Trends; Funnel Plot; Performance summary

Dr Foster usually displays each HSMR on a funnel plot. Data points falling within the control limits are consistent with random or chance variation and are said to display ‘common-cause variation’; for data points falling outside the control limits, chance is an unlikely explanation and they are said to display ‘special-cause variation’, that is, performance diverges significantly from the national rate and the trust is classified as an outlier. In classifying HSMRs as ‘high’, ‘low’ or ‘within the expected range’, Dr Foster uses statistical banding to account for random chance and minimise false positives. They use 99.8 per cent control limits to determine whether an HSMR is high or low. This means that if an HSMR is outside the control limit there is only a small possibility (0.2 per cent) that this is due to chance. Only hospitals that ‘pass’ this control limit test are grouped as high or low; all others are classed as within the expected range. Dr Foster Intelligence only publishes data using the 99.8 per cent control limit statistical test. In the Real Time Monitoring tool they use the ‘more liberal’ 95 per cent confidence interval banding, to give clinicians and managers an early warning of potential problems. The distinction between control limits and confidence intervals is important, although the two are very similar in construction and the difference between them is subtle: control limits offer hypothesis tests, whereas (strictly speaking) confidence intervals do not. The control limits are derived from the Poisson distribution.
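As a rough illustration of the banding logic described above, the sketch below computes approximate funnel-plot control limits under the assumption that observed deaths follow a Poisson distribution around the expected count. The function names and the normal approximation are illustrative only, not Dr Foster's actual implementation.

```python
import math

def funnel_limits(expected_deaths, z=3.09):
    """Approximate control limits for an HSMR funnel plot.

    If observed deaths O ~ Poisson(E), the ratio O/E has standard
    deviation roughly 1/sqrt(E), so the limits narrow as the number
    of expected deaths grows (the characteristic funnel shape).
    z = 3.09 corresponds to ~99.8% two-sided limits;
    z = 1.96 gives the 'more liberal' 95% banding.
    """
    half_width = z / math.sqrt(expected_deaths)
    return 100 * (1 - half_width), 100 * (1 + half_width)

def classify(observed, expected, z=3.09):
    """Band a hospital's HSMR as low / as expected / high."""
    hsmr = 100 * observed / expected
    lo, hi = funnel_limits(expected, z)
    if hsmr > hi:
        return hsmr, "higher than expected"
    if hsmr < lo:
        return hsmr, "lower than expected"
    return hsmr, "as expected"
```

Note how the choice of limit matters: with 100 expected deaths, an HSMR of 125 falls outside the 95 per cent limits (an early warning) but inside the 99.8 per cent limits used for publication.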
Dr Foster writes to trusts when an alert occurs on cumulative sum (CUSUM) charts, another kind of statistical process control chart, for a variety of individual diagnosis and procedure groups. These charts are run each month, and alerts are raised with a probability of a false alarm of less than 0.1% (a stricter threshold than the default of 1% on the RTM tool). Other restrictions are also applied to exclude some diagnoses, including cancer and vague symptoms and signs; diagnostic procedures such as endoscopies; and alerts with fewer than five deaths. Two senior academics examine each alert and decide whether the trust should be notified. They look more carefully at alerts from specialist trusts, to examine possible casemix reasons for an alert. The Care Quality Commission is notified of the alert when it is sent to the trust chief executive. These notifications are carried out in confidence and Dr Foster Intelligence is not party to which notifications are sent out.

The HSMR must not be considered a punitive measure but rather an indicator for organisations as a whole to monitor their mortality. HSMRs can be used to identify potential issues early, giving the organisation an opportunity to make sustainable changes to its service delivery. To facilitate this, Dr Foster suggests a pathway for instigating an investigation when an outlier for HSMR is notified. The steps include:

Checking coding – Has the trust submitted incorrect data or applied different data codes to other trusts across the UK? Poor depth of coding can also affect the HSMR, i.e. when there are no or few secondary codes.
A trust can improve its coding by encouraging coders and clinicians to work more closely together (some organisations have coders attached to specific specialties) so they can better understand each other's roles and limitations; by encouraging clinicians to use a Körner Medical Record (KMR) to determine the most appropriate primary diagnosis and procedure code; and by ensuring that staff inputting data such as DOB, sex and discharge dates, which must be properly recorded on the PAS system, understand the importance of the work they are doing and how it impacts on the organisation.

Casemix – Has something extraordinary happened within the time frame, i.e. an abnormal run of severely ill patients in a short period of time? Is comorbidity coding correct? Check the comorbidity coding to identify the true casemix of the patients; absent or poor comorbidity coding can affect the HSMR.

Structure – Does the organisation and its surrounding healthcare partners work in a different way to other trusts across the country? Do they have different care pathways, e.g. end of life care in the hospital or NHS-funded hospices? Other structural differences, such as no weekend discharges or nurse-led discharge teams, should also be considered.

Process – At this point, start considering that there is a potential issue with quality of care. Where service delivery needs to be reviewed, issues can be identified after monitoring and investigating alerts. Information systems such as RTM can help with this.

Individual or team – Very occasionally the investigation will lead to an individual or team. Where there is a commonality of personnel involved, or a particular team, nurse or department, see what extra support they need in order to deliver the best possible care.

Presentations available on the Real Time Monitoring system also include the Performance Summary dashboard (Figure 27). Public reporting is described earlier in this case study.
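The monthly cumulative sum alerting described above can be sketched with a risk-adjusted CUSUM of the kind commonly used for mortality surveillance: each patient contributes a log-likelihood ratio weight testing a doubling of the odds of death against the model-predicted risk, and the chart signals when the running sum crosses a threshold. The parameter values and function names here are illustrative assumptions, not the actual Dr Foster alerting rules.

```python
import math

def cusum_weight(p, died, odds_ratio=2.0):
    """Log-likelihood ratio weight for one patient, testing whether
    the odds of death are inflated by `odds_ratio` relative to the
    risk model's prediction p."""
    denom = 1 - p + odds_ratio * p
    return math.log(odds_ratio / denom) if died else math.log(1 / denom)

def run_cusum(patients, odds_ratio=2.0, threshold=4.5):
    """patients: iterable of (predicted_risk, died) in admission order.
    Returns the CUSUM path and the index of the first alert (or None).
    The sum is floored at zero, so runs of expected outcomes reset it."""
    s, path, alert_at = 0.0, [], None
    for i, (p, died) in enumerate(patients):
        s = max(0.0, s + cusum_weight(p, died, odds_ratio))
        path.append(s)
        if alert_at is None and s > threshold:
            alert_at = i
    return path, alert_at
```

A run of unexpected deaths in low-risk patients pushes the sum up quickly, while survivals nudge it back toward zero, which is why such charts can flag a problem within a few months rather than waiting for an annual HSMR.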
SHMI England
The SHMI is reported quarterly for all trusts via the Indicator Portal website, where information is presented in Excel spreadsheet form, including bandings that indicate whether a trust's SHMI is ‘as expected’ or otherwise. In addition there is a brief summary report providing an overview of performance for the system, as well as a Background Quality Report providing information about data sources, the statistical methods and supporting documentation. The indicator specifications are also included. The models used to derive the values are recalibrated on a quarterly basis in line with the publication. Pre-release access through the Indicator Previewer is available to trust medical directors at least 10 days prior to publication for quality assurance purposes. The data is still considered ‘experimental’ and is available for the period April 2010 to December 2012.

Figure 28. SHMI data downloaded from the Indicator Portal, NHS England

In the report leading to the establishment of the SHMI as the main indicator for NHS England, the Steering Group recommended that presentations of the SHMI for the public should avoid:
Portraying the SHMI as an isolated summary indication of quality of care for an organisation;
Using SHMI scores to derive information about, or compare performance across, different hospitals (such as in league tables) in the absence of a consideration of other comparative measures of quality;
Assessing the quality of care at specialty, team or procedure level;
Giving misleading information about the quality of the source data. Clear caveats, as advocated by the Technical Group, should explicitly identify problems with data quality.

The explanation on the website therefore includes the following (Figure 29).

Figure 29.
Explanation accompanying SHMI data on the NHS England website

‘The SHMI requires careful interpretation, and should not be taken in isolation as a headline figure of trust performance. It is best treated as a ‘smoke alarm'. The SHMI is an indication of whether individual trusts are conforming to the national baseline of hospital-related mortality. … The SHMI can be used locally by individual hospital trusts to assess and investigate their mortality related outcomes. Regulators and commissioning organisations can also use the SHMI to investigate outcomes for trusts under their jurisdiction. In all of these cases the SHMI should not be used in isolation but in conjunction with other indicators (SHMI contextual indicators and other mortality indicators) and information from other sources (patient feedback, staff surveys and other such material) that together form a holistic view of trust outcomes and a fuller overview of how trust processes are impacting on outcomes. While the public and patients will be interested in the SHMI, it is not intended primarily for use by patients or the public as it has not been specifically tailored for a public audience.’

Since the SHMI includes all patients, including palliative care patients, two contextual indicators relating to palliative care are also published: the percentage of all admitted patients who are coded as receiving palliative care, and the percentage of all patient mortalities coded as receiving palliative care. Additionally, five further contextual indicators are included in the publication:
Deaths within thirty days for elective admissions
Deaths within thirty days for non-elective admissions (including admissions coded as missing)
Deaths split by those occurring in hospital and those occurring outside hospital within 30 days of discharge
Provider spells split by deprivation quintile
Deaths split by deprivation quintile
These are based on the same spell-level data as the SHMI, i.e. all non-specialist acute trusts.
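At its core, the SHMI is simply the ratio of observed deaths to expected deaths, where the expected count is the sum of model-predicted death probabilities over a trust's spells. The sketch below illustrates this calculation with made-up coefficients; the real models are logistic regressions fitted per diagnosis group on age, Charlson comorbidity index, admission method and gender, so everything in `COEFS` here is a hypothetical stand-in.

```python
import math

# Illustrative coefficients only - the actual SHMI models are fitted
# per diagnosis group from three years of national HES data.
COEFS = {"intercept": -4.0, "age": 0.04, "charlson": 0.3,
         "non_elective": 0.8, "male": 0.1}

def predicted_risk(age, charlson, non_elective, male):
    """Probability of death for one spell under the toy logistic model."""
    z = (COEFS["intercept"] + COEFS["age"] * age
         + COEFS["charlson"] * charlson
         + COEFS["non_elective"] * non_elective
         + COEFS["male"] * male)
    return 1 / (1 + math.exp(-z))

def shmi(spells):
    """spells: list of (died, age, charlson, non_elective, male) tuples.
    SHMI = observed deaths / sum of model-predicted risks; a value
    near 1 means mortality is in line with the national baseline."""
    observed = sum(died for died, *_ in spells)
    expected = sum(predicted_risk(*rest) for _, *rest in spells)
    return observed / expected
```

This is why casemix and coding matter so much: the Charlson index and admission method feed directly into the expected count, so under-coding comorbidities deflates the denominator and inflates the reported ratio.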
A process for requesting underlying data for the SHMI is available. A supplementary report on persistent outliers is also available; this report covers the past five SHMI publications and includes tables of trusts that have been identified as persistent outliers, contextual indicators recalculated for persistent outliers, and any known issues identified in the past five SHMI publications.

Figure 30. Indicator Portal screen showing data selections in relation to SHMI – NHS England

Figure 31. Example of chart included in quarterly summary report

The SHMI is also available for public viewing via the NHS Choices website (Figure 32). Key facts are published on the website for each data issue; for April 2013 these included:
10 trusts had a SHMI value categorised as ‘higher than expected’
18 trusts had a SHMI value categorised as ‘lower than expected’
114 trusts had a SHMI value categorised as ‘as expected’
The percentage of patient admissions with palliative care coded at either diagnosis or specialty level is approximately 1.0 per cent
The percentage of patient deaths with palliative care coded at either diagnosis or specialty level is approximately 18.9 per cent
The percentage of elective admissions where a death occurs either in hospital or within thirty days (inclusive) of being discharged is approximately 0.6 per cent
The percentage of non-elective admissions, including admissions coded as ‘unknown’, where a death occurs either in hospital or within thirty days (inclusive) of being discharged is approximately 3.7 per cent
The percentage of deaths occurring in hospital and outside hospital within 30 days of discharge is approximately 73.5 per cent and 26.5 per cent respectively
The percentage of finished provider spells falling under each deprivation quintile (where quintile 1 is the most deprived) is 23.1 per cent for quintile 1, 19.9 per cent for quintile 2, 17.8 per cent for quintile 3, 16.2 per cent for quintile 4 and 14.6 per cent for quintile 5; there is insufficient information to calculate the deprivation quintile for 8.4 per cent of finished provider spells
The percentage of deaths falling under each deprivation quintile (where quintile 1 is the most deprived) is 21.1 per cent for quintile 1, 20.5 per cent for quintile 2, 20.6 per cent for quintile 3, 19.4 per cent for quintile 4 and 17.0 per cent for quintile 5; there is insufficient information to calculate the deprivation quintile for 1.4 per cent of deaths.

Figure 32. Examples of reporting on the NHS Choices website
Source: NHS Choices website

As referred to in the government's response to the Francis Inquiry, there is a heightened commitment to ensuring accountability for healthcare outcomes. In February 2013 the English Prime Minister directed the NHS Medical Director, Sir Bruce Keogh, to undertake an investigation of 14 trusts. This included five organisations that had been outliers for two years on the Summary Hospital-level Mortality Indicator (SHMI) and nine organisations that had been outliers for two years on the Hospital Standardised Mortality Ratio (HSMR). The purpose of the investigation is to assure patients, public and Parliament that these hospitals understand why they have a high mortality and have all the support they need to improve. This will be a thorough and rigorous process, involving patients, clinicians, regulators and local organisations. The report of the investigation was published in July 2013. The review identified a wide range of areas for improvement; however, it also identified the complexity of using and interpreting aggregate measures of mortality, including the HSMR and SHMI.
‘The fact that the use of these two different measures of mortality to determine which trusts to review generated two completely different lists of outlier trusts illustrates this point. However tempting it may be, it is clinically meaningless and academically reckless to use such statistical measures to quantify actual numbers of avoidable deaths. Robert Francis himself said, "it is in my view misleading and a potential misuse of the figures to extrapolate from them a conclusion that any particular number, or range of numbers of deaths were caused or contributed to by inadequate care".’

‘The key message that has accompanied the SHMI is that no one single measure should be taken in isolation and should not be used to assess quality in totality. Good quality is about a whole series of measures and looking at any anomaly within the set of indicators. So care is required when measuring mortality, and even more care is needed when trying to explain what the various mortality measures mean to the public.’

As might be expected, public reporting of mortality indicators, and the investigations that have flowed from it, are not without their public relations challenges for health trusts, with media releases such as the example below being not uncommon.

Figure 33. Example of media release by NHS trust in response to mortality findings

Review into hospital mortality indicators – Buckinghamshire Trust, Monday 11 February 2013

Buckinghamshire Healthcare NHS Trust takes patient safety very seriously and has welcomed today's announcement that Sir Bruce Keogh, Medical Director of the NHS, has widened his review into hospital mortality indicators. Buckinghamshire Healthcare has recorded a higher than average Hospital Standardised Mortality Ratio (HSMR) over the past two years and has focussed its efforts on understanding why.
Action plans have been put in place and we have seen an improvement in our mortality indicators year-on-year as a result. This has included our doctors reviewing deceased patient notes on a monthly basis and the establishment of a mortality task force, which looks at patient care, the patient experience and clinical coding. Lynne Swiatczak, Chief Nurse and Director of Patient Care Standards, said: ‘Our regular detailed reviews of patient case notes and mortality data have not identified any areas of concern with patient care, but our task force and Trust Board continue to look in-depth at this issue. As a provider of a wide range of services including acute care, five community hospitals and a hospice, we have also been working with independent experts to understand how this may impact on our HSMR. We welcome the approach being taken by Sir Bruce's review, in particular the additional support and assurance it will provide to, and build upon, our own work. We will ensure this review is given our full support.’ Buckinghamshire Healthcare has a good record for patient care and quality. It has one of the lowest infection rates in the country, with no cases of MRSA bacteraemia over the past 12 months. Its stroke service has recently been rated in the top 10% of trusts in the country. In the past two years we have seen improvements in our mortality rates, which were at 118 in 2009/10 but have fallen to 109 in 2011/12.

Scotland
In the published data, there appears to be a greater emphasis in England on comparisons between a trust's SHMI and the national average than there is in Scotland. The Scottish approach is to focus on individual hospital trends and the aim of achieving a 20% reduction by 2015. In Scotland, mortality for each hospital is standardised to a fixed baseline period, so individual patient risks remain constant over time.
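Because the Scottish HSMR is standardised to a fixed baseline period, progress is naturally expressed as percentage change from that baseline rather than as deviation from a moving national average. A minimal sketch of that bookkeeping (the function names are illustrative, not ISD terminology):

```python
def percent_change_from_baseline(baseline_hsmr, current_hsmr):
    """Percent change relative to the fixed baseline period;
    negative values are reductions in standardised mortality."""
    return 100 * (current_hsmr - baseline_hsmr) / baseline_hsmr

def meets_target(baseline_hsmr, current_hsmr, target_reduction=20.0):
    """True once the hospital has achieved the stated reduction
    (e.g. Scotland's aim of a 20% reduction by December 2015)."""
    change = percent_change_from_baseline(baseline_hsmr, current_hsmr)
    return change <= -target_reduction
```

Holding the baseline fixed means each hospital is compared with its own past performance, which is why the Scottish figures support trend statements (such as an 11.8% reduction since 2007) that the English relative-to-national-average measures do not.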
Since the first release of quarterly HSMR statistics to NHS Boards across Scotland in December 2009, three key phases have been established in the Quarterly Process, which has HSMR at its heart but also includes the wider context of other indicators of quality:
1. HSMR Management Information Tool
2. Official Statistics Publication of HSMR for Scotland
3. The Hospital Scorecard

A key component of the initial production of the HSMR is the systematic review of the data by the Information Services Division and Healthcare Improvement Scotland. This aims to identify potential patterns in the data and to initiate a dialogue with boards where appropriate. The first release of HSMR to boards in December 2009 was not followed by publication of the statistics. This gave boards time to gain a greater understanding of some of the implications of the fairly complex adjustments applied in the model and to reconcile these with their own local data; a set of abbreviated summary tables was published on a dedicated website and linked to the main ISD site. Since then, the format of release to boards has mirrored that of the management information tool, i.e. the same series of tables and charts for Scotland and each individual participating hospital. The publication had remained relatively unchanged until now. The May 2013 report has been much expanded to include more substantial commentary and context, including a look at stratified patterns of mortality at Scotland level and longer-term trends. There is also more commentary on the evolution of the measure in Scotland: where it came from, where it is now and where it is headed. The report also includes a more comprehensive look at how the Scottish HSMR compares to similar measures in other parts of the UK.
The Hospital Scorecard is a product commissioned by the Scottish Government's Directorate for Health Workforce & Performance and is now established as a routine part of the quarterly HSMR cycle. The scorecard is a management information product that incorporates HSMR alongside a series of other indicators, some of which are already routinely published. The other indicators cover readmissions, length of stay, hospital acquired infection rates, A&E waiting times and patient experience. The purpose of the scorecard is to provide an overview with different indicators synchronised to a common point in time. A major benefit of the scorecard approach is that it addresses concerns raised about governance processes based on the review of HSMR alone.

Throughout the quarterly cycle, interaction with boards is key and openly encouraged. Dialogue with the majority of NHS boards has been extensive since HSMRs were first released, taking the form of bespoke dialogue, or occurring more formally via the SPSP learning sessions, the Quality HUB, or the formal escalation process described above. To help users within the NHS better understand their data, and to encourage their sense of ownership of the information, a supportive infrastructure has been put in place in partnership between ISD and HIS, offering:
Sub-group analysis
Individual audit of case-listings
Assistance in interpretation of the national statistics and local intelligence

Figure 34. Example of data available on the Scottish website

Figure 35.
Examples of data included in the Quarterly report
Source: Hospital Standardised Mortality Ratios, Quarterly HSMR release 28 May 2013

The Scottish quarterly report also includes key findings; for the May 2013 report these were as follows:
The Scottish Patient Safety Programme (SPSP) was established with the overall aim of reducing hospital mortality by 15% by December 2012. This was then extended to a 20% reduction by December 2015.
HSMRs are calculated by adjusting crude mortality data to take account of some of the factors known to affect the underlying risk of death.
The HSMR at Scotland level decreased by 11.8% between October-December 2007 and October-December 2012.
Rolling annual HSMRs show that there was a sustained reduction in hospital mortality between 2009 and 2011; the level thereafter has remained relatively constant.
Hospital mortality has fallen for all types of admission; non-elective medical patients consistently account for the majority of deaths within 30 days of admission.
Patients from the least deprived areas of Scotland consistently have lower levels of crude 30-day mortality than patients from more deprived areas.
Twenty-eight (90%) of the thirty-one hospitals participating in the SPSP have shown a reduction in HSMR since October-December 2007 (the end of the baseline period); eight of those had a reduction in excess of 15%.
Overall hospital mortality at Scotland level had been falling prior to the baseline period.
HSMRs in Scotland are not directly comparable to similar measures adopted elsewhere in the United Kingdom.
A high or higher than expected HSMR should be a trigger for further investigation; in isolation it cannot be taken to imply a poorly performing hospital or poor quality of care.
The future development of HSMR in Scotland will be driven by the extended aim of a 20% reduction by December 2015 and implementation of the Quality Strategy – Quality Measures Framework.

More information

SHMI Methodology – England
Report from the Steering Group for the National Review of the Hospital Standardised Mortality Ratio, July 2010
Indicator Specification: Summary Hospital-level Mortality Indicator methodology
Indicator Specification: Percentage of admissions with palliative care coding
Indicator Specification: Percentage of deaths with palliative care coding
Indicator Specification: Deaths within 30 days for elective admissions
Indicator Specification: Deaths within 30 days for non-elective admissions
Indicator Specification: Deaths split by those occurring in hospital and those occurring outside hospital within 30 days of discharge
Indicator Specification: Provider spells split by deprivation quintile
Indicator Specification: Deaths split by deprivation quintile
SHMI Specification issues log
Amended AHRQ CCS ICD-10 lookup table May13
Process for requesting underlying data for SHMI
Access the latest SHMI publication
Provider Spells Methodology
HES Mortality Data Guide
Palliative Care Coding Report
SHMI FAQs

HSMR Methodology – Dr Foster
Dr Foster Intelligence
Understanding HSMRs – A toolkit on Hospital Standardised Mortality Ratios

HSMR – NHS Scotland
NHS Scotland's most recent quarterly report: Hospital Standardised Mortality Ratios, Quarterly HSMR release 28 May 2013

Other references
The Association of Public Health Observatories' (now part of Public Health England) Dying to know – How to interpret and investigate hospital mortality measures
CHKS's Mortality Measurement – six factors to consider in interpreting hospital indicators, November 2011
CHKS's Hospital Standardised Mortality Ratios – their use as a quality measure, March 2010