Gary Sutton, Improvement Advisor
What is Quality Improvement? How do we start?

What is Quality Improvement?
“Quality Improvement is a broad range of activities of varying degrees of complexity and methodological and statistical rigor through which providers develop, implement and assess small-scale interventions and identify those that work well and implement them more broadly in order to improve clinical practice.”
Mary Ann Bailey, The Hastings Center

The first law of improvement
Every system is perfectly designed to achieve exactly the results it gets.
Peter Senge, The Fifth Discipline

Evidence-based knowledge vs evidence-based delivery
It takes 17 years for 14% of evidence to get into practice.
No model is perfect; some are useful.

Our change theory
• A clear, stretch goal
• A method
• Predictive, iterative testing
Aim, Measures, Changes, Testing (The Improvement Guide, API)

Aim
• Aligned
• Timed
• Numeric
• Unachievable (by hard work alone)
• Non-negotiable (once set)

Aim Statements: Outcomes, Process, Relative or Absolute?
• To reduce the number of children who are looked after at home by 10%, by end-2015.
• 90% of children who are looked after at home will not end up becoming looked after away from home, by end-2015.
• Social workers will have fortnightly contact lasting at least 1 hour with every child looked after at home, by end-2014.
• A permanence decision should have been made within 6 months for all children who are looked after continuously for at least 6 months, by end-2015.

Measures (The Improvement Guide, API)

Why are you measuring? Improvement? Accountability? Research?
The answer to this question will guide your entire quality measurement journey!

Aspect: Aim
• Improvement: improvement of processes/systems (efficiency & effectiveness)
• Accountability: comparison, choice, reassurance, motivation for change
• Research: new knowledge (efficacy)

Aspect: Methods
Test observability
• Improvement: test observable
• Accountability: no test, evaluate current performance
• Research: test blinded or controlled
Bias
• Improvement: accept consistent bias
• Accountability: measure and adjust to reduce bias
• Research: design to eliminate bias
Sample size
• Improvement: “just enough” data, small sequential samples
• Accountability: obtain 100% of available, relevant data
• Research: “just in case” data
Flexibility of hypothesis
• Improvement: flexible hypotheses, changed as learning takes place
• Accountability: no hypothesis
• Research: fixed hypothesis (null hypothesis)
Testing strategy
• Improvement: sequential tests
• Accountability: no tests
• Research: one large test
Determining if a change is an improvement
• Improvement: run charts or Shewhart control charts (statistical process control)
• Accountability: no change focus (maybe compute a percent change or rank order the results)
• Research: hypothesis, statistical tests (t-test, F-test, chi square), p-values
Confidentiality of the data
• Improvement: data used only by those involved with improvement
• Accountability: data available for public consumption and review
• Research: research subjects’ identities protected

Why Time Is Important for Measurement
• Aggregate measures alone do not lead to predictions about future performance or insights to explain past variation
• Displaying data over time (using run charts or control charts) allows us to make informed predictions, and thus make changes to create different results
“When you have two data points, it is very likely that one will be different from the other.” W. Edwards Deming

[Figure: Cycle time (min.) scenarios. Scenario 1 shows only the aggregate averages before (70 min) and after (35 min) the change; Scenarios 2 and 3 plot cycle time by month (Jan to Dec) with the change date annotated; a further chart shows cycle time results for units 1, 2 and 3. R Lloyd, Institute for Healthcare Improvement]
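The contrast these scenarios draw between two aggregate averages and a chart of the data over time can be sketched in a few lines of Python. This is an illustration only, not part of the source material: the monthly cycle-time values are invented and matplotlib is assumed to be available. It prints the same before/after averages and then draws a simple run chart (points in time order with a median centre line and the change date marked).

```python
# Illustrative only: invented monthly cycle-time data, not from the slides.
import statistics
import matplotlib.pyplot as plt

months = ["Jan", "Feb", "Mar", "Apr", "May", "Jun",
          "Jul", "Aug", "Sep", "Oct", "Nov", "Dec"]
cycle_time = [72, 69, 74, 70, 68, 71,   # before the change (made at end of June)
              62, 55, 48, 40, 36, 34]   # after the change

# The two aggregate averages hide when and how the improvement happened.
before, after = cycle_time[:6], cycle_time[6:]
print(f"Avg before change: {statistics.mean(before):.1f} min")
print(f"Avg after change:  {statistics.mean(after):.1f} min")

# Run chart: every point in time order, with the median as the centre line.
x = range(len(cycle_time))
plt.plot(x, cycle_time, marker="o")
plt.axhline(statistics.median(cycle_time), linestyle="--", label="median")
plt.axvline(5.5, color="grey", linestyle=":", label="change made")
plt.xticks(x, months)
plt.ylabel("Cycle time (min.)")
plt.legend()
plt.show()
```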
Minimum Standard for Monthly Reporting in the Collaborative: Annotated Time Series
[Figure: annotated time series of cycle time in the office (minutes), with a goal line and changes annotated on the chart: huddles tried, nurses start early, lab changes, patient moved into rooms ASAP.]

Fundamental Questions for Measurement
1. How can we monitor the real-time behaviour of the system, steer it to avoid crashes, and maintain its operational reliability?
2. Over time, where are the gaps in practice that indicate a need for system change (i.e. improvement)?
3. In our efforts to improve, what’s working? What changes are improvements? Are we on track to meet our aims?

Seek Usefulness, Not Perfection

Measurement Guidelines
• The key measures should clarify the aim and make it tangible
• Keep it simple: be careful about overdoing process measures
• Use a balanced set of measures: process, outcome and balancing measures

Outcome, Process and Balancing Measures
• Outcome: voice of the customer; direct link to the aim
• Process: voice of the workings of the system; what we work on to get to the aim
• Balancing: if we push on one thing, will something else go wrong?

How will we know that a change is an improvement?
1. By understanding the variation that lives within your data
2. By making good management decisions on this variation (i.e. don’t overreact to a special cause, and don’t treat random movement of your data up and down as a signal of improvement)

Changes (The Improvement Guide, API)

Selecting Changes
• Copy: use the literature, the experience of others, hunches and theories
• Be strategic: set priorities based on the aim, known problems, and feasibility
• Avoid low-impact changes
(The Improvement Guide, Langley et al.)

Measuring for Improvement
[Diagram: driver diagram linking the aim (“an improved system”) to primary and secondary drivers, with changes 1, 2 and 3 tested through PDSA cycles and measures attached at each level.]

Outcome & Process Measures:
• Denominator = total population
• Assess the outcomes over a period of time (e.g. prior quarter, year)
• Ultimate measures of overall system quality
• ANOVA, control charts

‘Current Process’ Measures:
• Denominator = clients seen in the most recent measurement period (week, month)
• Assess current efforts to improve processes & other drivers
• P, U, XbarS charts

‘PDSA’ Measures:
• Focus on single clients & events
• XmR charts to test for immediate process change
• RCA (root cause analysis) for change ideas
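The PDSA-level measures above lean on XmR (individuals and moving range) charts to decide whether a shift in a single measure is a real signal. As a hedged illustration only, the following sketch applies the generic XmR limit calculation to invented weekly values; nothing in it is prescribed by the source slides.

```python
# Illustrative XmR (individuals / moving range) limits; the data are invented.
values = [34, 36, 31, 38, 35, 40, 33, 37, 36, 32, 39, 35]

mean_x = sum(values) / len(values)
moving_ranges = [abs(b - a) for a, b in zip(values, values[1:])]
mr_bar = sum(moving_ranges) / len(moving_ranges)

# 2.66 is the standard constant that converts the average moving range
# into approximate 3-sigma limits for individual values.
ucl = mean_x + 2.66 * mr_bar
lcl = mean_x - 2.66 * mr_bar
print(f"Centre line: {mean_x:.1f}")
print(f"Control limits: {lcl:.1f} to {ucl:.1f}")

# Points outside the limits are special-cause signals worth investigating;
# points moving up and down inside the limits are common-cause variation,
# not evidence of improvement.
signals = [(i, x) for i, x in enumerate(values) if x > ucl or x < lcl]
print("Special-cause signals:", signals if signals else "none")
```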
Testing (The Improvement Guide, API)
“What’s next?” “Did it work?” “What will happen if we try something different?” “Let’s try it!”

Move Quickly to Testing Changes
• Year
• Quarter
• Month
• Week
• Day
• Hour
“What tests can we complete by next Tuesday?”

Examples of PDSA Cycles
Aim: eliminate queues at airport security
Theory: separate flows for people and bags will reduce delays at security stations
Cycle 1: test the system with one passenger at one security station
Cycle 2: test the system with one passenger at all stations
Cycle 3: test the system with every 10th passenger
Cycle 4: test with all passengers for 1 day
Cycle 5: implement the new process

Example of Testing Multiple Changes
Aim: eliminate queues at airport security
• Use separate flows for people and bags
• Match capacity & demand
• Use visual reminders
• Use self-scanners as pre-check

Population and Scope of Change
[Diagram: scope of change for healthcare processes expands from a single-unit prototype (segments), to the system targeted for implementation (defined by the aim), to spread to the total system (additional units, sites, organisations).]
As you move from pilot testing to implementation to spread, your population of interest will need to be adjusted.

Smaller Scale Tests: Oneness
Conduct the next test in 1 area, with 1 worker, with 1 service user

PDSA Feedback Checklist