Presentation - Lloyd - Institute for Healthcare Improvement

Better Quality
Through Better Measurement
Asia Pacific Forum on
Quality Improvement in Health Care
Robert Lloyd, PhD, Mary Seddon, MD and
Richard Hamblin
Wednesday 19 September 2012
© 2008 R. C. Lloyd & Associates, R. Scoville and the IHI
Faculty
Robert Lloyd, PhD
Executive Director Performance Improvement
Institute for Healthcare Improvement
Boston, Massachusetts, USA
Mary Seddon, MD
Clinical Director
Centre for Quality Improvement, Ko Awatea
Counties Manukau DHB, Auckland, NZ
Richard Hamblin,
Director of Health Quality Evaluation
Health Quality & Safety Commission, Wellington, NZ
Discussion Topics
• Why are you measuring?
• The Quality Measurement Journey
• Understanding variation conceptually
• Understanding variation statistically
• Linking measurement to improvement
©Copyright 2010 Institute for Healthcare Improvement/R. Lloyd
Exercise
Quality Measurement Journey Self-Assessment
This self-assessment is designed to help quality facilitators gain a better understanding of where they personally
stand with respect to the milestones in the Quality Measurement Journey (QMJ). What would your reaction be
if you had to explain why using a run or control chart is preferable to computing only the mean or the standard
deviation, or to calculating a p-value? Can you construct a run chart, or help a team decide which control chart is
most appropriate for their data?
You may not be asked to do all of the things listed below today or even next week. But, if you are facilitating a
QI team or advising a manager on how to evaluate a process improvement effort, sooner or later these
questions will be posed. How will you deal with them?
The place to start is to be honest with yourself and see how much you know about the QMJ. Once you have had
this period of self-reflection, you will be ready to develop a learning plan for self-improvement and
advancement.
Use the following Response Scale. Select the one response which best captures your opinion.
1 I could teach this topic to others!
2 I could do this by myself right now but would not want to teach it!
3 I could do this but I would have to study first!
4 I could do this with a little help from my friends!
5 I'm not sure I could do this!
6 I'd have to call in an outside expert!
Source: R. Lloyd, Quality Health Care: A Guide to Developing and Using Indicators. Jones & Bartlett Publishers, 2004: 301-304.
Quality Measurement Journey Self-Assessment
Source: R. Lloyd, Quality Health Care: A Guide to Developing and Using Indicators.
Jones & Bartlett Publishers, 2004: 301-304.
Measurement Topic or Skill (rate each on the 1–6 Response Scale):
• Moving a team from concepts to a set of specific quantifiable measures
• Building clear and unambiguous operational definitions
• Developing data collection plans (including frequency and duration of data collection)
• Helping a team figure out stratification strategies
• Explaining and designing probability and nonprobability sampling options
• Explaining why plotting data over time is preferable to using aggregated data and summary statistics
• Describing the differences between common and special causes of variation
• Constructing and interpreting run charts (including the run chart rules)
• Deciding which control chart is most appropriate for a particular measure
• Constructing and interpreting control charts (including the control chart rules)
• Linking measurement efforts to PDSA cycles
• Building measurement plans into implementation and spread activities
The Model for Improvement
When you combine the 3 questions…
• What are we trying to accomplish?
• How will we know that a change is an improvement? (Our focus today)
• What change can we make that will result in improvement?
…with the PDSA cycle (Plan → Do → Study → Act), you get the Model for Improvement.
Source: Langley, et al. The Improvement Guide, 1996.
Measurement should help you
connect the dots!
The Model for Improvement
The three questions provide the strategy:
• What are we trying to accomplish?
• How will we know that a change is an improvement? (Our focus today)
• What change can we make that will result in improvement?
The PDSA cycle (Plan → Do → Study → Act) provides the tactical approach to the work.
Source: Langley, et al. The Improvement Guide, 1996.
How will we know that a
change is an improvement?
1. By understanding the variation that lives within your data
2. By making good management decisions based on this variation (i.e., don't overreact to a special cause, and don't mistake random up-and-down movement of your data for a signal of improvement).
Why are you measuring?
Improvement?
The answer to this question will guide your entire quality
measurement journey!
“The Three Faces of Performance Measurement:
Improvement, Accountability and Research”
by
Leif Solberg, Gordon Mosser and Sharon McDonald
Journal on Quality Improvement vol. 23, no. 3, (March 1997), 135-147.
“We are increasingly realizing not only how critical
measurement is to the quality improvement we
seek but also how counterproductive it can be to
mix measurement for accountability or research
with measurement for improvement.”
The Three Faces of Performance Measurement

Aim
• Improvement: improvement of care (efficiency & effectiveness)
• Accountability: comparison, choice, reassurance, motivation for change
• Research: new knowledge (efficacy)

Methods

Test Observability
• Improvement: test observable
• Accountability: no test, evaluate current performance
• Research: test blinded or controlled

Bias
• Improvement: accept consistent bias
• Accountability: measure and adjust to reduce bias
• Research: design to eliminate bias

Sample Size
• Improvement: "just enough" data, small sequential samples
• Accountability: obtain 100% of available, relevant data
• Research: "just in case" data

Flexibility of Hypothesis
• Improvement: flexible hypotheses, changes as learning takes place
• Accountability: no hypothesis
• Research: fixed hypothesis (null hypothesis)

Testing Strategy
• Improvement: sequential tests
• Accountability: no tests
• Research: one large test

Determining if a Change Is an Improvement
• Improvement: run charts or Shewhart control charts (statistical process control)
• Accountability: no change focus (maybe compute a percent change or rank order the results)
• Research: hypothesis, statistical tests (t-test, F-test, chi square), p-values

Confidentiality of the Data
• Improvement: data used only by those involved with improvement
• Accountability: data available for public consumption and review
• Research: research subjects' identities protected
6
“Health Care Economics and Quality”
by Robert Brook, et al. Journal of the American Medical Association vol. 276, no. 6, (1996): 476-480.
Three approaches to research:
• Research for Efficacy (experimental and quasi-experimental designs/clinical trials, p-values)
• Research for Efficiency
• Research for Effectiveness
Research for efficiency and research for effectiveness together make up quality improvement research.
Tin Openers and Dials
• Concept from Carter and Klein (1992)
• Tin openers open up cans of worms
• Dials measure things to make judgements
• Most of the time you need to ask the right
questions before you try to get the right
answers
Graphic Displays of Data
You need to determine not only why you are measuring but also how you are going to analyze the data and then present it. You do have choices!
All too often, however, health care professionals present data in ways that only lead to judgment:
• Aggregated data and descriptive statistics
• Static displays of data (e.g., pie charts, bar charts)
• Performance against a target or goal
Example of Data for Judgment
CMS/HQA Core Measures
Does this tabular display of data help us understand the variation in these measures? Are we getting better, getting worse or demonstrating no change?
[Table of quarterly scores (Q3 04, 4Q 04, Q1 05, Q2 05, YTD 05) for the AMI, CHF, PN and SIP bundles, colour-coded against the state average.]
Legend:
• Better than or equal to state average
• Worse than state average
(Perfect Care Bundles – all aspects of a bundle must be met in order to receive credit)
Control Chart - p-chart
11552 - Vaginal Birth After Cesarean Section (VBAC) Percentage
[p-chart of the monthly VBAC proportion, Jan-01 through Dec-02, on a 0–0.6 scale: plotted over time it is data for improvement, and these data points are all common cause variation. The same values shown only as a static aggregate are data for judgment.]
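For readers who want to see where p-chart limits come from, here is a minimal sketch: the centre line is the overall proportion and the limits are p̄ ± 3·√(p̄(1−p̄)/nᵢ). All counts below are hypothetical, not the VBAC data from the slide.

```python
import math

# Hedged sketch of p-chart limits (hypothetical counts, not the VBAC data).
# Each subgroup i has n[i] cases and x[i] events.
x = [12, 15, 9, 14, 11]    # events per month (assumed)
n = [60, 70, 55, 65, 58]   # cases per month (assumed)

p_bar = sum(x) / sum(n)    # centre line: overall proportion
limits = []
for ni in n:
    sigma = math.sqrt(p_bar * (1 - p_bar) / ni)
    # Proportions cannot fall below zero, so the LCL is floored at 0.
    limits.append((max(0.0, p_bar - 3 * sigma), p_bar + 3 * sigma))

print(f"CL = {p_bar:.3f}")
for ni, (lcl, ucl) in zip(n, limits):
    print(f"n={ni}: LCL={lcl:.3f}  UCL={ucl:.3f}")
```

Note that the limits vary with each subgroup's denominator, which is why p-chart limits are usually not a single flat pair of lines.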
So, how do you view the Three Faces of Performance Measurement? As three separate silos (Research, Judgment, Improvement)? Or as overlapping, interconnected approaches?
Relating the Three Faces of
Performance Measurement to your work
The three faces of performance measurement should not be seen as mutually exclusive silos. This is not an either/or situation.
All three areas must be understood as a system, and individuals need to build skills in all three areas. Organizations need translators who are able to speak the language of each approach.
The problem is that individuals identify with one of the approaches and dismiss the value of the other two.
Dialogue #1
Why are you measuring?
─ How much of your organization’s energy is
aimed at improvement, accountability and/or
research?
─ Does one form of performance measurement
dominate your journey?
─ Does your organization approach measurement
from an enumerative or an analytic perspective?
Do you have a plan to guide your quality
measurement journey?
“The greatest thing in the world is not so
much where we stand, as in what direction we
are going.” Oliver Wendell Holmes
The Quality Measurement Journey
AIM
(How good? By when?)
Concept
Measure
Operational Definitions
Data Collection Plan
Data Collection
Analysis
ACTION
Source: Lloyd, R. Quality Health Care. Jones and Bartlett Publishers, Inc., 2004: 62-64.
The Quality Measurement Journey
AIM – freedom from harm
Concept – reduce patient falls
Measure – IP falls rate (falls per 1,000 patient days)
Operational Definitions – # falls / inpatient days × 1,000
Data Collection Plan – monthly; no sampling; all IP units
Data Collection – unit submits data to RM; RM assembles and sends to QM for analysis
Analysis – control chart (linked to tests of change)
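The falls-rate arithmetic above can be sketched in a few lines; the counts are hypothetical, not real unit data.

```python
# Hedged sketch of the inpatient falls-rate measure defined above,
# expressed per 1,000 patient days. All numbers are hypothetical.
falls = 12           # falls reported this month (assumed)
patient_days = 4800  # total inpatient days this month (assumed)

falls_rate = falls / patient_days * 1000  # falls per 1,000 patient days
print(round(falls_rate, 2))  # → 2.5
```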
The Quality Measurement Journey
AIM
(Why are you measuring?)
Concept
Measure
Operational Definitions
Data Collection Plan
Data Collection
Analysis
ACTION
Moving from a Concept to a Measure
"Hmmm…how do I move from a concept to an actual measure?"
Every concept can have MANY measures. Which one is most appropriate?
Every concept can have many measures
Concept → Potential Measures
• Hand Hygiene: ounces of hand gel used each day; ounces of gel used per staff; percent of staff washing their hands (before & after visiting a patient)
• Medication Errors: percent of errors; number of errors; medication error rate
• VAPs: percent of patients with a VAP; number of VAPs in a month; the number of days without a VAP
Three Types of Measures
• Outcome Measures: Voice of the customer or patient. How
is the system performing? What is the result?
• Process Measures: Voice of the workings of the system.
Are the parts/steps in the system performing as planned?
• Balancing Measures: Looking at a system from different
directions/dimensions. What happened to the system as we
improved the outcome and process measures? (e.g.
unanticipated consequences, other factors influencing
outcome)
Potential Set of Measures for Improvement in the ED
Topic: improve waiting time and patient satisfaction in the ED
Outcome Measures
• Total length of stay in the ED
• Patient satisfaction scores
Process Measures
• Time to registration
• % leaving without being seen
• % of patients receiving discharge materials
• Availability of antibiotics
Balancing Measures
• Volumes
• Patient / staff comments on flow
• Staff satisfaction
• Financials
Balancing Measures: Looking at the
System from Different Dimensions
• Outcome (quality, time)
• Transaction (volume, no. of patients)
• Productivity (cycle time, efficiency, utilisation, flow,
capacity, demand)
• Financial (charges, staff hours, materials)
• Appropriateness (validity, usefulness)
• Patient satisfaction (surveys, customer complaints)
• Staff satisfaction
The Quality Measurement Journey
AIM
(Why are you measuring?)
Concept
Measure
Operational Definitions
Data Collection Plan
Data Collection
Analysis
ACTION
Operational Definitions
“Would you tell me, please, which way I ought to go from here?” asked Alice.
“That depends a good deal on where you want to get to,” said the Cat.
“I don’t much care where,” said Alice.
“Then it doesn’t matter which way you go,” said the Cat.
From Alice in Wonderland, Brimax Books, London, 1990.
An Operational Definition...
… is a description, in quantifiable terms, of what to measure and the steps to follow to measure it consistently.
• It gives communicable meaning to a concept
• Is clear and unambiguous
• Specifies measurement methods and equipment
• Identifies criteria
How do you define the following Concepts?
• World Class Performance
• Alcohol related admissions
• Teenage pregnancy
• Cancer waiting times
• Health inequalities
• Asthma admissions
• Childhood obesity
• Patient education
• Health and wellbeing
• Adding life to years and years to life
• Children's palliative care
• Safe services
• Smoking cessation
• Urgent care
• Delayed discharges
• End of life care
• Falls (with/without injuries)
• Childhood immunisations
• Maternity services
• Engagement with measurement for improvement
• Moving services closer to home
• Breastfeeding
• Ambulatory care
• Access to health in deprived areas
• Diagnostics in the community
• Productive community services
• Vascular inequalities
Exercise: Developing a Set of Measures and
Operational Definitions
• Identify a project you are currently working on or plan to address in the near future.
• Select 1-2 process, 1-2 outcome and 1 balancing measure, and develop a clear operational definition for each measure.
• Use the Measurement Plan and Operational Definitions Worksheets to record your work.
Measurement Plan Worksheet
Measure Name | Type (Process, Outcome or Balancing) | Operational Definition
1. | P |
2. | P |
3. | O |
4. | O |
5. | B |
Source: Lloyd & Scoville 2010
Operational Definition Worksheet
Team name: _____________________________________________________________________________
Date: __________________
Contact person: ____________________________________
WHAT PROCESS DID YOU SELECT?
WHAT SPECIFIC MEASURE DID YOU SELECT FOR THIS PROCESS?
OPERATIONAL DEFINITION
Define the specific components of this measure. Specify the numerator and denominator if it is a percent or a rate. If it is an average, identify the calculation for deriving the average. Include any special equipment needed to capture the data. If it is a score (such as a patient satisfaction score), describe how the score is derived. When a measure reflects concepts such as accuracy, completeness, timeliness, or errors, describe the criteria to be used to determine each.
Source: R. Lloyd. Quality Health Care: A Guide to Developing and Using Indicators. Jones and Bartlett, 2004.
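As a small illustration of the numerator/denominator guidance above, a percent measure reduces to one division; the observation counts here are hypothetical.

```python
# Hedged sketch of turning an operational definition into a number.
# Both counts are hypothetical observation tallies.
numerator = 87     # e.g. opportunities where the defined criteria were met (assumed)
denominator = 116  # all observed opportunities (assumed)

percent = numerator / denominator * 100
print(round(percent, 1))  # → 75.0
```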
Operational Definition Worksheet (cont’d)
Source: R. Lloyd. Quality Health Care: A Guide to Developing and Using Indicators. Jones and Bartlett, 2004.
DATA COLLECTION PLAN
Who is responsible for actually collecting the data?
How often will the data be collected? (e.g., hourly, daily, weekly or monthly?)
What are the data sources (be specific)?
What is to be included or excluded (e.g., only inpatients are to be included in this measure or only stat lab
requests should be tracked).
How will these data be collected?
Manually ______ From a log ______ From an automated system
BASELINE MEASUREMENT
What is the actual baseline number? ______________________________________________
What time period was used to collect the baseline? ___________________________________
TARGET(S) OR GOAL(S) FOR THIS MEASURE
Do you have target(s) or goal(s) for this measure?
Yes ___ No ___
Specify the External target(s) or Goal(s) (specify the number, rate or volume, etc., as well as the source of the
target/goal.)
Specify the Internal target(s) or Goal(s) (specify the number, rate or volume, etc., as well as the source of the
target/goal.)
Dashboard Worksheet
Name of team: _______________________________  Date: _____________
Columns:
• Measure Name: provide a specific name such as medication error rate
• Operational Definition: define the measure in very specific terms; provide the numerator and the denominator if a percentage or rate; indicate what is to be included and excluded; be as clear and unambiguous as possible
• Data Source(s): indicate the sources of the data; these could include medical records, logs, surveys, etc.
• Data Collection: schedule (daily, weekly, monthly or quarterly) and method (automated systems, manual, telephone, etc.)
• Baseline: period and value
• Goals: short term and long term
See Worksheet Packet
Source: R. Lloyd. Quality Health Care: A Guide to Developing and Using Indicators. Jones and Bartlett, 2004.
NON-SPECIFIC CHEST PAIN PATHWAY MEASUREMENT PLAN
Source: R. Lloyd. Quality Health Care: A Guide to Developing and Using Indicators. Jones and Bartlett, 2004.

Measure 1: Percent of patients who have MI or Unstable Angina as diagnosis
• Operational definition: Numerator = patients entered into the NSCP path who have Acute MI or Unstable Angina as the discharge diagnosis; Denominator = all patients entered into the NSCP path
• Data sources: medical records; Midas; variance tracking form
• Data collection: discharge diagnosis will be identified for all patients entered into the NSCP pathway; QA-UR will retrospectively review charts of all patients entered into the NSCP pathway, and data will be entered into the MIDAS system
• Baseline: currently collecting baseline data; baseline will be completed by end of 1st Q 2010
• Goals: since this is essentially a descriptive indicator of process volume, goals are not appropriate

Measure 2: Number of patients who are admitted to the hospital or seen in an ED due to chest pain within one week of when we discharged them
• Operational definition: a patient that we saw in our ED reports during the call-back interview that they have been admitted or seen in an ED (ours or some other ED) for chest pain during the past week
• Data sources: all patients who have been managed within the NSCP protocol throughout their hospital stay
• Data collection: patients will be contacted by phone one week after discharge; the call-back interview will be the method
• Baseline: currently collecting baseline data; baseline will be completed by end of 1st Q 2010
• Goals: ultimately the goal is to have no patients admitted or seen in the ED within a week after discharge; the baseline will be used to help establish initial goals

Measure 3: Total hospital costs per one cardiac diagnosis
• Operational definition: Numerator = total costs per quarter for hospital care of NSCP pathway patients; Denominator = number of patients per quarter entered into the NSCP pathway with a discharge diagnosis of MI or Unstable Angina
• Data sources: finance; chart review
• Data collection: can be calculated every three months from financial and clinical data already being collected
• Baseline: calendar year 2010; will be computed in June 2010
• Goals: the initial goal will be to reduce the baseline by 5% within the first six months of initiating the project

See Worksheet Packet
Now that you have selected and defined your
measures, it is time to head out, cast your net and
actually gather some data!
Key Data Collection Strategies
Stratification
• Separation & classification of
data according to
predetermined categories
• Designed to discover patterns in
the data
• For example, are there
differences by shift, time of day,
day of week, severity of
patients, age, gender or type of
procedure?
• Consider stratification BEFORE
you collect the data
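The stratification idea above can be sketched as a simple group-by: tag each observation with its predetermined category before aggregating. The shifts and error counts here are hypothetical.

```python
from collections import defaultdict

# Hedged sketch of stratification by shift (hypothetical data):
# each observation is (shift, number of events recorded).
observations = [
    ("day", 2), ("night", 5), ("day", 1),
    ("evening", 3), ("night", 4), ("day", 2),
]

by_shift = defaultdict(int)
for shift, events in observations:
    by_shift[shift] += events  # aggregate within each stratum

for shift in sorted(by_shift):
    print(shift, by_shift[shift])
```

Deciding on the categories before collection, as the slide urges, is what makes this aggregation possible afterwards.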
What does a stratification problem look like?
Measure: running calorie total
[Run chart of total daily calories, 3/1/2008 through 9/20/2008, plotted on a 0–3,500 scale. There are two distinct processes at work here!]
© Richard Scoville & I.H.I.
The data should be divided into two stratification levels
[The same calorie run chart, 3/1/2008 through 9/20/2008, now stratified into two series: travel days and home days.]
What factors might influence your process? Track them in your data to provide insights about variation and how to change the process!
Common Stratification Levels
• Day of week
• Shift
• Severity of patients
• Gender
• Type of procedure
• Payer class
• Type of visit
• Unit
• Age
What stratification levels are appropriate for your data?
Sampling
When you can’t gather data on the entire population due to time, logistics or resources, it is time to consider sampling.
The Relationships Between a Sample and the Population
[Distribution curve of the population, spanning negative to positive outcomes.]
What would a “good” sample look like?
Source: R. Lloyd
©Copyright 2010 Institute for Healthcare Improvement/R. Lloyd
23
The Relationships Between a Sample and the Population
[Curve A: a representative sample overlaid on the population distribution, spanning negative to positive outcomes.]
A representative sample has the same shape and location as the total population but has fewer observations (curve A).
Randomization is key – every element of the population has the same chance of being selected.
Source: R. Lloyd
Sampling Bias
[Curve B: a positively biased sample; curve C: a negatively biased sample; both overlaid on the population distribution.]
A sample improperly pulled could result in a positive sampling bias (curve B) or a negative sampling bias (curve C). Estimates of the population based on a biased sample will be incorrect.
Source: R. Lloyd
Sampling Methods
Probability Sampling Methods
• Simple random sampling
• Stratified random sampling
• Stratified proportional random sampling
• Systematic sampling
Non-probability Sampling Methods
• Cluster sampling
• Convenience sampling
• Quota sampling
• Judgment sampling
Source: R. Lloyd. Quality Health Care: A Guide to Developing and Using Indicators. Jones and Bartlett Publishers, 2004.
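A minimal sketch of the first option, simple random sampling, assuming a hypothetical list of 200 record IDs; every record has the same chance of selection.

```python
import random

# Hedged sketch of simple random sampling (record IDs are hypothetical).
random.seed(42)                          # fixed seed so the sketch is repeatable
population = list(range(1, 201))         # e.g. 200 eligible patient records
sample = random.sample(population, 20)   # draw 20 without replacement
print(sorted(sample))
```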
Sampling Options
Enumerative approaches:
• Simple random sampling: a sample drawn at random from the whole population
• Stratified proportional random sampling: samples drawn from each stratum (e.g., Medical, Surgical, OB, Peds) in proportion to its share of the population
Analytic approach:
• Judgment sampling: small samples drawn each period (e.g., monthly from Jan through June)
Judgment Sampling
Source: R. Lloyd. Quality Health Care: A Guide to Developing and Using Indicators.
Jones and Bartlett Publishers, 2004, 75-94.
Especially useful for PDSA testing. Someone with
process knowledge selects items to be sampled.
Characteristics of a Judgment Sample:
• Include a wide range of conditions
• Selection criteria may change as
understanding increases
• Successive small samples instead
of one large sample
Judgment Sampling in Action!
Well the night shift
is totally different
from the day shift!
It always seems
pretty calm to me
here in the
afternoon.
It is absolutely
nuts here
between 8 and
10 AM!
How often and for how long do
you need to collect data?
• Frequency – the period of time in which you collect data (i.e., how often will
you dip into the process to see the variation that exists?)
• Moment by moment (continuous monitoring)?
• Every hour?
• Every day? Once a week? Once a month?
• Duration – how long you need to continue collecting data
• Do you collect data on an on-going basis and not end until the measure is always at the specified target or goal?
• Do you conduct periodic audits?
• Do you just collect data at a single point in time to “check the pulse of the process”?
• Do you need to pull a sample, or do you take every occurrence of the data (i.e., collect data for the total population)?
The Frequency of Data Collection:
The Story of Flight #1549, January 15, 2009
Flight #1549, January 15, 2009
The Question:
Does the airline need data at a fixed point in time or data over time?
• Static view (Where are we right now?): “Bird hit plane!”
• Dynamic view (What happened over time?)
It depends on the question you are trying to answer!
The idea for this example was proposed by Alastair Philip, Quality Improvement Scotland, 2010
Exercise: Data Collection Strategies
(frequency, duration and sampling)
• This exercise has been designed to test your knowledge of and skill with developing a data collection plan.
• In the table on the next page is a list of eight measures.
• For each measure identify:
  – The frequency and duration of data collection.
  – Whether you would pull a sample or collect all the data on each measure.
  – If you would pull a sample of data, indicate what specific type of sample you would pull.
• Spend a few minutes working on your own, then compare your ideas with others at your table.
Exercise: Data Collection Strategies
(frequency, duration and sampling)
The need to know, the criticality of the measure and the amount of data required to reach a conclusion should drive the decisions about frequency, duration and whether you need to sample.
For each measure, specify the frequency and duration, and whether you would pull a sample or take every occurrence:
• Vital signs for a patient connected to full telemetry in the ICU
• Blood pressure (systolic and diastolic) to determine if the newly prescribed medication and dosage are having the desired impact
• Percent compliance with a hand hygiene protocol
• Cholesterol levels (LDL, HDL, triglycerides) in a patient recently placed on new statin medication
• Patient satisfaction scores on the inpatient units
• Central line blood stream infection rate
• Percent of inpatients readmitted within 30 days for the same diagnosis
• Percent of surgical patients given prophylactic antibiotics within 1 hour prior to surgical incision
The Quality Measurement Journey
AIM
(Why are you measuring?)
Concept
Measure
Operational Definitions
Data Collection Plan
Data Collection
Analysis
ACTION
You have performance
data.
Now what the heck do
you do with it?
Old Way versus the New Way
• Old Way (Quality Assurance): performance is judged against a threshold; action is taken only where results fall on the wrong side of the threshold, and no action is taken elsewhere along the continuum from worse to better quality.
• New Way (Quality Improvement): action is taken on all occurrences, across the whole continuum from worse to better quality.
Understanding Variation Statistically
“If I had to reduce
my message for
management to just
a few words, I’d say
it all had to do with
reducing variation.”
W. Edwards Deming
The Problem
Aggregated data presented in tabular formats or with summary statistics will not help you measure the impact of process improvement efforts. Aggregated data can only lead to judgment, not to improvement.
Thin-Slicing
“Thin-slicing refers to the ability of our
unconscious to find patterns in situations
and behavior based on very narrow slices
of experience.” Malcolm Gladwell, blink, page 23
When most people look at data they thin-slice it. That
is, they basically use their unconscious to find patterns
and trends in the data. They look for extremely high or
low data points and then make conclusions about
performance based on limited data. R. Lloyd
20-20 Hindsight!
“Managing a process on the basis of monthly averages is like trying to drive a car by looking in the rear view mirror.”
Source: D. Wheeler, Understanding Variation, Knoxville, TN, 1993.
Average CABG Mortality
Before and After the Implementation of a New Protocol
[Bar chart: percent mortality falls from 5.0% at Time 1 to 4.0% at Time 2.]
WOW! A “significant drop” from 5% to 4%
Conclusion – the protocol was a success! A 20% drop in the average mortality!
Average CABG Mortality
Before and After the Implementation of a New Protocol – A Second Look at the Data
[Control chart of percent mortality over 24 months, with CL = 4.0, UCL = 6.0 and LCL = 2.0, and the point at which the protocol was implemented marked.]
Now what do you conclude about the impact of the protocol?
67
©Copyright 2010 Institute for Healthcare Improvement/R. Lloyd
The average of a set of numbers can be created by many different distributions.

[Figure: several different data patterns plotted over time, all sharing the same centerline (the mean, X).]
If you don’t understand the variation that lives
in your data, you will be tempted to ...
• Deny the data (It doesn’t fit my view of reality!)
• See trends where there are no trends
• Try to explain natural variation as special
events
• Blame and give credit to people for things over
which they have no control
• Distort the process that produced the data
• Kill the messenger!
The key to understanding quality performance, therefore, lies in understanding variation over time, not in preparing aggregated data and calculating summary statistics!
Dr. Walter A. Shewhart
Shewhart, Economic Control of Quality of Manufactured Product, 1931

"A phenomenon will be said to be controlled when, through the use of past experience, we can predict, at least within limits, how the phenomenon may be expected to vary in the future."
"What is the variation in one system over time?" Walter A. Shewhart, early 1920s, Bell Laboratories

Every process displays variation:
• Controlled variation: a stable, consistent pattern of variation; "chance," constant causes.
• Special cause variation: "assignable" causes; the pattern changes over time.

[Figure: the dynamic view plots the data over time between the UCL and LCL; the static view collapses the same data into a single distribution.]
Types of Variation

Common Cause Variation
• Is inherent in the design of the process
• Is due to regular, natural or ordinary causes
• Affects all the outcomes of a process
• Results in a "stable" process that is predictable
• Also known as random or unassignable variation

Special Cause Variation
• Is due to irregular or unnatural causes that are not inherent in the design of the process
• Affects some, but not necessarily all aspects of the process
• Results in an "unstable" process that is not predictable
• Also known as non-random or assignable variation
Point …

Common Cause does not mean "Good Variation." It only means that the process is stable and predictable. For example, if a patient's systolic blood pressure averaged around 165 and was usually between 160 and 170 mmHg, this might be stable and predictable but completely unacceptable.

Similarly, Special Cause variation should not be viewed as "Bad Variation." You could have a special cause that represents a very good result (e.g., a low turnaround time), which you would want to emulate. Special Cause merely means that the process is unstable and unpredictable.

You have to decide if the output of the process is acceptable!
Decision Tree for Managing Variation

[Figure: decision tree for managing variation. © Richard Scoville & I.H.I.]
Statistically relevant variation in "real" time

[Figure: cumulative mortality monitoring chart. The plot goes up when there is a death and down when a patient survives, and can never fall below zero; an alert is signalled on the chart.]
Responding to alerts in real time

[Flowchart: continuous background monitoring → Alert!! → initial analysis → central clinical review → local, detailed clinical review. Possible outcomes: no issue, artefact, or improvement plan.]
Special Cause Variation and Common Cause Variation: Examples

Normal Sinus Rhythm (a.k.a. Common Cause Variation)
Atrial Flutter Rhythm (a.k.a. Special Cause Variation)

[Two control charts are shown as examples: Percent of Cesarean Sections Performed, Dec 95 - Jun 99, monthly (UCL = 27.7018, CL = 18.0246, LCL = 8.3473), and Number of Medication Errors per 1000 Patient Days by week (UCL = 13.39461, CL = 4.42048, LCL = 0.00000). Further examples noted on the slide: deaths from accidental causes (deaths/month) and referrals per week (referral rate).]
Attributes of a Leader Who Understands Variation

• Leaders understand the different ways that variation is viewed.
• They explain changes in terms of common causes and special causes.
• They use graphical methods to learn from data and expect others to consider variation in their decisions and actions.
• They understand the concept of stable and unstable processes and the potential losses due to tampering.
• Capability of a process or system is understood before changes are attempted.
Dialogue #2
Common and Special Causes of Variation

• Select several measures your organization tracks on a regular basis.
• Do you and the leaders of your organization evaluate these measures according to the criteria for common and special causes of variation?
• If not, what criteria do you use to determine if data are improving or getting worse?

[Charts repeated from the previous slide: Percent of Cesarean Sections Performed, Dec 95 - Jun 99, and Number of Medication Errors per 1000 Patient Days.]
Understanding Variation Statistically

Unplanned Returns to ED w/in 72 Hours (u chart)

Month:   M     A     M     J     J     A     S     O     N     D     J     F     M     A     M     J     J     A     S
ED/100:  41.78 43.89 39.86 40.03 38.01 43.43 39.21 41.90 41.78 43.00 39.66 40.03 48.21 43.89 39.86 36.21 41.78 43.89 31.45
Returns: 17    26    13    16    24    27    19    14    33    20    17    22    29    17    36    19    22    24    22

[u chart: rate per 100 ED patients by month; UCL = 0.88, Mean = 0.54, LCL = 0.19.]

DYNAMIC VIEW: plot data over time — the Run Chart and Control Chart of Statistical Process Control (SPC).
STATIC VIEW: descriptive statistics — Mean, Median & Mode; Minimum/Maximum/Range; Standard Deviation; Bar graphs/Pie charts.
Elements of a Run Chart

The centerline (CL) on a Run Chart is the Median.

[Run chart: Pounds of Red Bag Waste over 29 points in time; Median = 4.610.]

Four simple run rules are used to determine if special cause variation is present.
What is a Run?
• One or more consecutive data points on the same side
of the Median
• Do not include data points that fall on the Median
How do we count the number of runs?
• Draw a circle around each run and count the
number of circles you have drawn
• Count the number of times the sequence of data
points crosses the Median and add “1”
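The counting procedure above can be sketched in code. This is an illustrative sketch, not from the slides: the function name `count_runs` and the example data are hypothetical, and the logic follows the crossings-plus-one rule described above.

```python
def count_runs(data, median):
    """Count runs: skip points on the median, then count
    median crossings and add 1 (the rule from the slide)."""
    # Useful observations: points not on the median
    useful = [x for x in data if x != median]
    if not useful:
        return 0
    crossings = 0
    for prev, curr in zip(useful, useful[1:]):
        # A crossing occurs when consecutive useful points
        # fall on opposite sides of the median
        if (prev > median) != (curr > median):
            crossings += 1
    return crossings + 1

# Example: 8 points around a median of 4.6 (one point on the median)
data = [5.0, 5.2, 4.1, 4.3, 4.6, 5.1, 3.9, 4.8]
print(count_runs(data, median=4.6))  # → 5
```

Circling the runs by hand and the crossings-plus-one count give the same answer, which is a useful cross-check when teaching the rule.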
Run Chart: Medical Waste
How many runs are on this chart?

[Run chart: Pounds of Red Bag Waste, 29 points; Median = 4.610. Some points fall on the median — don't count these when counting the number of runs.]
Run Chart: Medical Waste
How many runs are on this chart? Answer: 14 runs.

[Same run chart with each run circled; points on the median are not counted when counting the number of runs.]
Rules to Identify non-random patterns in the
data displayed on a Run Chart
• Rule #1: A shift in the process, or too many data points in
a run (6 or more consecutive points above or below the
median)
• Rule #2: A trend (5 or more consecutive points all
increasing or decreasing)
• Rule #3: Too many or too few runs (use a table to
determine this one)
• Rule #4: An “astronomical” data point
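Rules #1 and #2 lend themselves to a short sketch. The helper names below are illustrative only; the thresholds (6 for a shift, 5 for a trend) come from the rules above, and the handling of median points and ties follows the run-chart conventions described later in this deck.

```python
def has_shift(data, median, n=6):
    """Rule #1: n or more consecutive points on the same side
    of the median (points on the median are skipped)."""
    run, side = 0, None
    for x in data:
        if x == median:
            continue  # points on the median neither make nor break a run
        s = x > median
        run = run + 1 if s == side else 1
        side = s
        if run >= n:
            return True
    return False

def has_trend(data, n=5):
    """Rule #2: n or more consecutive points all increasing
    (or all decreasing); ties neither add to nor cancel a trend."""
    run, direction = 1, 0
    for prev, curr in zip(data, data[1:]):
        if curr == prev:
            continue
        d = 1 if curr > prev else -1
        run = run + 1 if d == direction else 2  # a new direction starts a 2-point trend
        direction = d
        if run >= n:
            return True
    return False
```

Rule #3 needs the table on a later slide, and Rule #4 (an astronomical point) is a judgment call, so neither is coded here.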
Non-Random Rules for Run Charts

[Figure: four panels illustrating the rules — a shift (6 or more points), a trend (5 or more points), too many or too few runs, and an astronomical data point.]

Source: The Data Guide by L. Provost and S. Murray, Austin, Texas, February, 2007: p3-10.
This is NOT a trend!
Rule #3: Too many or too few runs
• If our change is successful, we will have
disrupted the previous process
• How many runs should we expect if the
values all come from a stable process?
• If there are fewer runs (or more), we have
evidence that our change has made a difference
Rule #3: Too few or too many runs
Use this table by first calculating the number of "useful observations" in your data set. This is
done by subtracting the number of data points on the median from the total number of data
points. Then, find this number in the first column. The lower number of runs is found in the
second column. The upper number of runs can be found in the third column. If the number of
runs in your data falls below the lower limit or above the upper limit then this is a signal of a
special cause.
# of Useful Observations | Lower Number of Runs | Upper Number of Runs
15 | 5  | 12
16 | 5  | 13
17 | 5  | 13
18 | 6  | 14
19 | 6  | 15
20 | 6  | 16
21 | 7  | 16
22 | 7  | 17
23 | 7  | 17
24 | 8  | 18
25 | 8  | 18
26 | 9  | 19
27 | 10 | 19
28 | 10 | 20
29 | 10 | 20
30 | 11 | 21

(Example annotation from the slide: 30 total data points with 2 on the median gives 28 useful observations.)

Source: Swed, F. and Eisenhart, C. (1943). "Tables for Testing Randomness of Grouping in a Sequence of Alternatives." Annals of Mathematical Statistics, Vol. XIV, pp. 66-87, Tables II and III.
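Encoded as a lookup table, Rule #3 can be checked mechanically. A sketch: the limits below are transcribed from the slide's table, so the exact values should be verified against the Swed and Eisenhart source before use, and the function name is illustrative.

```python
# (lower, upper) expected number of runs, keyed by useful observations,
# transcribed from the Swed & Eisenhart table shown on the slide
RUN_LIMITS = {
    15: (5, 12), 16: (5, 13), 17: (5, 13), 18: (6, 14),
    19: (6, 15), 20: (6, 16), 21: (7, 16), 22: (7, 17),
    23: (7, 17), 24: (8, 18), 25: (8, 18), 26: (9, 19),
    27: (10, 19), 28: (10, 20), 29: (10, 20), 30: (11, 21),
}

def too_few_or_too_many_runs(n_runs, total_points, points_on_median):
    """Rule #3: signal a special cause when the observed number of runs
    falls outside the tabled limits for the useful observations."""
    useful = total_points - points_on_median  # subtract points on the median
    lower, upper = RUN_LIMITS[useful]
    return n_runs < lower or n_runs > upper

# The slide's example: 30 total points with 2 on the median
# gives 28 useful observations, so 10 to 20 runs is expected
print(too_few_or_too_many_runs(14, 30, 2))  # → False (14 runs is within limits)
```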
Rule #4: An Astronomical Data Point

25 Men and a Test

[Run chart: test scores for 25 individuals, scale 0 to 25; one point stands far above the rest. What do you think about this data point? Is it astronomical?]
Run Chart Exercise
Now it is your turn!
• Interpret the following run charts
• Is the median in the correct location?
• What do you learn by applying the run chart rules?
1. % of patients with Length of Stay shorter than six days
2. Average Length of Stay DRG373
3. Number of Acute Surgical Procedures
Source: Peter Kammerlind, (Peter.Kammerlind@lj.se), Project Leader
Jönköping County Council, Jonkoping, Sweden.
% of patients with Length of Stay shorter than six days
(Original Swedish title: percent of patients with a stay < 6 days for primary elective knee arthroplasty; operation day = day 1)

[Run chart: percent of patients by month over 18 months, scale 0 to 90%.]

Source: Peter Kammerlind, (Peter.Kammerlind@lj.se), Project Leader, Jönköping County Council, Jönköping, Sweden.
Average Length of Stay DRG 373

[Run chart: average length of stay in care days per month, January 2005 through September 2006, scale 0 to 3.5 days.]

Source: Peter Kammerlind, (Peter.Kammerlind@lj.se), Project Leader, Jönköping County Council, Jönköping, Sweden.
Number of Acute Surgical Procedures

[Run chart: number of operations per month over 20 months, scale 80 to 160.]

Source: Peter Kammerlind, (Peter.Kammerlind@lj.se), Project Leader, Jönköping County Council, Jönköping, Sweden.
Why are Control Charts preferred over Run Charts?

Because Control Charts…
1. Are more sensitive than run charts. A run chart cannot detect special causes that are due to point-to-point variation (median versus the mean), and tests for detecting special causes can be used with control charts.
2. Have the added feature of control limits, which allow us to determine if the process is stable (common cause variation) or not stable (special cause variation).
3. Can be used to define process capability.
4. Allow us to more accurately predict process behavior and future performance.
Elements of a Control Chart

[Control chart: Number of Complaints by month, Jan01 through Nov02. The centerline is the Mean (X), CL = 29.250; the Upper Control Limit (UCL = 44.855) and Lower Control Limit (LCL = 13.645) bound zones C, B and A on either side of the centerline. A point beyond the UCL is an indication of a special cause.]
How do we classify Variation on control charts?

Common Cause Variation
• Is inherent in the design of the process
• Is due to regular, natural or ordinary causes
• Affects all the outcomes of a process
• Results in a "stable" process that is predictable
• Also known as random or unassignable variation

Special Cause Variation
• Is due to irregular or unnatural causes that are not inherent in the design of the process
• Affects some, but not necessarily all aspects of the process
• Results in an "unstable" process that is not predictable
• Also known as non-random or assignable variation
How do I use the Zones on a control chart?

[Diagram: the area between the centerline (X, the CL) and each control limit is divided into three equal bands. Zone C runs from the centerline to ±1 sigma limit (SL), Zone B from ±1 to ±2 SL, and Zone A from ±2 to ±3 SL; the UCL sits at +3 SL and the LCL at -3 SL.]
Rules for Special Causes on Control Charts

There are many rules to detect special cause. The following five rules are recommended for general use and will meet most applications of control charts in healthcare.

Rule #1: 1 point outside the +/- 3 sigma limits
Rule #2: 8 consecutive points above (or below) the centerline
Rule #3: 6 or more consecutive points steadily increasing or decreasing
Rule #4: 2 out of 3 successive points in Zone A or beyond
Rule #5: 15 consecutive points in Zone C on either side of the centerline
Rules for Detecting Special Causes

[Figure: five small charts illustrating Rules 1 through 5.]
Notes on Special Cause Rules

Rule #1: 1 point outside the +/- 3 sigma limits
A point exactly on a control limit is not considered outside the limit. When there is not a lower or upper control limit, Rule 1 does not apply to the side missing the limit.

Rule #2: 8 consecutive points above (or below) the centerline
A point exactly on the centerline does not cancel or count towards a shift.

Rule #3: 6 or more consecutive points steadily increasing or decreasing
Ties between two consecutive points do not cancel or add to a trend. When control charts have varying limits due to varying numbers of measurements within subgroups, Rule #3 should not be applied.

Rule #4: 2 out of 3 successive points in Zone A or beyond
When there is not a lower or upper control limit, Rule 4 does not apply to the side missing a limit.

Rule #5: 15 consecutive points in Zone C on either side of the centerline
This is known as "hugging the centerline."
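A sketch of how Rules #1, #2 and #4 might be checked programmatically, given a centerline and a sigma value; Rules #3 and #5 are left out for brevity, and the function name is illustrative rather than a standard API.

```python
def special_causes(data, cl, sigma):
    """Flag control-chart special causes (Rules 1, 2 and 4 from the
    slide). Returns the set of rule numbers that fire."""
    ucl, lcl = cl + 3 * sigma, cl - 3 * sigma
    signals = set()
    # Rule #1: 1 point outside the +/- 3 sigma limits
    # (a point exactly on a limit is not outside it)
    if any(x > ucl or x < lcl for x in data):
        signals.add(1)
    # Rule #2: 8 consecutive points above (or below) the centerline;
    # a point exactly on the centerline neither cancels nor counts
    run, side = 0, None
    for x in data:
        if x == cl:
            continue
        s = x > cl
        run = run + 1 if s == side else 1
        side = s
        if run >= 8:
            signals.add(2)
    # Rule #4: 2 out of 3 successive points in Zone A or beyond,
    # i.e. more than 2 sigma from the centerline on the same side
    for window in zip(data, data[1:], data[2:]):
        above = sum(1 for x in window if x > cl + 2 * sigma)
        below = sum(1 for x in window if x < cl - 2 * sigma)
        if above >= 2 or below >= 2:
            signals.add(4)
    return signals
```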
Using the Zones on a control chart

[Diagram repeated: Zones C, B and A between the centerline and the ±1, ±2 and ±3 sigma limits, with the UCL at +3 SL and the LCL at -3 SL.]
Applying Control Chart Rules: You Make the Call!
Is there a Special Cause on this chart?

Unplanned Returns to ED w/in 72 Hours (u chart)

Month:   M     A     M     J     J     A     S     O     N     D     J     F     M     A     M     J     J     A     S
ED/100:  41.78 43.89 39.86 40.03 38.01 43.43 39.21 41.90 41.78 43.00 39.66 40.03 48.21 43.89 39.86 36.21 41.78 43.89 31.45
Returns: 17    26    13    16    24    27    19    14    33    20    17    22    29    17    36    19    22    24    22

[u chart: rate per 100 ED patients over 19 months; UCL = 0.88, Mean = 0.54, LCL = 0.19.]
PERCENT PATIENTS C/O CHEST PAIN SEEN BY CARDIOLOGIST WITHIN 10 MINUTES OF ARRIVAL TO ED
EXAMPLE CHART — What special cause is on this chart?

[p-chart: percent of patients seen within 10 minutes over 24 weeks, October 2003 - March 2004, possible range 0-100%; average = 81.5%, with UCL and LCL shown. XYZ Medical Center Performance Improvement Report, March 25, 2004. Fictitious data for educational purposes.]

Target Goal / Desired Direction: INCREASE in the PERCENT of patients c/o chest pain seen by cardiologist within 10 minutes of arrival to Emergency Department.
Interpretation: Current performance shows (desirable) upward trend.
Number of Patient Complaints by Month (XmR chart)
Are there any special causes present? If so, what are they?

[XmR chart: Number of Complaints, Jan01 through Nov02; UCL = 44.855, CL = 29.250, LCL = 13.645, with zones A, B and C marked.]
Appropriate Management Response to Common & Special Causes of Variation

Is the process stable? | Type of variation | Right Choice | Wrong Choice | Consequences of making the wrong choice
YES | Only Common | Change the process | Treat normal variation as a special cause (tampering) | Increased variation!
NO | Special + Common | Investigate the origin of the special cause | Change the process | Wasted resources!
SPC Sphere of Knowledge

[Cartoon: "I know the right chart has to be hiding in here somewhere!"]
The choice of a Control Chart depends on the Type of Data you have collected

• Variables Data
• Attributes Data
  - Defectives (Nonconforming Units): occurrences plus non-occurrences
  - Defects (Nonconformities): occurrences only
56
There Are 7 Basic Control Charts
Variables Charts
Attributes Charts
• X & R chart
• p-chart
(proportion or percent of
defectives)
(average & range chart)
• X & S chart
• np-chart
(average & SD chart)
(number of defectives)
• XmR chart
• c-chart
(individuals & moving range chart)
(number of defects)
• u-chart
(defect rate)
©Copyright 2010 Institute for Healthcare Improvement/R. Lloyd
The Control Chart Decision Tree
Source: Carey, R. and Lloyd, R. Measuring Quality Improvement in Healthcare: A Guide to Statistical Process Control Applications. ASQ Press, Milwaukee, WI, 2001.

Decide on the type of data:

Variables Data — more than one observation per subgroup?
• No → XmR (Individual Measurement)
• Yes → fewer than 10 observations per subgroup?
  - Yes → X bar & R (Average and Range)
  - No → X bar & S (Average and Standard Deviation)

Attributes Data — occurrences & non-occurrences?
• No → is there an equal area of opportunity?
  - Yes → c-chart (the number of Defects)
  - No → u-chart (the Defect Rate)
• Yes → are the subgroups of equal size?
  - Yes → np-chart (the number of Defective Units)
  - No → p-chart (the percent of Defective Units)
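The decision tree translates naturally into a chain of questions. A minimal sketch, with invented boolean parameter names standing in for each branch point in the tree:

```python
def choose_chart(variables_data, multiple_obs_per_subgroup=False,
                 fewer_than_10_obs=True,
                 occurrences_and_nonoccurrences=False,
                 equal_area_of_opportunity=False,
                 equal_subgroup_sizes=False):
    """Walk the Carey & Lloyd control chart decision tree."""
    if variables_data:
        if not multiple_obs_per_subgroup:
            return "XmR"        # individual measurements
        if fewer_than_10_obs:
            return "X-bar & R"  # average and range
        return "X-bar & S"      # average and standard deviation
    # Attributes data
    if not occurrences_and_nonoccurrences:
        # counting defects only
        return "c-chart" if equal_area_of_opportunity else "u-chart"
    # counting defectives (occurrences plus non-occurrences)
    return "np-chart" if equal_subgroup_sizes else "p-chart"

# Weekly medication orders, each classified defective or not,
# with a varying number of orders per week:
print(choose_chart(variables_data=False,
                   occurrences_and_nonoccurrences=True,
                   equal_subgroup_sizes=False))  # → p-chart
```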
Key Terms for Control Chart Selection

Subgroup
• How you organize your data (e.g., by day, week or month)
• The label of your horizontal axis
• Can be patients in chronological order
• Applies to all the charts

Observation
• The actual value (data) you collect
• The label of your vertical axis
• May be single or multiple data points
• Can be of equal or unequal sizes
• Applies to all the charts

Area of Opportunity
• Applies to all attributes or counts charts
• Defines the area or frame in which a defective or defect can occur
• Can be of equal or unequal sizes
The choice of a control chart depends on the question you are trying to answer!

Type of Chart | Medication Production Analysis
X bar & S Chart | What is the TAT for a daily sample of 25 medication orders?
Individuals Chart (XmR) | How many medication orders do we process each week?
C-Chart | Using a sample of 100 medication orders each week, how many errors (defects) are observed?
U-Chart | Out of all medication orders each week, how many errors (defects) are observed?
P-Chart | For all medication orders each week, what percentage have 1 or more errors (i.e., are defective)?
The Quality Measurement Journey
AIM
(Why are you measuring?)
Concept
Measure
Operational Definitions
Data Collection Plan
Data Collection
Analysis
ACTION
PDSA Sphere of Knowledge

[Cartoon: "Okay, I found the right chart in here, now all I have to do is to find the right improvement strategy!"]
A Simple Improvement Plan

1. Which process do you want to improve or redesign?
2. Does the process contain non-random patterns or special causes?
3. How do you plan on actually making improvements? What strategies do you plan to follow to make things better?
4. What effect (if any) did your plan have on the process performance?

Run & Control Charts will help you answer Questions 2 & 4.
YOU need to figure out the answers to Questions 1 & 3.
Wait Time to See the Doctor (XmR Chart)
16 Patients in February and 16 Patients in April

[XmR chart: minutes waited per patient (points 1-32). The Baseline Period (February) has UCL = 15.3, CL = 10.7 and LCL = 6.1, with zones A, B and C marked; an Intervention separates February from April. Where will the process go?]

Freeze the Control Limits and Centerline, extend them, and compare the new process performance to these reference lines to determine if a special cause has been introduced as a result of the intervention.
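The slide does not show how the baseline limits were computed, but XmR limits are conventionally derived from the average moving range (centerline ± 2.66 × average moving range, where 2.66 is the standard individuals-chart constant). A sketch under that assumption, with hypothetical baseline data:

```python
def xmr_limits(baseline):
    """Compute the XmR centerline and control limits from baseline
    data using the standard individuals-chart constant 2.66."""
    mean = sum(baseline) / len(baseline)
    # Moving ranges: absolute differences between consecutive points
    mrs = [abs(b - a) for a, b in zip(baseline, baseline[1:])]
    mr_bar = sum(mrs) / len(mrs)
    return mean - 2.66 * mr_bar, mean, mean + 2.66 * mr_bar

# Hypothetical February baseline wait times (minutes)
lcl, cl, ucl = xmr_limits([12, 10, 11, 9, 13, 10, 12, 11])
```

"Freezing" the limits then simply means holding `lcl`, `cl` and `ucl` fixed and plotting the post-intervention points against them rather than recalculating.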
Wait Time to See the Doctor (XmR Chart)
16 Patients in February and 16 Patients in April

Freeze the Control Limits and compare the new process performance to the baseline using the UCL, LCL and CL from the baseline period as reference lines.

[XmR chart: baseline (February) UCL = 15.3, CL = 10.7, LCL = 6.1. After the intervention, the April data show a run of 8 or more data points on one side of the centerline — a Special Cause is detected, reflecting a shift in the process.]
Wait Time to See the Doctor (XmR Chart)
16 Patients in February and 16 Patients in April

Make new control limits for the process to show the improvement.

[XmR chart: the baseline limits (UCL = 15.3, CL = 10.7, LCL = 6.1) apply to February; new limits are calculated from the post-intervention April data.]
“Change is possible if
we have the desire and
commitment to make it
happen.”
Mohandas Gandhi
The Primary Drivers of Improvement

• Will: having the Will (desire) to change the current state to one that is better.
• Ideas: developing Ideas that will contribute to making processes and outcomes better.
• Execution: having the capacity to apply CQI theories, tools and techniques that enable the Execution of the ideas.
How prepared is your Organization?

Key Components* (self-assessment: Low / Medium / High for each)
• Will (to change)
• Ideas
• Execution

*All three components MUST be viewed together. Focusing on one or even two of the components will guarantee suboptimized performance. Systems thinking lies at the heart of CQI!
The Sequence of Improvement

[Diagram: repeated PDSA cycles (Plan, Do, Study, Act) move a change from theory and prediction, to developing a change, testing a change, testing under a variety of conditions, implementing a change, making it part of routine operations, and finally sustaining improvements and spreading changes to other locations.]
Summary of Key Points

• Understand why you are measuring
  - Improvement, Accountability, Research
• Build skills in the area of data collection
  - Operational definitions
  - Stratification & Sampling (probability versus non-probability)
• Build knowledge on the nature of variation
  - Understanding variation conceptually
  - Understanding variation statistically
• Know when and how to use Run & Control Charts
  - Numerous types of charts (based on the type of data)
  - The mean is the centerline (not the median, which is used on a run chart)
  - Upper and Lower Control Limits are sigma limits
  - Rules to identify special and common causes of variation
  - Linking the charts to improvement strategies
“Quality begins
with intent, which
is fixed by
management.”
W. E. Deming, Out of the Crisis, p.5
Thanks for joining us!
Please contact us if you have any
questions:
Robert Lloyd
rlloyd@ihi.org
Mary Seddon
mary.seddon@middlemore.co.nz
Richard Hamblin
richard.hamblin@hqsc.govt.nz
Appendices
• Appendix A: General References on Quality
• Appendix B: References on Measurement
• Appendix C: References on Spread
• Appendix D: References on Rare Events
• Appendix E: Basic “Sadistical” Principles
Appendix A
General References on Quality

• The Improvement Guide: A Practical Approach to Enhancing Organizational Performance. G. Langley, K. Nolan, T. Nolan, C. Norman, L. Provost. Jossey-Bass Publishers, San Francisco, 1996.
• Quality Improvement Through Planned Experimentation. 2nd edition. R. Moen, T. Nolan, L. Provost. McGraw-Hill, NY, 1998.
• The Improvement Handbook. Associates in Process Improvement. Austin, TX, January, 2005.
• "A Primer on Leading the Improvement of Systems," Don M. Berwick, BMJ, 312: pp. 619-622, 1996.
• "Accelerating the Pace of Improvement - An Interview with Thomas Nolan," Journal of Quality Improvement, Volume 23, No. 4, The Joint Commission, April, 1997.
Appendix B
References on Measurement

• Brook, R. et al. "Health System Reform and Quality." Journal of the American Medical Association 276, no. 6 (1996): 476-480.
• Carey, R. and Lloyd, R. Measuring Quality Improvement in Healthcare: A Guide to Statistical Process Control Applications. ASQ Press, Milwaukee, WI, 2001.
• Langley, G. et al. The Improvement Guide. Jossey-Bass Publishers, San Francisco, 1996.
• Lloyd, R. Quality Health Care: A Guide to Developing and Using Indicators. Jones and Bartlett Publishers, Sudbury, MA, 2004.
• Nelson, E. et al. "Report Cards or Instrument Panels: Who Needs What?" Journal of Quality Improvement, Volume 21, Number 4, April, 1995.
• Solberg, L. et al. "The Three Faces of Performance Improvement: Improvement, Accountability and Research." Journal of Quality Improvement 23, no. 3 (1997): 135-147.
Appendix C
References on Spread

• Gladwell, M. The Tipping Point. Boston: Little, Brown and Company, 2000.
• Kreitner, R. and Kinicki, A. Organizational Behavior (2nd ed.). Homewood, IL: Irwin, 1978.
• Lomas, J., Enkin, M., Anderson, G. "Opinion Leaders vs Audit and Feedback to Implement Practice Guidelines." JAMA, Vol. 265(17); May 1, 1991, pp. 2202-2207.
• Myers, D.G. Social Psychology (3rd ed.). New York: McGraw-Hill, 1990.
• Prochaska, J., Norcross, J., DiClemente, C. "In Search of How People Change." American Psychologist, September, 1992.
• Rogers, E. Diffusion of Innovations. New York: The Free Press, 1995.
• Wenger, E. Communities of Practice. Cambridge, UK: Cambridge University Press, 1998.
Appendix D
References for Rare Events Control Charts

• Benneyan, JC, Lloyd, RC and Plsek, PE. "Statistical process control as a tool for research and healthcare improvement." Quality and Safety in Health Care 12:458-464.
• Benneyan, JC and Kaminsky, FC (1994). "The g and h Control Charts: Modeling Discrete Data in SPC." ASQC Annual Quality Congress Transactions, 32-42.
• Jackson, JE (1972). "All Count Distributions Are Not Alike." Journal of Quality Technology 4(2):86-92.
• Kaminsky, FC, Benneyan, JC, Davis, RB, and Burke, RJ (1992). "Statistical Control Charts Based on a Geometric Distribution." Journal of Quality Technology 24(2):63-69.
• Nelson, LS (1994). "A Control Chart for Parts-Per-Million Nonconforming Items." Journal of Quality Technology 26(3):239-240.
Appendix E
A few basic "Sadistical" Principles

Statistics related to depicting variation:

The sum of the deviations (xi − x̄) of a set of observations about their mean is equal to zero:
Σ(xi − x̄) = 0

The average deviation (AD) is obtained by adding the absolute values of the deviations of the individual values from their mean and dividing by n:
AD = Σ|xi − x̄| / n

The sample variance (s²) is the average of the squares of the deviations of the individual values from their mean:
s² = Σ(xi − x̄)² / (n − 1)

Which finally leads us to our good old friend, the standard deviation, the positive square root of the variance. (See the next page for this fun formula!)
Appendix E (continued)
What is the Standard Deviation?

sd = √[ Σ(Xi − X̄)² / (n − 1) ]

The Standard Deviation (sd) is created by taking the deviation of each individual data point (Xi) from the mean (X̄), squaring each difference, summing the results, dividing by n − 1, and then taking the square root.
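The formulas in this appendix translate directly into code. A small sketch (the function name is illustrative); note it uses the n − 1 divisor for the sample variance, matching the formula above.

```python
def spread_stats(xs):
    """Average deviation, sample variance, and standard deviation,
    following the formulas in Appendix E."""
    n = len(xs)
    mean = sum(xs) / n
    # The sum of deviations about the mean is always zero, so the
    # average deviation uses absolute values instead
    ad = sum(abs(x - mean) for x in xs) / n          # average deviation
    s2 = sum((x - mean) ** 2 for x in xs) / (n - 1)  # sample variance
    return ad, s2, s2 ** 0.5                          # sd = sqrt(variance)

ad, s2, sd = spread_stats([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0])
```

For this example data the mean is 5.0, so the average deviation is 1.5 and the sample variance is 32/7.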