
So many questions, so much data, so little time
A Journey with Surgical Outcome Data
Alison M. Knight, P.E.
IIE 2014 National Conference
Agenda
• Tutorial on surgical quality databases
• Background information on the project
• Strategy and process for report creation
• Example report and the metrics chosen
Quick Tutorial in Surgical Quality Databases
The dirty secrets THEY don’t tell you
Lesson #1: Quality databases are NOT created equal
• Clinically abstracted vs. administrative data
• Outcome definition
• Database attributes
• Data audit process
• Sample vs. population
Lesson #2: Use all databases, but choose metrics carefully
• Example #1: For mortality and length of stay, use administrative data and all cases
• Example #2: For surgical site infection and urinary tract infection, use clinically abstracted data and a sample of cases
Lesson #3: Data denial will happen. Be ready!
Five Stages of Data Denial*
1. The data is wrong
2. The data is old
3. We have changed already
4. This is a different place
5. We tried that already
*Borrowed from a presentation by Dr. Brian Postl, CEO of the Winnipeg Regional Health Authority
Let’s set the stage
The surgical practice is frustrated
New costs
• Each surgical specialty pays a portion
No perceived value
• Most surgical specialties believed this quality data provided little value
Data is not trustworthy
• Surgical specialties didn’t believe the data
Some background facts…
What is NSQIP?
• American College of Surgeons (ACS) National Surgical Quality Improvement Program
• History
• Started in the Veterans Administration (1994)*
• Open to other hospitals (2004)*
• Mayo Clinic joined in 2006
• Clinically abstracted random sample of cases
*ACS NSQIP website: http://site.acsnsqip.org/program-specifics/nsqip-history/
What types of information are included and how are they collected?
• Uniform operational definitions
• 135 patient variables
• 30-day post-op assessment
• Audit process
Data Delivery to the Practice
• Twice a year the practice chair received a “canned report” from NSQIP
• Each specialty had risk-adjusted odds ratios
• Ranged in granularity depending on specialty
• Highlighted the specialty’s decile rankings (1-10)
• Decile*: divides the distribution of the variable into ten groups having equal frequencies (see the sketch below).
*Source: dictionary.com
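To make the decile idea concrete, here is a minimal sketch (not from the original slides) of how decile rankings could be assigned to a set of risk-adjusted odds ratios. The use of Python/pandas and the hospital values shown are assumptions for illustration only.

```python
# Minimal sketch, not from the original deck: assigning decile rankings (1-10)
# based on risk-adjusted odds ratios. The values and the use of pandas are
# assumptions for illustration.
import pandas as pd

# Hypothetical risk-adjusted odds ratios for 20 participating hospitals
odds_ratios = pd.Series(
    [0.82, 0.95, 1.10, 1.30, 0.78, 1.05, 0.99, 1.21, 0.88, 1.15,
     0.91, 1.02, 1.40, 0.85, 1.08, 0.97, 1.25, 0.80, 1.12, 1.00],
    name="risk_adjusted_or",
)

# qcut divides the distribution into ten groups with equal frequencies;
# labels 1-10 give the decile ranking (1 = lowest odds ratios).
deciles = pd.qcut(odds_ratios, q=10, labels=range(1, 11))
print(pd.DataFrame({"odds_ratio": odds_ratios, "decile": deciles}))
```

Whether decile 1 or decile 10 is the favorable end depends on how the program orients the ranking, so the sketch only shows the mechanics.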
The Project
Project Goals
Educate
• Educate providers on database facts
• Work through the stages of denial
Provide
• Provide data at a more detailed level
• Point providers to “hot spots”
Change
• Aid specialties in developing data-driven quality projects
Identified the Challenges
NSQIP costs money, value not recognized
“Why should I pay for NSQIP if I am not using it?”
Lack of understanding of data collection process:
“Documentation errors are probably driving this data”
Decile rankings are the focus
“These rankings can’t possibly be right, our patients are sicker and we do more complicated procedures”
Specialty practices were recognized only for negative quality
“Leadership just doesn’t understand our practice”
The Plan to Create a Meaningful Dashboard
Goals:
• Practice input
• Use in research
• Dispel myths
• Make data actionable
• Focus on good & bad
• Address concerns
• De-emphasize rankings
The Process of Development: A PDSA-Like Cycle
Stages: Create, Review, Present, Revise
• One specialty at a time
• Analysis done by hand
• Review the report for completeness
• Obtain feedback from each specialty for improvements
A Sample Report
This data has been created for example purposes only and does not reflect real Mayo Clinic data.
Components of the Report
• Generic information from NSQIP
• Data collection and risk-adjustment information
• Specialty-specific information
• Without risk adjustment
• Example research articles using NSQIP data
• Recommendations to practice
Demographic Data Example*
*Data is fictitious and does not reflect Mayo Clinic performance.
Volume Analysis by CPT Code Example*
*Data is fictitious and does not reflect Mayo Clinic performance.
Occurrences: Raw Rates Example*
*Data is fictitious and does not reflect Mayo Clinic performance.
Case Volume versus Occurrence Volume*
*Data is fictitious and does not reflect Mayo Clinic performance.
Rankings Example*
*Data is fictitious and does not reflect Mayo Clinic performance.
Recommendations and Next Steps
• Based on the data, Outcome #1 and Outcome #3 should be addressed.
• CPT Code Category #2 shows opportunity for improvement
• Additional detailed information by request
• All case data pulled for specific CPT code categories and/or cases with specific outcomes
• Logistic regression analysis to identify factors that contribute to certain outcomes (see the sketch below)
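As a hedged illustration of that last bullet, the sketch below shows one way a logistic regression could be fit to case-level data. The file name, column names, and the choice of statsmodels are assumptions for illustration, not part of the original report.

```python
# Minimal sketch, with assumed file and column names (not from the original
# report): logistic regression to identify factors associated with an outcome
# such as surgical site infection in case-level data.
import numpy as np
import pandas as pd
import statsmodels.api as sm

cases = pd.read_csv("cases.csv")            # hypothetical case-level extract

y = cases["ssi"]                            # 1 = occurrence, 0 = no occurrence
X = cases[["age", "asa_class", "operative_time_min", "emergency_case"]]
X = sm.add_constant(X)                      # add intercept term

model = sm.Logit(y, X).fit()
print(model.summary())                      # coefficients and p-values
print(np.exp(model.params).round(2))        # odds ratio per predictor
```

Predictors with odds ratios well above 1 and small p-values would be natural candidates for the “hot spot” discussions with the practice.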
Current State and Next Steps
Where are we now?
• Stable report that practices find helpful
• Worked through 5 stages of data denial
• Creating an automated process to provide information in a timely manner
• Creating processes for follow-up and practice accountability
Summary
• All databases are not created equal
• Use the correct type of database depending on the information needed
• Address data denial
• De-emphasize rankings
• Create a report that magnifies the “hot spots” for practices to launch quality projects
Questions & Discussion